Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
NASA Astrophysics Data System (ADS)
Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-12-01
In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel
2015-06-15
We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = −iħ S†(dS/dE), where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
NASA Astrophysics Data System (ADS)
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
Chaos and random matrices in supersymmetric SYK
NASA Astrophysics Data System (ADS)
Hunter-Jones, Nicholas; Liu, Junyu
2018-05-01
We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.
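The spectral form factor used in this kind of analysis can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: it draws Wishart-Laguerre (unitary-ensemble-type) matrices W = XX† and averages |Z(t)|² with Z(t) = Σₙ exp(−iEₙt) over the ensemble. The matrix size, sample count, and time grid are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def wishart_eigenvalues(n, rng):
    """Eigenvalues of W = X X^dagger for an n x n complex Gaussian X
    (Wishart-Laguerre unitary ensemble)."""
    x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.linalg.eigvalsh(x @ x.conj().T)

def spectral_form_factor(eig_samples, times):
    """Ensemble average of |Z(t)|^2 with Z(t) = sum_n exp(-i E_n t)."""
    sff = np.zeros_like(times)
    for eigs in eig_samples:
        z = np.exp(-1j * np.outer(times, eigs)).sum(axis=1)
        sff += np.abs(z) ** 2
    return sff / len(eig_samples)

times = np.linspace(0.01, 5.0, 50)
samples = [wishart_eigenvalues(20, rng) for _ in range(100)]
sff = spectral_form_factor(samples, times)
```

Running the same routine on Gaussian-ensemble eigenvalues would allow a side-by-side comparison; per the abstract, the Wishart-Laguerre case lacks the dip regime seen for the Gaussian ensembles.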
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix, X, is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. As an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
NASA Astrophysics Data System (ADS)
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to the advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that totally generates all the random entries of measurement matrix, our scheme owns efficiency superiority while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has a good security performance and has robustness against noise and occlusion.
Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel, E-mail: marcel.novaes@gmail.com
2015-10-15
We show that the semiclassical approach to chaotic quantum transport in the presence of time-reversal symmetry can be described by a matrix model. In other words, we construct a matrix integral whose perturbative expansion satisfies the semiclassical diagrammatic rules for the calculation of transport statistics. One of the virtues of this approach is that it leads very naturally to the semiclassical derivation of universal predictions from random matrix theory.
Data-driven probability concentration and sampling on manifold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2016-09-15
A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) an MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity, as well as to the intrinsic dimension of the data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
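Step (iii), the diffusion-maps construction, admits a compact sketch. The following is a simplified, assumption-laden version (a Gaussian affinity with a single bandwidth ε, no density normalization, and synthetic data), not the paper's method: it forms the transition matrix of a random walk on the dataset and uses its leading non-trivial eigenvectors as reduced coordinates.

```python
import numpy as np

def diffusion_map(points, epsilon, n_coords=2):
    """Minimal diffusion-maps sketch: Gaussian affinity, Markov normalization,
    leading non-trivial eigenvectors as low-dimensional coordinates."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / epsilon)
    p = k / k.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    # the first eigenvector is constant (eigenvalue 1); skip it
    return vals.real[order[1:n_coords + 1]], vecs.real[:, order[1:n_coords + 1]]

# illustrative dataset concentrated on a subset (a noisy circle) of the plane
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 80)
circle = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.normal(size=(80, 2))
vals, coords = diffusion_map(circle, epsilon=0.5)
```

The retained eigenvectors play the role of the reduced-order basis in step (iv); the bandwidth ε would need problem-specific tuning in practice.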
Semistochastic approach to many electron systems
NASA Astrophysics Data System (ADS)
Grossjean, M. K.; Grossjean, M. F.; Schulten, K.; Tavan, P.
1992-08-01
A Pariser-Parr-Pople (PPP) Hamiltonian of the 8π-electron system of the molecule octatetraene, represented in a configuration-interaction (CI) basis, is analyzed with respect to the statistical properties of its matrix elements. Based on this analysis, we develop an effective Hamiltonian which represents virtual excitations by a Gaussian orthogonal ensemble (GOE). We also examine numerical approaches which replace the original Hamiltonian by a semistochastically generated CI matrix. In that CI matrix, the matrix elements of high-energy excitations are chosen randomly according to distributions reflecting the statistics of the original CI matrix.
NASA Astrophysics Data System (ADS)
Olekhno, N. A.; Beltukov, Y. M.
2018-05-01
Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of random matrix theory. We show that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show good agreement with the results of numerical simulations over a wide range of metal filling fractions.
A note on variance estimation in random effects meta-regression.
Sidik, Kurex; Jonkman, Jeffrey N
2005-01-01
For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
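The "working covariance" idea behind the robust estimator can be sketched for a generic weighted regression. The code below is a schematic sandwich estimator, not the meta-regression estimator of this note (which works with estimated effect-measure variances) and not the Knapp-Hartung estimator; all data and weights are simulated.

```python
import numpy as np

def wls_with_robust_variance(X, y, w):
    """Weighted least squares with a sandwich (robust) variance estimate.
    The weight matrix W is treated as a *working* covariance: the middle
    'meat' term uses squared residuals instead of trusting W."""
    W = np.diag(w)
    bread = np.linalg.inv(X.T @ W @ X)
    beta = bread @ X.T @ W @ y
    r = y - X @ beta
    meat = X.T @ W @ np.diag(r ** 2) @ W @ X
    model_var = bread                      # variance if W were exactly right
    robust_var = bread @ meat @ bread      # sandwich estimator
    return beta, model_var, robust_var

rng = np.random.default_rng(2)
X = np.c_[np.ones(40), rng.normal(size=40)]            # intercept + one moderator
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=40)
w = rng.uniform(0.5, 2.0, size=40)                     # hypothetical estimated weights
beta, mv, rv = wls_with_robust_variance(X, y, w)
```

Comparing the diagonals of `mv` and `rv` mimics the note's comparison between model-based and robust standard errors when the weights are misspecified.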
The feasibility and stability of large complex biological networks: a random matrix approach.
Stone, Lewi
2018-05-29
In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law", since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the "circular law" does not apply for ecological systems in which density-dependence operates (i.e., where a species' growth is determined by its density). Instead, one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria, D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. We also find that the degree of stability, or resilience, of a system depends on the minimum equilibrium population.
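The central claim, that a stable random A stays stable after multiplication by a positive diagonal D, can be checked numerically. The sketch below uses an illustrative construction (self-regulation of −1 on the diagonal and an interaction strength chosen so the circular-law radius stays well below 1), not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)

def is_locally_stable(M):
    """Local stability: all eigenvalues strictly in the left half-plane."""
    return bool(np.all(np.linalg.eigvals(M).real < 0))

n = 50
sigma = 0.3 / np.sqrt(n)                 # circular-law radius 0.3, well inside 1
A = sigma * rng.normal(size=(n, n))
np.fill_diagonal(A, -1.0)                # self-regulation centers the spectrum at -1
D = np.diag(rng.uniform(0.1, 2.0, n))    # feasible: all equilibrium abundances positive

stable_A = is_locally_stable(A)
stable_S = is_locally_stable(D @ A)      # community matrix S = DA
```

With these parameters the symmetric part of A is negative definite, so S = DA inherits stability for any positive diagonal D, consistent with the abstract's result for feasible systems.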
A random matrix approach to credit risk.
Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco
2014-06-01
Radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes via transradial versus transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial, or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, thorax, and eye level. Consequently, each operator is equipped with three sets (transfemoral, right transradial, or left transradial access) of three different dosimeters (wrist, thorax, and eye). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators with the right versus the left radial approach. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. The RAD-MATRIX study will likely help clarify the radiation issue for interventional cardiologists comparing transradial and transfemoral access in the setting of acute coronary syndromes. Copyright © 2014 Elsevier Inc. All rights reserved.
Correlation and volatility in an Indian stock market: A random matrix approach
NASA Astrophysics Data System (ADS)
Kulkarni, Varsha; Deo, Nivedita
2007-11-01
We examine the volatility of an Indian stock market in terms of correlation of stocks and quantify the volatility using the random matrix approach. First, we discuss trends observed in the pattern of stock prices in the Bombay Stock Exchange for the three-year period 2000-2002. Random matrix analysis is then applied to study the relationship between the coupling of stocks and volatility. The study uses daily returns of 70 stocks for successive time windows of length 85 days for the year 2001. We compare the properties of the matrix C of correlations between price fluctuations in time regimes characterized by different volatilities. Our analyses reveal that (i) the largest (deviating) eigenvalue of C correlates highly with the volatility of the index, (ii) there is a shift in the distribution of the components of the eigenvector corresponding to the largest eigenvalue across regimes of different volatilities, (iii) the inverse participation ratio for this eigenvector anti-correlates significantly with the market fluctuations and finally, (iv) this eigenvector of C can be used to set up a Correlation Index, CI, whose temporal evolution is significantly correlated with the volatility of the overall market index.
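The first step of such an analysis, separating "deviating" eigenvalues of the correlation matrix C from the noise bulk, is usually done against the Marchenko-Pastur null and can be sketched as follows. The snippet uses simulated i.i.d. returns with the abstract's dimensions (70 stocks, 85-day windows); real daily returns would replace the synthetic ones.

```python
import numpy as np

rng = np.random.default_rng(4)

# Null model: N stocks, T days of i.i.d. standardized noise
N, T = 70, 85
returns = rng.normal(size=(N, T))
returns -= returns.mean(axis=1, keepdims=True)
returns /= returns.std(axis=1, keepdims=True)

C = returns @ returns.T / T              # empirical correlation matrix
eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur bulk edges for pure noise with Q = T/N
Q = T / N
lam_min = (1 - np.sqrt(1 / Q)) ** 2
lam_max = (1 + np.sqrt(1 / Q)) ** 2
deviating = eigs[eigs > lam_max]         # candidates for genuine structure
```

For real data, the eigenvalues above `lam_max` (the market mode and group modes) are the ones retained for the volatility and Correlation Index analysis.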
Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers
NASA Astrophysics Data System (ADS)
Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos
2017-01-01
We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γCPA and energy ECPA, for which a CPA occurs, are expressed in terms of the eigenmodes of the isolated cavity—thus carrying over the information about the chaotic nature of the target—and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.
Constructing acoustic timefronts using random matrix theory.
Hegewisch, Katherine C; Tomsovic, Steven
2013-10-01
In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history.
Bi-dimensional null model analysis of presence-absence binary matrices.
Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J
2018-01-01
Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially in respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. 
We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
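The Tuning Peg algorithm itself is not reproduced here, but its starting point — the classic swap procedure, i.e. the fully constrained corner of the null space where every row and column total is preserved exactly — can be sketched as follows (matrix size, fill, and swap count are illustrative).

```python
import numpy as np

def swap_randomize(matrix, n_swaps=5000, rng=None):
    """Classic 2x2 checkerboard swaps on a binary matrix: each accepted swap
    flips a [[a, b], [b, a]] submatrix (a != b), so all row and column
    marginal totals are preserved exactly."""
    rng = np.random.default_rng(rng)
    m = matrix.copy()
    rows, cols = m.shape
    for _ in range(n_swaps):
        r = rng.choice(rows, 2, replace=False)
        c = rng.choice(cols, 2, replace=False)
        sub = m[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = sub[::-1]   # flip the checkerboard
    return m

rng = np.random.default_rng(5)
original = (rng.random((20, 30)) < 0.3).astype(int)   # simulated presence/absence matrix
randomized = swap_randomize(original, rng=6)
```

The Tuning Peg generalizes this by allowing a controlled, tunable discrepancy between observed and randomized marginals instead of the exact preservation enforced here.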
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by direct analysis of the autocorrelation function. In order to draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated; in other words, they need to be compared with some criterion that provides a basis for the most suitable and applicable conclusions. In the present study, the criterion is chosen to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity of stock-market returns, a remarkable agreement with the fGn is achieved.
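Building the autocorrelation matrix itself is straightforward; a minimal sketch (using the biased sample autocorrelation, and white noise as a stand-in for the fGn benchmark with H = 1/2) might look like:

```python
import numpy as np

def autocorrelation_matrix(x, n_lags):
    """Toeplitz matrix A[i, j] = c(|i - j|) built from the biased sample
    autocorrelation function of the standardized series x."""
    x = (x - x.mean()) / x.std()
    n = len(x)
    c = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n_lags)])
    idx = np.abs(np.subtract.outer(np.arange(n_lags), np.arange(n_lags)))
    return c[idx]

rng = np.random.default_rng(7)
series = rng.normal(size=2000)           # white noise = fGn with H = 1/2
A = autocorrelation_matrix(series, n_lags=50)
eigs = np.linalg.eigvalsh(A)             # spectrum to compare against the criterion
```

The RMT-style analysis then compares the eigenvalue statistics of `A` for empirical data (e.g. stock returns) against those obtained for the fGn criterion.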
Finite-time stability of neutral-type neural networks with random time-varying delays
NASA Astrophysics Data System (ADS)
Ali, M. Syed; Saravanan, S.; Zhu, Quanxin
2017-11-01
This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The randomly time-varying delays are characterised by a Bernoulli stochastic variable. The results can be extended to analysis and design for neutral-type neural networks with random time-varying delays. We construct a suitable Lyapunov-Krasovskii functional and establish a set of sufficient conditions, in the form of linear matrix inequalities, that guarantee the finite-time stability of the system concerned. By employing Jensen's inequality, the free-weighting matrix method, and Wirtinger's double integral inequality, the proposed conditions are derived, and two numerical examples are presented to demonstrate the effectiveness of the developed techniques.
Conditional random matrix ensembles and the stability of dynamical systems
NASA Astrophysics Data System (ADS)
Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.
2015-08-01
Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
Group identification in Indonesian stock market
NASA Astrophysics Data System (ADS)
Nurriyadi Suparno, Ervano; Jo, Sung Kyun; Lim, Kyuseong; Purqon, Acep; Kim, Soo Yong
2016-08-01
The characteristic of the Indonesian stock market is interesting, especially because it represents developing countries. We investigate its dynamics and structures by using Random Matrix Theory (RMT). Here, we analyze the cross-correlation of the fluctuations of the daily closing prices of stocks from the Indonesian Stock Exchange (IDX) between January 1, 2007, and October 28, 2014. The eigenvalue distribution of the correlation matrix consists of noise, which is filtered out using the random matrix as a control. The bulk of the eigenvalue distribution conforms to the random matrix, allowing the separation of random noise from the original data, which is the deviating eigenvalues. From the deviating eigenvalues and the corresponding eigenvectors, we identify the intrinsic normal modes of the system and interpret their meaning based on qualitative and quantitative approaches. The results show that the largest eigenvector represents the market-wide effect, which has a predominantly common influence on all stocks. The other eigenvectors represent highly correlated groups within the system. Furthermore, identification of the largest components of the eigenvectors reveals the sector or background of the correlated groups. Interestingly, the results show that there are mainly two clusters within IDX: natural-resource and non-natural-resource companies. We then decompose the correlation matrix to investigate the contribution of the correlated groups to the total correlation, and we find that IDX is still driven mainly by the market-wide effect.
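A standard diagnostic in this kind of eigenvector analysis is the inverse participation ratio (IPR), which distinguishes a delocalized market-wide mode from a mode concentrated on a few stocks. It can be illustrated in a few lines; the two test vectors below are idealized extremes, not IDX data.

```python
import numpy as np

def inverse_participation_ratio(v):
    """IPR = sum_i v_i^4 for a unit eigenvector: ~1/N for a mode spread
    evenly over all N stocks, ~1 for a mode localized on a single stock."""
    v = v / np.linalg.norm(v)
    return np.sum(v ** 4)

N = 100
market_mode = np.ones(N)                 # all stocks move together
localized_mode = np.eye(N)[0]            # a single-stock mode
ipr_market = inverse_participation_ratio(market_mode)
ipr_localized = inverse_participation_ratio(localized_mode)
```

Applied to the deviating eigenvectors of a real correlation matrix, intermediate IPR values flag the correlated groups described in the abstract.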
NASA Astrophysics Data System (ADS)
Méndez-Bermúdez, J. A.; Gopar, Victor A.; Varga, Imre
2010-09-01
We numerically study the scattering and transport statistical properties of the one-dimensional Anderson model at the metal-insulator transition described by the power-law banded random matrix (PBRM) model at criticality. Within a scattering approach to electronic transport, we concentrate on the case of a small number of single-channel attached leads. We observe a smooth crossover from localized to delocalized behavior in the average scattering-matrix elements, the conductance probability distribution, the variance of the conductance, and the shot noise power by varying b (the effective bandwidth of the PBRM model) from small (b ≪ 1) to large (b > 1) values. We contrast our results with analytic random matrix theory predictions, which are expected to be recovered in the limit b → ∞. We also compare our results for the PBRM model with those for the three-dimensional (3D) Anderson model at criticality, finding that the PBRM model with b ∈ [0.2, 0.4] reproduces well the scattering and transport properties of the 3D Anderson model.
Significance Testing in Confirmatory Factor Analytic Models.
ERIC Educational Resources Information Center
Khattab, Ali-Maher; Hocevar, Dennis
Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve its efficiency. However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of the proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
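The block measurement matrix built from the Whittaker-Shannon interpolation formula can be sketched as follows. This is a schematic construction with made-up grid sizes and sampling offsets, not the prototype's configuration: each row interpolates a signal known on a fine equivalent-time grid to one physically sampled instant.

```python
import numpy as np

def sinc_measurement_matrix(sample_times, grid, T):
    """Whittaker-Shannon interpolation: row i maps a signal known on a
    uniform grid (spacing T) to the (possibly off-grid) instant sample_times[i]."""
    return np.sinc((sample_times[:, None] - grid[None, :]) / T)

T = 1.0                                   # equivalent-time (fine) grid spacing
grid = np.arange(64) * T

# aligned sampling: rows collapse to unit vectors, i.e. plain decimation
Phi_aligned = sinc_measurement_matrix(grid[::8], grid, T)

# one RES-style run: consecutive coarse samples at a random sub-sample offset
rng = np.random.default_rng(8)
offset = rng.uniform(0.0, T)
sample_times = 16 * T + offset + np.arange(0, 32, 8) * T
Phi = sinc_measurement_matrix(sample_times, grid, T)

# a low-frequency tone on the grid is interpolated to the off-grid instants
f = 0.1
measured = Phi @ np.cos(2 * np.pi * f * grid)
expected = np.cos(2 * np.pi * f * sample_times)
```

Stacking one such block per acquisition run yields the equivalent measurement matrix used in the joint reconstruction; the residual interpolation error here comes only from truncating the infinite sinc series to a finite grid.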
NASA Astrophysics Data System (ADS)
Duan, Xueyang
The objective of this dissertation is to develop forward scattering models for active microwave remote sensing of natural features represented by layered media with rough interfaces. In particular, soil profiles are considered, for which a model of electromagnetic scattering from multilayer rough surfaces with or without buried random media is constructed. Starting from a single rough surface, radar scattering is modeled using the stabilized extended boundary condition method (SEBCM). This method solves the long-standing instability issue of the classical EBCM, and gives three-dimensional full wave solutions over large ranges of surface roughnesses with higher computational efficiency than pure numerical solutions, e.g., method of moments (MoM). Based on this single surface solution, multilayer rough surface scattering is modeled using the scattering matrix approach and the model is used for a comprehensive sensitivity analysis of the total ground scattering as a function of layer separation, subsurface statistics, and sublayer dielectric properties. The buried inhomogeneities such as rocks and vegetation roots are considered for the first time in the forward scattering model. Radar scattering from buried random media is modeled by the aggregate transition matrix using either the recursive transition matrix approach for spherical or short-length cylindrical scatterers, or the generalized iterative extended boundary condition method we developed for long cylinders or root-like cylindrical clusters. These approaches take the field interactions among scatterers into account with high computational efficiency. The aggregate transition matrix is transformed to a scattering matrix for the full solution to the layered-medium problem. This step is based on the near-to-far field transformation of the numerical plane wave expansion of the spherical harmonics and the multipole expansion of plane waves. 
This transformation consolidates the volume scattering from the buried random medium with the scattering from the layered structure in general. Combined with scattering from multilayer rough surfaces, the scattering contributions from subsurfaces and vegetation roots can then be simulated. Solutions of both the rough surface scattering and the random media scattering are validated numerically, experimentally, or both. The experimental validations were carried out using a laboratory-based transmit-receive system for scattering from random media and a new bistatic tower-mounted radar system for field-based surface scattering measurements.
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance testing. Threshold bounds for the perturbation due to finite-precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimensions of the matrix, the noise variance, and a predefined level of statistical significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
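A minimal sketch of the idea, with a threshold of the form σ(√m + √n) borrowed from standard random-matrix bounds on the largest singular value of a noise matrix, rather than the paper's exact confidence regions; the safety factor is an invented illustration parameter.

```python
import numpy as np

def effective_rank(A, noise_std, safety=2.0):
    """Count singular values above a threshold set by the expected
    largest singular value of an m x n i.i.d. noise matrix,
    approximately noise_std * (sqrt(m) + sqrt(n))."""
    m, n = A.shape
    tau = safety * noise_std * (np.sqrt(m) + np.sqrt(n))
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tau))

rng = np.random.default_rng(1)
m = n = 50
U, _ = np.linalg.qr(rng.standard_normal((m, 3)))
V, _ = np.linalg.qr(rng.standard_normal((n, 3)))
clean = U @ np.diag([10.0, 8.0, 6.0]) @ V.T        # true rank 3
noisy = clean + 0.1 * rng.standard_normal((m, n))  # perturbed observation

r = effective_rank(noisy, noise_std=0.1)
```

With the signal singular values well above the noise floor, the count recovers the true rank.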
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, C; Chen, L; Jia, X
2016-06-15
Purpose: Reducing x-ray exposure and speeding up data acquisition have motivated studies on projection data undersampling. An important question is, for a given undersampling ratio, what the optimal undersampling approach is. In this study, we propose a new undersampling scheme: random-ray undersampling. We mathematically analyze its projection matrix properties and demonstrate its advantages. We also propose a new reconstruction method that simultaneously performs CT image reconstruction and projection domain data restoration. Methods: By representing the projection operator under the basis of singular vectors of the full projection operator, matrix representations for an undersampling case can be generated and numerical singular value decomposition can be performed. We compared matrix properties among three undersampling approaches: regular-view undersampling, regular-ray undersampling, and the proposed random-ray undersampling. To accomplish CT reconstruction for random undersampling, we developed a novel method that iteratively performs CT reconstruction and missing projection data restoration via regularization approaches. Results: For a given undersampling ratio, random-ray undersampling preserved the mathematical properties of the full projection operator better than the other two approaches. This translates into reconstructing CT images with lower errors. Different types of image artifacts were observed depending on the undersampling strategy; these were ascribed to the unique singular vectors of the sampling operators in the image domain. We tested the proposed reconstruction algorithm on a FORBILD phantom with only 30% of the projection data randomly acquired. The reconstructed image error was reduced from 9.4% with a total variation (TV) method to 7.6% with the proposed method. Conclusion: The proposed random-ray undersampling is mathematically advantageous over other typical undersampling approaches. It may permit better image reconstruction at the same undersampling ratio. The novel algorithm suited to this random-ray undersampling was able to reconstruct high-quality images.
Quantum simulation of an ultrathin body field-effect transistor with channel imperfections
NASA Astrophysics Data System (ADS)
Vyurkov, V.; Semenikhin, I.; Filippov, S.; Orlikovsky, A.
2012-04-01
An efficient program for the all-quantum simulation of nanometer field-effect transistors is elaborated. The model is based on the Landauer-Buttiker approach. Our calculation of transmission coefficients employs a transfer-matrix technique involving arbitrary-precision (multiprecision) arithmetic to cope with evanescent modes. Modified in this way, the transfer-matrix technique turns out to be much faster in practical simulations than the scattering-matrix technique. Results of the simulation demonstrate the impact of realistic channel imperfections (random charged centers and wall roughness) on transistor characteristics. The Landauer-Buttiker approach is extended to incorporate calculation of the noise at an arbitrary temperature. We also validate the ballistic Landauer-Buttiker approach for the usual situation in which heavily doped contacts must be included in the simulation region.
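The transmission coefficients that enter the Landauer-Buttiker conductance can be sanity-checked on the textbook 1D potential step (units with ħ = m = 1); this elementary special case is only an orientation aid, not the paper's multiprecision transfer-matrix scheme.

```python
import numpy as np

def step_transmission(E, V):
    """Transmission and reflection probabilities across a 1D potential
    step of height V for a particle of energy E > V (hbar = m = 1)."""
    k1 = np.sqrt(2.0 * E)        # wavenumber on the incident side
    k2 = np.sqrt(2.0 * (E - V))  # wavenumber past the step
    t = 2.0 * k1 / (k1 + k2)     # amplitude from matching psi, psi' at x = 0
    T = (k2 / k1) * abs(t) ** 2  # flux-normalized transmission
    R = ((k1 - k2) / (k1 + k2)) ** 2
    return T, R

T, R = step_transmission(E=2.0, V=1.0)
```

Flux conservation (T + R = 1) is the basic invariant that any transfer-matrix implementation, multiprecision or not, must preserve.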
Money creation process in a random redistribution model
NASA Astrophysics Data System (ADS)
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. Both approaches lead to the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are confirmed by computer simulations.
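A minimal agent-based sketch of a random exchange model with a debt limit (all parameters invented for illustration); total money is conserved at every step because each transfer is balanced.

```python
import numpy as np

def random_exchange(n_agents=100, debt_limit=2, steps=20000, seed=0):
    """Pairwise random exchange of one money unit, with debt allowed
    down to -debt_limit. Total money is conserved at every step."""
    rng = np.random.default_rng(seed)
    money = np.ones(n_agents, dtype=int)
    for _ in range(steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        if money[i] > -debt_limit:  # payer may go into debt up to the limit
            money[i] -= 1
            money[j] += 1
    return money

money = random_exchange()
```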
Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory
NASA Astrophysics Data System (ADS)
Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick
2018-05-01
For an N × N Haar distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L^1 phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
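The counting field can be explored numerically by sampling Haar unitaries (QR of a complex Gaussian matrix with the standard phase correction) and counting eigenvalues in an arc; on average the count is N times the arc fraction. Sizes and the arc are invented for illustration.

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample from the Haar measure on U(n) via QR with phase fix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # scale columns so the law is Haar

def arc_count(u, theta):
    """Number of eigenvalues e^{i phi} with |phi| < theta."""
    phi = np.angle(np.linalg.eigvals(u))
    return int(np.sum(np.abs(phi) < theta))

rng = np.random.default_rng(2)
n, theta = 50, 0.2 * np.pi   # arc covers a fraction 0.2 of the circle
counts = [arc_count(haar_unitary(n, rng), theta) for _ in range(200)]
mean_count = np.mean(counts)
```

The small empirical variance around the mean 10 reflects the eigenvalue rigidity underlying the log-correlated structure of the field.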
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful as computing architectures become increasingly parallel. Its primary advantage is the near-locality of the random walks, which allows simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle-number and idempotency constraints through stabilization of a Feynman-Kac functional integral, as opposed to the extensive matrix operations of traditional approaches.
NASA Astrophysics Data System (ADS)
Warchoł, Piotr
2018-06-01
The public transportation system of Cuernavaca, Mexico, exhibits random matrix theory statistics. In particular, the fluctuation of times between the arrivals of buses at a given bus stop follows the Wigner surmise for the Gaussian unitary ensemble. To model this, we propose an agent-based approach in which each bus driver tries to optimize his arrival time at the next stop with respect to an estimated arrival time of his predecessor. We choose a particular form of the associated utility function and recover the appropriate distribution in numerical experiments for a certain value of the only parameter of the model. We then investigate whether this value of the parameter is otherwise distinguished within an information-theoretic approach and give numerical evidence that it is indeed associated with a minimum of averaged pairwise mutual information.
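The GUE Wigner surmise referenced above, p(s) = (32/π²) s² exp(−4s²/π), can be checked numerically to be a probability density with unit mean spacing:

```python
import numpy as np

def wigner_surmise_gue(s):
    """GUE Wigner surmise for the normalized nearest-neighbor spacing."""
    return (32.0 / np.pi ** 2) * s ** 2 * np.exp(-4.0 * s ** 2 / np.pi)

# Check normalization and unit mean by numerical quadrature on a fine grid
s = np.linspace(0.0, 10.0, 200001)
p = wigner_surmise_gue(s)
ds = s[1] - s[0]
total = float(np.sum(p) * ds)      # should be 1 (probability density)
mean = float(np.sum(s * p) * ds)   # should be 1 (unit mean spacing)
```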
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often modeled as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors can be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we represent cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg, we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependences of the intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to cometary observations.
A random matrix approach to language acquisition
NASA Astrophysics Data System (ADS)
Nicolaidis, A.; Kosmidis, Kosmas; Argyrakis, Panos
2009-12-01
Since language is tied to cognition, we expect linguistic structures to reflect patterns that we encounter in nature and that are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between the sounds and the referents of the perceived world. This mapping is represented by a matrix, and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach: the strength of the linguistic interaction β, which is considered a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is studied analytically using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N ~ exp(β). Thus a small increase in the genetically determined β may lead to an enormous lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity.
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with `low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
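A sketch of the multicollinear setup described above: a one-dimensional signal projected into many dimensions plus noise yields a correlation matrix with a single dominant eigenvalue (dimensions and noise level are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, p = 500, 50

# One-dimensional signal broadcast into p dimensions, plus independent noise
signal = rng.standard_normal(n_samples)
noise = 0.5 * rng.standard_normal((n_samples, p))
X = signal[:, None] + noise

# Eigenspectrum of the correlation (not covariance) matrix
corr = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
```

The trace of a correlation matrix equals p, so one eigenvalue near p·ρ leaves only a small bulk for the rest; increasing the noise spreads the bulk and can drive individual pairwise correlations low even though the cluster is genuinely one-dimensional.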
Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays
NASA Astrophysics Data System (ADS)
Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.
2014-12-01
Modern seismic networks record ground motion continuously all around the world with very broadband and high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods mainly drawn from random matrix theory to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can distinguish between signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
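A toy version of the detection idea: for incoherent noise the covariance eigenvalues stay spread out, while a coherent source concentrates energy in the top eigenvalue. The array size loosely echoes the 21-receiver network; signal model and threshold are invented.

```python
import numpy as np

def top_eig_ratio(X):
    """Largest eigenvalue of the sample covariance matrix, normalized by
    its trace: about 1/M for incoherent noise on M receivers, and much
    larger when a coherent source dominates the wavefield."""
    C = X @ X.conj().T / X.shape[1]
    w = np.linalg.eigvalsh(C)
    return w[-1] / w.sum()

rng = np.random.default_rng(4)
m, t = 21, 2000   # receivers and time samples

noise_only = rng.standard_normal((m, t))
# Same waveform on every receiver plus independent noise
coherent = np.outer(np.ones(m), rng.standard_normal(t)) \
    + 0.5 * rng.standard_normal((m, t))

r_noise = top_eig_ratio(noise_only)
r_event = top_eig_ratio(coherent)
```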
A time-series approach to dynamical systems from classical and quantum worlds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fossion, Ruben
2014-01-08
This contribution discusses some recent applications of time-series analysis in Random Matrix Theory (RMT), and applications of RMT in the statistical analysis of eigenspectra of correlation matrices of multivariate time series.
Sangiorgio, João Paulo Menck; Neves, Felipe Lucas da Silva; Rocha Dos Santos, Manuela; França-Grohmann, Isabela Lima; Casarin, Renato Corrêa Viana; Casati, Márcio Zaffalon; Santamaria, Mauro Pedrine; Sallum, Enilson Antonio
2017-12-01
Considering xenogeneic collagen matrix (CM) and enamel matrix derivative (EMD) characteristics, it is suggested that their combination could promote superior clinical outcomes in root coverage procedures. Thus, the aim of this parallel, double-masked, dual-center, randomized clinical trial is to evaluate clinical outcomes after treatment of localized gingival recession (GR) by a coronally advanced flap (CAF) combined with CM and/or EMD. Sixty-eight patients presenting one Miller Class I or II GR were randomly assigned to receive either CAF (n = 17), CAF + CM (n = 17), CAF + EMD (n = 17), or CAF + CM + EMD (n = 17). Recession height, probing depth, clinical attachment level, and keratinized tissue width and thickness were measured at baseline and at 90 days and 6 months after surgery. The obtained root coverage after 6 months was 68.04% ± 24.11% for CAF, 87.20% ± 15.01% for CAF + CM, 88.77% ± 20.66% for CAF + EMD, and 91.59% ± 11.08% for CAF + CM + EMD. Groups that received biomaterials showed greater values (P <0.05). Complete root coverage (CRC) for CAF + EMD was 70.59%, significantly superior to CAF alone (23.53%), CAF + CM (52.94%), and CAF + CM + EMD (51.47%) (P <0.05). Keratinized tissue thickness gain was significant only in CM-treated groups (P <0.05). The three combination approaches are superior to CAF alone for root coverage. EMD provides the highest levels of CRC; however, the addition of CM increases gingival thickness. The combination approach does not seem justified.
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
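The downside/upside deviation measures can be sketched as semideviations around the mean; this is a plausible reading of the asymmetric variability measures, not necessarily the authors' exact definitions.

```python
import numpy as np

def semideviations(returns):
    """Downside and upside semideviations around the mean, capturing
    the asymmetry of the return distribution."""
    r = np.asarray(returns, dtype=float)
    d = r - r.mean()
    downside = np.sqrt(np.mean(np.minimum(d, 0.0) ** 2))
    upside = np.sqrt(np.mean(np.maximum(d, 0.0) ** 2))
    return downside, upside

# Skewed toy returns: frequent small gains, one large loss
down, up = semideviations([0.01, 0.02, 0.01, 0.02, -0.10])
```

For such left-skewed data the downside semideviation exceeds the upside one, which is exactly the asymmetry a symmetric variance would hide.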
Geometric and integrable aspects of random matrix models (Aspects géométriques et intégrables des modèles de matrices aléatoires)
NASA Astrophysics Data System (ADS)
Marchal, Olivier
2010-12-01
This thesis deals with the geometric and integrable aspects associated with random matrix models. Its purpose is to provide various applications of random matrix theory, from algebraic geometry to partial differential equations of integrable systems. The variety of these applications shows why matrix models are important from a mathematical point of view. First, the thesis will focus on the study of the merging of two intervals of the eigenvalue density near a singular point. Specifically, we will show why this special limit gives universal equations from the Painlevé II hierarchy of integrable systems theory. Then, following the approach of (bi-)orthogonal polynomials introduced by Mehta to compute partition functions, we will find Riemann-Hilbert and isomonodromic problems connected to matrix models, making the link with the theory of Jimbo, Miwa and Ueno. In particular, we will describe how Hermitian two-matrix models provide a degenerate case of Jimbo-Miwa-Ueno's theory that we will generalize in this context. Furthermore, the loop equations method, with its central notions of spectral curve and topological expansion, will lead to the symplectic invariants of algebraic geometry recently proposed by Eynard and Orantin. This last point will be generalized to the case of non-Hermitian matrix models (arbitrary beta), paving the way to "quantum algebraic geometry" and to the generalization of symplectic invariants to "quantum curves". Finally, this setup will be applied to combinatorics in the context of topological string theory, with the explicit computation of a Hermitian random matrix model enumerating the Gromov-Witten invariants of a toric Calabi-Yau threefold.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter
2016-06-30
In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods and boundary element methods. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. This work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
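The randomized-sampling compression at the heart of HSS construction can be sketched with a basic randomized range finder applied to a low-rank block; this is the generic Halko-type sketch, not STRUMPACK's adaptive variant, and all sizes are invented.

```python
import numpy as np

def randomized_lowrank(A, k, oversample=5, seed=0):
    """Randomized range finder: sample the range of A with a Gaussian
    test matrix, orthonormalize, and project."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ omega)  # orthonormal basis for the sampled range
    return Q, Q.T @ A               # A is approximately Q (Q^T A)

rng = np.random.default_rng(5)
# Off-diagonal-style block with exact numerical rank 5
A = rng.standard_normal((120, 5)) @ rng.standard_normal((5, 80))
Q, B = randomized_lowrank(A, k=5)
rel_err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

Only matrix-vector products with A are needed, which is why randomized sampling combines so well with fast structured multiply routines.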
Temporal evolution of financial-market correlations.
Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S
2011-08-01
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
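A toy factor model illustrates why a small number of principal components can account for much of the markets' variability (all parameters invented): a common "market" factor concentrates variance in the top eigenvalue of the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(6)
t, n_assets = 1000, 30

# Factor model: one common market factor plus idiosyncratic noise
market = rng.standard_normal(t)
returns = np.outer(market, np.ones(n_assets)) \
    + 0.7 * rng.standard_normal((t, n_assets))

corr = np.corrcoef(returns, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals / n_assets   # each eigenvalue's share of total variance
```

Strengthening the common factor (as in a crisis, when cross-market correlations rise) pushes the top component's share even higher.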
NASA Astrophysics Data System (ADS)
Sharan, A. M.; Sankar, S.; Sankar, T. S.
1982-08-01
A new approach to the calculation of the response spectral density of a linear stationary random multi-degree-of-freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained using this new approach contains the spectral densities and the cross-spectral densities of the system's generalized displacements and velocities. The new method requires significantly less computation time than the conventional method for calculating response spectral densities. Two numerical examples are presented to compare the computation times quantitatively.
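For orientation, the single-degree-of-freedom special case of a response spectral density under white-noise forcing is simply S_xx(ω) = |H(ω)|² S_FF; the parameters below are invented, and the paper's contribution concerns the efficient multi-degree-of-freedom computation, not this elementary relation.

```python
import numpy as np

def response_psd(omega, m=1.0, c=0.5, k=100.0, s_ff=1.0):
    """Displacement response PSD of a linear oscillator under white-noise
    forcing with constant PSD s_ff: S_xx = |H|^2 * S_FF."""
    H = 1.0 / (k - m * omega ** 2 + 1j * c * omega)  # receptance FRF
    return (np.abs(H) ** 2) * s_ff

omega = np.linspace(0.01, 20.0, 4000)
S_xx = response_psd(omega)
omega_peak = omega[np.argmax(S_xx)]  # resonance near sqrt(k/m) = 10 rad/s
```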
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and highly implementable stochastic inverse method for the characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and it provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
CR-Calculus and adaptive array theory applied to MIMO random vibration control tests
NASA Astrophysics Data System (ADS)
Musella, U.; Manzato, S.; Peeters, B.; Guillaume, P.
2016-09-01
Performing Multiple-Input Multiple-Output (MIMO) tests to reproduce the vibration environment at a user-defined number of control points of a unit under test is necessary in applications where a realistic environment replication has to be achieved. MIMO tests require vibration control strategies to calculate the drive signal vector that gives an acceptable replication of the target. This target is a (complex) vector with magnitude and phase information at the control points for MIMO Sine Control tests, while in MIMO Random Control tests, in the most general case, the target is a complete spectral density matrix. The idea behind this work is to tailor a MIMO random vibration control approach that can be generalized to other MIMO tests, e.g. MIMO Sine and MIMO Time Waveform Replication. In this work the approach is to use gradient-based procedures over the complex space, applying the so-called CR-calculus and adaptive array theory. With this approach it is possible to better control the process performance, allowing step-by-step updates of the Jacobian matrix. The theoretical basis of the work is followed by an application of the developed method to a two-exciter two-axis system and by performance comparisons with standard methods.
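At a single spectral line, the open-loop relation behind MIMO random control is S_out = H S_dd H^H, so a drive spectral density matrix reproducing a target can in principle be obtained by inverting the FRF matrix; the paper's gradient-based CR-calculus update is the practical, adaptive replacement for this naive inversion. Sizes and matrices below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2   # two-exciter, two-control-point system

# Complex frequency response matrix at one spectral line (invertible a.s.)
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Target output spectral density matrix: Hermitian positive semi-definite
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S_ref = B @ B.conj().T

# Drive PSD matrix that reproduces the target: S_out = H S_dd H^H
H_inv = np.linalg.inv(H)
S_dd = H_inv @ S_ref @ H_inv.conj().T
S_out = H @ S_dd @ H.conj().T
```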
Covariance Matrix Estimation for Massive MIMO
NASA Astrophysics Data System (ADS)
Upadhya, Karthik; Vorobyov, Sergiy A.
2018-04-01
We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
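The core of the proposal, that cross-correlating two channel estimates carrying independent noise removes the noise bias of the naive estimator, can be sketched as follows; the pilot staggering and random phase-shift are omitted, and all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
m, n_samples, sigma2 = 4, 20000, 1.0   # antennas, coherence blocks, noise power

# True channel covariance (Hermitian positive definite)
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R = B @ B.conj().T / m

def cn(shape):
    """Circularly-symmetric complex Gaussian with unit variance per entry."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

L = np.linalg.cholesky(R)
h = L @ cn((m, n_samples))                       # channel realizations
y1 = h + np.sqrt(sigma2) * cn((m, n_samples))   # estimate from pilot 1
y2 = h + np.sqrt(sigma2) * cn((m, n_samples))   # estimate from pilot 2 (independent noise)

R_naive = y1 @ y1.conj().T / n_samples  # biased: adds sigma2 * I on the diagonal
R_cross = y1 @ y2.conj().T / n_samples  # cross-correlation: noise terms average out

err_naive = np.linalg.norm(R_naive - R)
err_cross = np.linalg.norm(R_cross - R)
```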
Efficient, massively parallel eigenvalue computation
NASA Technical Reports Server (NTRS)
Huo, Yan; Schreiber, Robert
1993-01-01
In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
NASA Astrophysics Data System (ADS)
Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.
2017-12-01
Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications, including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimizing numerical simulations by shared-memory massive parallelization and to providing numerical results at various scales. So far, the TDRW approach has been applied to modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single-continuum domains. We extend this approach to dual-continuum representations, considering a highly permeable fracture network embedded in a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and it is used to evaluate the impact of matrix heterogeneity on apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.
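An elementary 1D time domain random walk, where space is discretized and the randomness lives in the transit times, can be sketched as follows; the paper's version additionally handles mobile-immobile transfer, reactions, heterogeneous rates, and fracture-matrix exchange. Parameters are invented.

```python
import numpy as np

def tdrw_arrival_times(n_particles=1000, n_cells=50, dx=1.0,
                       velocity=1.0, seed=0):
    """Minimal 1D time domain random walk: particles advance exactly one
    cell per step, and the transit time of each cell is drawn at random
    (here exponential, with mean dx / velocity)."""
    rng = np.random.default_rng(seed)
    mean_t = dx / velocity
    # One independent random transit time per particle and per cell
    transit = rng.exponential(mean_t, size=(n_particles, n_cells))
    return transit.sum(axis=1)   # arrival time at distance n_cells * dx

arrivals = tdrw_arrival_times()
```

Replacing the exponential by a heavy-tailed transit-time distribution is how TDRW reproduces the anomalous (non-Fickian) breakthrough curves mentioned above.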
The G matrix under fluctuating correlational mutation and selection.
Revell, Liam J
2007-08-01
Theoretical quantitative genetics provides a framework for reconstructing past selection and predicting future patterns of phenotypic differentiation. However, the usefulness of the equations of quantitative genetics for evolutionary inference relies on the evolutionary stability of the additive genetic variance-covariance matrix (G matrix). A fruitful new approach for exploring the evolutionary dynamics of G involves the use of individual-based computer simulations. Previous studies have focused on the evolution of the eigenstructure of G. An alternative approach employed in this paper uses the multivariate response-to-selection equation to evaluate the stability of G. In this approach, I measure similarity by the correlation between response-to-selection vectors due to random selection gradients. I analyze the dynamics of G under several conditions of correlational mutation and selection. As found in a previous study, the eigenstructure of G is stabilized by correlational mutation and selection. However, over broad conditions, instability of G did not result in a decreased consistency of the response to selection. I also analyze the stability of G when the correlation coefficients of correlational mutation and selection and the effective population size change through time. To my knowledge, no prior study has used computer simulations to investigate the stability of G when correlational mutation and selection fluctuate. Under these conditions, the eigenstructure of G is unstable under some simulation conditions. Different results are obtained if G matrix stability is assessed by eigenanalysis or by the response to random selection gradients. In this case, the response to selection is most consistent when certain aspects of the eigenstructure of G are least stable and vice versa.
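The response-to-selection comparison described above can be sketched numerically. The following is a minimal illustration, not the paper's simulation code: it applies the multivariate breeder's equation (change in trait means = G @ beta) to random selection gradients beta, and measures the similarity of two hypothetical G matrices as the mean vector correlation of their responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(G, beta):
    # Multivariate breeder's equation: change in trait means = G @ beta
    return G @ beta

def response_correlation(G1, G2, n_gradients=1000):
    # Similarity of two G matrices = mean vector correlation of their
    # responses to the same random selection gradients beta.
    cors = []
    for _ in range(n_gradients):
        beta = rng.standard_normal(G1.shape[0])
        r1, r2 = response(G1, beta), response(G2, beta)
        cors.append(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
    return float(np.mean(cors))

# Hypothetical 2-trait G matrices (not estimates from the paper)
G = np.array([[1.0, 0.5],
              [0.5, 1.0]])
same = response_correlation(G, G)                      # identical G matrices
perturbed = response_correlation(G, G + 0.1 * np.eye(2))
```

With identical G matrices the correlation is exactly 1; a small perturbation of G leaves the response direction nearly unchanged, which is the stability notion used in the abstract.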
Network trending; leadership, followership and neutrality among companies: A random matrix approach
NASA Astrophysics Data System (ADS)
Mobarhan, N. S. Safavi; Saeedi, A.; Roodposhti, F. Rahnamay; Jafari, G. R.
2016-11-01
In this article, we analyze the cross-correlations between returns of different stocks to answer the following important questions. First: if collective behavior exists in a financial market, how can we detect it? Second: is there a particular company among the companies of a market that leads the collective behavior, or is there no specified leadership governing the system, as in some complex systems? We use random matrix theory to answer these questions. The cross-correlation matrices of index returns of four different markets are analyzed. The participation ratio associated with each matrix's eigenvectors and the eigenvalue spectrum are calculated. We introduce a shuffled matrix, created from the cross-correlation matrix by randomly displacing its elements. Comparing the participation ratios obtained from a market's correlation matrix and from its shuffled counterpart over the bulk region of the eigenvalue distribution, we detect a meaningful deviation between these quantities, indicating collective behavior of the companies forming the market. By calculating the relative deviation of the participation ratios, we obtain a measure for comparing markets according to their collective behavior. Answering the second question, we show that there are three groups of companies: a first group with higher impact on the market trend, called leaders; a second group of followers; and a third group of companies that play no considerable role in the trend. The results can be utilized in portfolio construction.
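The participation-ratio and shuffling procedure described above can be sketched as follows. The returns, factor structure, and matrix sizes below are synthetic stand-ins, not the markets analyzed in the article.

```python
import numpy as np

rng = np.random.default_rng(42)

def participation_ratio(v):
    # PR ~ number of components contributing significantly to eigenvector v
    return 1.0 / np.sum(v ** 4)

# Synthetic returns: 20 "stocks", 500 "days", one common market factor
market = rng.standard_normal(500)
returns = 0.5 * market[None, :] + rng.standard_normal((20, 500))
C = np.corrcoef(returns)

# Shuffled matrix: randomly displace the off-diagonal elements of C,
# destroying any genuine collective structure while keeping the same
# set of entry values.
iu = np.triu_indices_from(C, k=1)
C_shuf = np.eye(len(C))
C_shuf[iu] = rng.permutation(C[iu])
C_shuf = C_shuf + C_shuf.T - np.eye(len(C))

w, V = np.linalg.eigh(C)                 # ascending eigenvalues
pr_top = participation_ratio(V[:, -1])   # PR of the "market mode"
```

A participation ratio close to the matrix dimension (here 20) means nearly all companies contribute to the top eigenvector, the signature of a market-wide collective mode.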
2013-09-01
investigated using recent results from random matrix theory. Equivalence is established between PMR networks without direct-path signals and passive... approach ignores a potentially useful source of information about the unknown transmit signals. This is particularly true in high-DNR scenarios, in... which the direct-path signal provides a high-quality reference that can be used for (noisy) matched filtering, as in the conventional approach. Thus
NASA Astrophysics Data System (ADS)
Fang, Dong-Liang; Faessler, Amand; Šimkovic, Fedor
2018-04-01
In this paper, with restored isospin symmetry, we evaluated the neutrinoless double-β-decay nuclear matrix elements for 76Ge, 82Se, 130Te, 136Xe, and 150Nd for both the light and heavy neutrino mass mechanisms using the deformed quasiparticle random-phase approximation approach with realistic forces. We give detailed decompositions of the nuclear matrix elements over different intermediate states and nucleon pairs, and discuss how these decompositions are affected by the model space truncations. Compared to the spherical calculations, our results show reductions from 30% to about 60% of the nuclear matrix elements for the calculated isotopes, mainly due to the presence of the BCS overlap factor between the initial and final ground states. The comparison between different nucleon-nucleon (NN) forces with corresponding short-range correlations shows that the choice of the NN force gives roughly 20% deviations for the light neutrino exchange mechanism and much larger deviations for the heavy neutrino exchange mechanism.
Compressive sensing based wireless sensor for structural health monitoring
NASA Astrophysics Data System (ADS)
Bao, Yuequan; Zou, Zilong; Li, Hui
2014-03-01
Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackling the problem of data loss. An alternative approach tolerates some data loss and seeks to recover the lost data algorithmically. Compressive sensing (CS) provides such a data-loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal, generated by projecting the raw signal onto a random matrix, is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to CS theory, the raw signal can be effectively reconstructed from the received incomplete transformed signal, provided that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the data recovery objectives. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as to show the efficacy of CS-based data loss recovery for real wireless SHM systems.
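As a minimal illustration of the CS idea above (random projection at the sensor, sparse recovery after packet loss), the sketch below uses a Gaussian sensing matrix and Orthogonal Matching Pursuit for reconstruction. This is a toy stand-in: the actual embedded implementation uses the random demodulator and is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(Phi, y, k):
    # Orthogonal Matching Pursuit: greedy recovery of a k-sparse signal.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

n, m, k = 128, 64, 4
x_true = np.zeros(n)                                 # k-sparse "raw" signal
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)       # random sensing matrix
y = Phi @ x_true                                     # transmitted measurements

keep = rng.random(m) > 0.2                           # ~20% of packets lost
x_rec = omp(Phi[keep], y[keep], k)                   # recover from survivors
```

Because the surviving rows of a Gaussian matrix are themselves a valid random sensing matrix, dropped measurements simply shrink the measurement count rather than corrupting the reconstruction, which is exactly the robustness the abstract exploits.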
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying it. However, computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example the Gaussian random matrix and the 0-1 random matrix, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: direct searching and dual-tree searching. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree searching method had higher efficiency than direct searching, especially for matrices with comparable numbers of rows and columns.
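A direct-search computation of the spark can be sketched as follows. It enumerates column subsets in increasing size, so its cost is exponential; the dual-tree method in the paper prunes this search, but the brute-force version below suffices to illustrate the definition and the Gaussian result quoted above.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    # Direct search: the spark is the size of the smallest linearly
    # dependent set of columns (exponential cost, fine for tiny matrices).
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1  # no dependent subset: spark is conventionally infinite

# Gaussian random matrix with fewer rows than columns:
# spark = (number of rows) + 1 with probability 1, as stated above.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))
```

For the 3 x 6 Gaussian example, any 3 columns are independent almost surely while any 4 columns in R^3 must be dependent, so `spark(A)` returns 4.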
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε, where ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is a unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
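The two ingredients of the first method, the iterative process p_{t+1}^T = p_t^T P and its Markov chain Monte Carlo counterpart, can be sketched for a small dense P. The matrix below is an arbitrary illustrative example, not the web-scale setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pagerank_power(P, eps=1e-10):
    # Deterministic baseline: iterate p^T <- p^T P to the stationary vector.
    n = P.shape[0]
    p = np.full(n, 1.0 / n)
    while True:
        p_next = p @ P
        if np.abs(p_next - p).sum() < eps:
            return p_next
        p = p_next

def pagerank_mc(P, steps=100_000):
    # Markov chain Monte Carlo: run one long random walk over the graph of P
    # and use visit frequencies as the estimate of the stationary distribution.
    n = P.shape[0]
    counts = np.zeros(n)
    state = 0
    for _ in range(steps):
        state = rng.choice(n, p=P[state])
        counts[state] += 1
    return counts / steps

# Small row-stochastic example matrix (illustrative only)
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
p_exact = pagerank_power(P)
p_mc = pagerank_mc(P)
```

The random-walk estimate avoids any matrix-vector product, which is the point of the MCMC approach when P is too large for brute-force multiplication.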
Comprehensive T-Matrix Reference Database: A 2012 - 2013 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2013-01-01
The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles imbedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz-Mie theory.
Chandrasekar, A; Rakkiyappan, R; Cao, Jinde
2015-10-01
This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria that are suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network with mixed time-delay is synchronized. The considered impulsive effects can achieve synchronization even under partly unknown transition probabilities. Besides, a multiple integral approach is also proposed to strengthen the analysis of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then expressed as a solvable set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results.
Random matrix approach to group correlations in development country financial market
NASA Astrophysics Data System (ADS)
Qohar, Ulin Nuha Abdul; Lim, Kyuseong; Kim, Soo Yong; Liong, The Houw; Purqon, Acep
2015-12-01
Financial markets are a borderless economic activity: everyone in the world has the right to participate in stock transactions. The movement of stocks is of interest to various sciences, with researchers ranging from economists to mathematicians trying to explain and predict it. Econophysics is a discipline that studies economic behavior using methods from physics; because stocks tend to be unpredictable, they can be regarded as probabilistic particles. Random Matrix Theory, one method used to analyze such probabilistic systems, is applied here to the correlation matrix of stock returns from developing-country markets to obtain the characteristics of those markets, using the characteristics of developed-country stock markets as a benchmark for comparison. The results show that the market-wide effect is absent in the Philippine market and weak in the Indonesian market. In contrast, a developed country (the US) shows a strong market-wide effect.
Global financial indices and twitter sentiment: A random matrix theory approach
NASA Astrophysics Data System (ADS)
García, A.
2016-11-01
We use a Random Matrix Theory (RMT) approach to analyze the correlation matrix structure of a collection of public tweets and the corresponding return time series associated with 20 global financial indices over 7 trading months of 2014. In order to quantify the collection of tweets, we constructed daily polarity time series from public tweets via sentiment analysis. The results from the RMT analysis support the existence of genuine correlations between financial indices, polarities, and the mixture of the two. Moreover, we found good agreement between the temporal behavior of the extreme eigenvalues of both empirical data sets, and similar results were found when computing the inverse participation ratio, which provides evidence of the emergence of common factors in global financial information whether the return or the polarity data are used as a source. In addition, we found a very strong presumption that polarity Granger-causes returns of an Indonesian index over a long range of lag trading days, whereas for Israel, South Korea, Australia, and Japan, the predictive information of returns is also present but with weaker presumption. Our results suggest that incorporating polarity as a financial indicator may open up new insights into understanding the collective and even individual behavior of global financial indices.
Random matrix approach to cross correlations in financial data
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices
NASA Technical Reports Server (NTRS)
Whitacre, J.; West, W. C.; Mojarradi, M.; Sukumar, V.; Hess, H.; Li, H.; Buck, K.; Cox, D.; Alahmad, M.; Zghoul, F. N.;
2003-01-01
This paper presents a design approach that supports arbitrary grouping patterns among the microbatteries. In this case, the result is the ability to charge microbatteries in parallel and to discharge microbatteries in parallel, or pairs of microbatteries in series.
Protein structure estimation from NMR data by matrix completion.
Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing
2017-09-01
Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
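The second stage described above, completing a low-rank matrix from partial entries, can be sketched with a simple alternating-projection ("hard impute") scheme. The paper uses an accelerated proximal gradient algorithm instead, and the rank-2 data below are synthetic, not NMR measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

def complete_matrix(M_obs, mask, rank, n_iter=500):
    # Alternating projection: project onto the set of rank-r matrices via
    # truncated SVD, then re-impose the observed entries, and repeat.
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = M_obs[mask]
    return X

# Synthetic low-rank matrix with ~40% of its entries unobserved
A = rng.standard_normal((20, 2))
M = A @ A.T                          # exactly rank 2
mask = rng.random(M.shape) < 0.6     # True = observed entry
X = complete_matrix(M, mask, rank=2)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Because a rank-2 20 x 20 matrix has far fewer degrees of freedom than the ~240 observed entries, the missing entries are recovered essentially exactly, mirroring why a highly incomplete distance matrix can still determine a structure.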
Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model
NASA Astrophysics Data System (ADS)
Margarint, Vlad
2018-06-01
We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We update the previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
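The ensemble described above can be sketched as follows, here as a real-symmetric analogue in d = 1 with illustrative band width and size (the statements concern the band structure, which is the same in both cases).

```python
import numpy as np

rng = np.random.default_rng(7)

def random_band_matrix(n, W):
    # Symmetric band matrix: H[x, y] is nonzero iff |x - y| < W, with
    # entries uniform on [-1, 1] (a real-symmetric stand-in for the
    # Hermitian ensemble; d = 1, Lambda = {0, ..., n-1}).
    H = rng.uniform(-1.0, 1.0, size=(n, n))
    H = (H + H.T) / 2.0
    x = np.arange(n)
    band = np.abs(x[:, None] - x[None, :]) < W
    return np.where(band, H, 0.0)

H = random_band_matrix(50, 5)
eigvals = np.linalg.eigvalsh(H)   # real spectrum of the symmetric matrix
```

Varying W interpolates between a diagonal (localized) matrix and a full Wigner matrix, which is the regime the quantum diffusion results address.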
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
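Horn's parallel analysis itself is straightforward to sketch: compare the sample correlation eigenvalues with a chosen quantile of eigenvalues obtained from random data of the same size. The following is a minimal illustration, not the authors' simulation design; the one-factor data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(X, n_sim=200, quantile=95):
    # Retain components whose sample correlation eigenvalues exceed the
    # chosen quantile of eigenvalues from random data of the same size.
    n, p = X.shape
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    null_eigs = np.empty((n_sim, p))
    for i in range(n_sim):
        Z = rng.standard_normal((n, p))
        null_eigs[i] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    thresholds = np.percentile(null_eigs, quantile, axis=0)
    return int(np.sum(sample_eigs > thresholds))

# Synthetic data: one strong common component among 10 variables
n, p = 300, 10
factor = rng.standard_normal((n, 1))
X = factor @ np.full((1, p), 0.8) + rng.standard_normal((n, p))
k = parallel_analysis(X)
```

For the first eigenvalue this threshold mimics the null distribution of the largest eigenvalue of a random correlation matrix, which is why the abstract can relate it to the Tracy-Widom test; for higher-order eigenvalues the comparison is not marginal-by-marginal valid, which is the behavior the abstract flags.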
How Fast Can Networks Synchronize? A Random Matrix Theory Approach
NASA Astrophysics Data System (ADS)
Timme, Marc; Wolf, Fred; Geisel, Theo
2004-03-01
Pulse-coupled oscillators constitute a paradigmatic class of dynamical systems interacting on networks because they model a variety of biological systems, including flashing fireflies and chirping crickets as well as pacemaker cells of the heart and neural networks. Synchronization is one of the simplest and most prevalent kinds of collective dynamics on such networks. Here we study collective synchronization [1] of pulse-coupled oscillators interacting on asymmetric random networks. Using random matrix theory, we analytically determine the speed of synchronization in such networks as a function of the dynamical and network parameters [2]. The speed of synchronization increases with increasing coupling strengths. Surprisingly, however, it stays finite even for infinitely strong interactions. The results indicate that the speed of synchronization is limited by the connectivity of the network. We discuss the relevance of our findings to general equilibration processes on complex networks. [1] M. Timme, F. Wolf, T. Geisel, Phys. Rev. Lett. 89:258701 (2002). [2] M. Timme, F. Wolf, T. Geisel, cond-mat/0306512 (2003).
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular the multivariate normal distribution and the Wishart distribution, is considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.
Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José
2016-08-01
Conditionally specified Gaussian Markov random field (GMRF) models with adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named similarity-based GRF, is motivated by the need to model disease mapping data in situations where the underlying small-area relative risks and the associated determinant factors do not vary systematically in space, and where similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF, compared with the adjacency-based GMRF with the determinant risk factors as covariate. This new approach broadens the scope of the existing conditional autocorrelation models.
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Regarding the adjacency matrices of n-vertex graphs and the related graph Laplacian, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
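The basic objects above, the adjacency matrix of an Erdős-Rényi graph and closed-walk counts via traces of matrix powers, can be illustrated directly; the parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_adjacency(n, p_edge):
    # Symmetric 0/1 adjacency matrix of a G(n, p) random graph, no loops.
    upper = np.triu(rng.random((n, n)) < p_edge, k=1)
    return (upper + upper.T).astype(float)

A = erdos_renyi_adjacency(100, 0.1)

# Tr(A^k) counts closed walks of length k; these traces are the moments
# that appear in the cumulant expansions discussed above.
closed_3 = np.trace(np.linalg.matrix_power(A, 3))
triangles = closed_3 / 6.0   # each triangle contributes 6 closed 3-walks
```

The walk counts grow with the edge probability, which is why the constant and vanishing edge-probability regimes produce different limiting expressions.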
NASA Astrophysics Data System (ADS)
Takano, Y.; Liou, K. N.; Kahnert, M.; Yang, P.
2013-08-01
The single-scattering properties of eight black carbon (BC, soot) fractal aggregates, composed of 7 to 600 primary spheres, computed by the geometric-optics surface-wave (GOS) approach coupled with the Rayleigh-Gans-Debye (RGD) adjustment for size parameters smaller than approximately 2, are compared with those determined from the superposition T-matrix method. We show that under the condition of random orientation, the results from GOS/RGD are in general agreement with those from the T-matrix method in terms of the extinction and absorption cross-sections, the single-scattering co-albedo, and the asymmetry factor. When compared with the specific absorption (m2/g) measured in the laboratory, we illustrate that using the observed radii of primary spheres ranging from 3.3 to 25 nm, the theoretical values determined from GOS/RGD for primary sphere numbers of 100-600 are within the range of measured values. The GOS approach can be effectively applied to aggregates composed of a large number of primary spheres (e.g., >6000) and large size parameters (≫2) in terms of computational effort.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
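The Monte Carlo error propagation described above can be sketched for a hypothetical nonlinear map of a bivariate normal vector; the mean, covariance, and map below are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo error propagation: push samples of a random vector through a
# nonlinear map and estimate the mean and covariance of the result directly,
# with no derivatives and no linearization error.
mean = np.array([1.0, 2.0])
cov = np.array([[0.01, 0.002],
                [0.002, 0.04]])

samples = rng.multivariate_normal(mean, cov, size=200_000)
# nonlinear transformation y = (x0 * x1, x0^2), applied to every sample
fx = np.column_stack([samples[:, 0] * samples[:, 1], samples[:, 0] ** 2])

mc_mean = fx.mean(axis=0)           # estimate of E[y]
mc_cov = np.cov(fx, rowvar=False)   # estimate of the covariance of y
```

For this map the exact means are known (E[x0*x1] = mu0*mu1 + cov01 = 2.002 and E[x0^2] = mu0^2 + var0 = 1.01), so the Monte Carlo estimate can be checked against them; a first-order linearization would miss the cov01 and var0 contributions entirely.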
Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.
Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen
In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria such that linear multi-agent systems with sampled data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
NASA Astrophysics Data System (ADS)
Antoniuk, Oleg; Sprik, Rudolf
2010-03-01
We developed a random matrix model to describe the statistics of resonances in an acoustic cavity with broken time-reversal invariance. Time-reversal invariance breaking is achieved by connecting an amplified feedback loop between two transducers on the surface of the cavity. The model is based on the approach of [1], which describes the time-reversal properties of the cavity without a feedback loop. Statistics of eigenvalues (nearest-neighbor resonance spacing distributions and spectral rigidity) have been calculated and compared to the statistics obtained from our experimental data. Experiments have been performed on an aluminum block of chaotic shape confining ultrasound waves. [1] Carsten Draeger and Mathias Fink, One-channel time-reversal in chaotic cavities: Theoretical limits, Journal of the Acoustical Society of America, vol. 105, no. 2, pp. 611-617 (1999).
Bayesian estimation of Karhunen–Loève expansions; A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Lo eve expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L 2 sense, i.e, which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition)more » on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build a probabilistic Karhunen-Lo eve expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.« less
Bayesian estimation of Karhunen–Loève expansions; A random subspace approach
Chowdhary, Kenny; Najm, Habib N.
2016-04-13
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Zohren, S.
2018-03-01
In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections of the above distributions, which is important for practical applications, given the slow convergence. Another interesting aspect of this work is that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
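A quick Monte Carlo check of the largest-modulus statistics discussed above, using the complex Ginibre ensemble as a stand-in for the normal Gaussian ensemble (the matrix size and sample count are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

def largest_modulus(rng, n):
    """Eigenvalue of largest modulus of an n x n complex Ginibre matrix,
    normalized so the spectrum fills the unit disk as n grows."""
    g = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)
    return np.abs(np.linalg.eigvals(g)).max()

samples = np.array([largest_modulus(rng, N) for _ in range(100)])
# The circular law places the bulk edge at radius 1; the largest modulus
# sits slightly outside it, with Gumbel-type fluctuations after rescaling,
# and the convergence in N is notoriously slow.
print(f"mean largest modulus: {samples.mean():.3f}")
```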
Random matrix ensembles for many-body quantum systems
NASA Astrophysics Data System (ADS)
Vyas, Manan; Seligman, Thomas H.
2018-04-01
Classical random matrix ensembles were originally introduced in physics to approximate quantum many-particle nuclear interactions. However, there exists a plethora of quantum systems whose dynamics is explained in terms of few-particle (predominantly two-particle) interactions. The random matrix models incorporating the few-particle nature of interactions are known as embedded random matrix ensembles. In the present paper, we provide a brief overview of these two ensembles and illustrate how the embedded ensembles can be successfully used to study decoherence of a qubit interacting with an environment, both for fermionic and bosonic embedded ensembles. Numerical calculations show the dependence of decoherence on the nature of the environment.
Raney Distributions and Random Matrix Theory
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Liu, Dang-Zheng
2015-03-01
Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
NASA Astrophysics Data System (ADS)
Ahmadi, Masoud; Ansari, Reza; Rouhi, Saeed
2017-11-01
This paper investigates the elastic modulus of a polypropylene matrix reinforced by carbon nanotubes at different temperatures. To this end, the finite element approach is employed. Nanotubes with different volume fractions and aspect ratios (the ratio of length to diameter) are embedded in the polymer matrix. Both random and regular algorithms are utilized to disperse the carbon nanotubes in the matrix. It is seen that, as with pure polypropylene, the elastic modulus of carbon nanotube reinforced polypropylene decreases with increasing temperature. It is also observed that the largest improvement in the elastic modulus of the nanotube/polypropylene nanocomposites is obtained when the carbon nanotubes are aligned in parallel and the load is applied along the nanotube direction.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Stratt, Richard M.
2018-05-01
Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.
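A minimal version of the order-parameter tensor computation underlying this analysis, for an isotropic system of random orientations (the tensor convention Q = <(3uu^T - I)/2> is the standard nematic definition; the system sizes are illustrative, and the paper's finite-size random-matrix comparison is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

def order_parameter(n, rng):
    """Largest eigenvalue S of the nematic order tensor
    Q = <(3 u u^T - I)/2> for n random unit vectors u."""
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    q = 1.5 * (u.T @ u) / n - 0.5 * np.eye(3)
    return np.linalg.eigvalsh(q)[-1]

# For a truly isotropic (disordered) system, S decays toward 0 as n grows;
# finite-size fluctuations above the random-matrix baseline are the kind of
# signal the paper attributes to pseudo-nematic domains.
s_small = np.mean([order_parameter(50, rng) for _ in range(200)])
s_large = np.mean([order_parameter(5000, rng) for _ in range(200)])
print(f"S(n=50) = {s_small:.3f}, S(n=5000) = {s_large:.3f}")
```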
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
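The paper's own recovery algorithm is not reproduced here, but the matrix-completion formulation it adopts can be illustrated with a generic "hard impute" sketch (alternating truncated SVDs; the synthetic signal matrix, rank, and sampling rate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic rank-2 "signal matrix" (e.g. channels x time), 60% of entries
# observed, mimicking random under-sampling at the sensor.
m, n, r = 40, 60, 2
truth = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
mask = rng.random((m, n)) < 0.6
observed = np.where(mask, truth, 0.0)

# Generic "hard impute" matrix completion: repeatedly replace the missing
# entries with the current rank-r approximation of the matrix.
x = observed.copy()
for _ in range(300):
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    low_rank = (u[:, :r] * s[:r]) @ vt[:r]
    x = np.where(mask, observed, low_rank)

err = np.linalg.norm(x - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error: {err:.4f}")
```

Because only observed entries are ever sensed, the savings apply to sampling and processing energy as well as transmission, which is the point the abstract makes.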
Robust reliable sampled-data control for switched systems with application to flight control
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.
2016-11-01
This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim is to obtain a reliable robust sampled-data control design, involving random time delay and an appropriate gain control matrix, that achieves robust exponential stabilisation of uncertain switched systems against actuator failures. In particular, the involved delays are assumed to be randomly time-varying, obeying certain mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of reliable robust sampled-data control in terms of solutions to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.
Effective Perron-Frobenius eigenvalue for a correlated random map
NASA Astrophysics Data System (ADS)
Pool, Roman R.; Cáceres, Manuel O.
2010-09-01
We investigate the evolution of random positive linear maps with various types of disorder by analytic perturbation and direct simulation. Our theoretical result indicates that the statistics of a random linear map can be successfully described, for long times, by the mean-value vector state. The growth rate can be characterized by an effective Perron-Frobenius eigenvalue that strongly depends on the type of correlation between the elements of the projection matrix. We apply this approach to an age-structured population dynamics model. We show that the asymptotic mean-value vector state characterizes the population growth rate when the age-structured model has random vital parameters. In this case our approach reveals the nontrivial dependence of the effective growth rate on cross correlations. The problem is reduced to the calculation of the smallest positive root of a secular polynomial, which can be obtained by perturbation in terms of a Green's function diagrammatic technique built with noncommutative cumulants for arbitrary n-point correlations.
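As a toy version of the age-structured application, one can simulate a Leslie matrix with random vital rates and compare the growth rate of the random product with the Perron-Frobenius eigenvalue of the mean map (all rates below are illustrative assumptions; the paper's correlated-disorder analysis is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)

def leslie(rng):
    """3-age-class Leslie matrix with random (uncorrelated) vital rates."""
    m = np.zeros((3, 3))
    m[0] = rng.uniform(0.0, 2.0, size=3)   # random fertilities
    m[1, 0] = rng.uniform(0.4, 0.8)        # random survival, class 0 -> 1
    m[2, 1] = rng.uniform(0.4, 0.8)        # random survival, class 1 -> 2
    return m

# Growth rate of the random product, estimated as a Lyapunov exponent.
T = 20000
v = np.ones(3) / 3
log_growth = 0.0
for _ in range(T):
    v = leslie(rng) @ v
    total = v.sum()
    log_growth += np.log(total)
    v /= total
growth = np.exp(log_growth / T)

# Effective Perron-Frobenius eigenvalue of the mean-value map, for comparison.
mean_map = np.mean([leslie(rng) for _ in range(20000)], axis=0)
pf = np.abs(np.linalg.eigvals(mean_map)).max()
print(f"sample growth rate: {growth:.3f}, PF eigenvalue of mean map: {pf:.3f}")
```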
Bayes linear covariance matrix adjustment
NASA Astrophysics Data System (ADS)
Wilkinson, Darren J.
1995-12-01
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
NASA Astrophysics Data System (ADS)
Fang, Dong-Liang; Faessler, Amand; Simkovic, Fedor
2015-10-01
In this work, we calculate the matrix elements for the 0νββ decay of 150Nd using the deformed quasiparticle random-phase approximation (pn-QRPA) method. We adopted the approach introduced by Rodin and Faessler [Phys. Rev. C 84, 014322 (2011), 10.1103/PhysRevC.84.014322] and Simkovic et al. [Phys. Rev. C 87, 045501 (2013), 10.1103/PhysRevC.87.045501] to restore the isospin symmetry by enforcing M_F^{2ν} = 0. We found that with this restoration, the Fermi matrix elements are reduced in the strongly deformed 150Nd by about 15 to 20%, while the more important Gamow-Teller matrix elements remain the same. The results of an enlarged model space are also presented. This enlargement increases the total (Fermi plus Gamow-Teller) matrix elements by less than 10%.
Discrete-time systems with random switches: From systems stability to networks synchronization.
Guo, Yao; Lin, Wei; Ho, Daniel W C
2016-03-01
In this article, we develop approaches that enable us to identify, more accurately and analytically, the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow for the case in which the elements of the switching connection matrix obey unbounded, continuous-valued distributions. In addition to almost sure stability, we further investigate almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that chaotic dynamics in the synchronization manifold are preserved when statistical parameters enter the almost sure synchronization region established by the developed approach. Moreover, some delicate configurations are considered on probability space for ensuring synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented by setting only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in randomly switching complex networks even with very large population sizes, which cannot be easily realized in non-switching but deterministically connected networks.
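The stability criterion itself requires the paper's analysis, but the quantity it controls, the top Lyapunov exponent of a randomly switched product, is easy to estimate numerically (the Gaussian switching matrices and the gain sigma below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete-time system x_{t+1} = A_t x_t, with A_t redrawn at every switch
# from a continuous-valued (Gaussian, unbounded) distribution. Individual
# draws may be expanding, yet the switched product can still contract
# almost surely.
sigma = 0.55
x = np.ones(4)
log_norm = 0.0
T = 50000
for _ in range(T):
    a = sigma * rng.normal(size=(4, 4)) / np.sqrt(4)
    x = a @ x
    n = np.linalg.norm(x)
    log_norm += np.log(n)
    x /= n

lyap = log_norm / T  # top Lyapunov exponent; negative => almost sure stability
print(f"estimated top Lyapunov exponent: {lyap:.3f}")
```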
Discrete-time systems with random switches: From systems stability to networks synchronization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yao; Lin, Wei, E-mail: wlin@fudan.edu.cn; Shanghai Key Laboratory of Contemporary Applied Mathematics, LMNS, and Shanghai Center for Mathematical Sciences, Shanghai 200433
2016-03-15
NASA Astrophysics Data System (ADS)
Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.
2018-06-01
The stochastic response of periodic flat and axial-symmetric structures subjected to random, spatially correlated loads is analysed here through an approach combining a wave finite element method with a transfer matrix method. While offering a lower computational cost, the present approach retains the accuracy of classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies without increasing the computation time. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.
Random Matrix Approach to Quantum Adiabatic Evolution Algorithms
NASA Technical Reports Server (NTRS)
Boulatov, Alexei; Smelyanskiy, Vadier N.
2004-01-01
We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in matrix space. We use the Brownian motion model to obtain a description of multiple avoided-crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian Unitary Ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, this driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.
Micromechanics and effective elastoplastic behavior of two-phase metal matrix composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, J.W.; Chen, T.M.
A micromechanical framework is presented to predict the effective (overall) elasto-(visco-)plastic behavior of two-phase particle-reinforced metal matrix composites (PRMMC). In particular, the inclusion phase (particle) is assumed to be elastic and the matrix material elasto-(visco-)plastic. Emanating from Ju and Chen's (1994a,b) work on effective elastic properties of composites containing many randomly dispersed inhomogeneities, effective elastoplastic deformations and responses of PRMMC are estimated by means of the "effective yield criterion" derived micromechanically by considering effects due to elastic particles embedded in the elastoplastic matrix. The matrix material is elastic or plastic, depending on local stress and deformation, and obeys a general plastic flow rule and hardening law. Arbitrary (general) loadings and unloadings are permitted in the framework through the elastic predictor-plastic corrector two-step operator splitting methodology. The proposed combined micromechanical and computational approach allows one to estimate overall elastoplastic responses of PRMMCs by accounting for microstructural information (such as the spatial distribution and micro-geometry of particles), elastic properties of the constituent phases, and the plastic behavior of the matrix-only material.
Staggered chiral random matrix theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, James C.
2011-02-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors
NASA Astrophysics Data System (ADS)
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b^2/N = α^2/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
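The Poisson-to-RMT transition with increasing bandwidth can be observed directly with the consecutive-spacing-ratio diagnostic (this r-statistic sketch is a standard tool, not taken from the paper; matrix sizes and repetition counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_spacing_ratio(b, n=400, reps=20, rng=rng):
    """Mean consecutive-spacing ratio <r> for symmetric banded random
    matrices of bandwidth b, with r = min(s_i, s_{i+1}) / max(s_i, s_{i+1})."""
    rs = []
    for _ in range(reps):
        m = rng.normal(size=(n, n))
        m = (m + m.T) / np.sqrt(2)          # GOE-type symmetric matrix
        i, j = np.indices((n, n))
        m[np.abs(i - j) > b] = 0.0          # zero out entries beyond the band
        e = np.linalg.eigvalsh(m)
        e = e[n // 4: 3 * n // 4]           # keep the bulk of the spectrum
        s = np.diff(e)
        rs.append(np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])))
    return float(np.mean(rs))

r_narrow = mean_spacing_ratio(b=1)    # small d = b^2/N: Poisson-like, <r> ~ 0.39
r_wide = mean_spacing_ratio(b=400)    # full matrix: GOE, <r> ~ 0.53
print(f"<r> narrow band: {r_narrow:.3f}, full band: {r_wide:.3f}")
```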
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
A neighboring structure reconstructed matching algorithm based on LARK features
NASA Astrophysics Data System (ADS)
Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-11-01
To address the low contrast and high noise of infrared images, and the randomness and ambient occlusion of their objects, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of a local window are modeled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked via non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body dataset, with a lower false detection rate than conventional methods in complex natural scenes.
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applied to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of MPKCs. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner, and not for practical use in enhancing the security of a given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms to do so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Sinha, A. K.
1973-01-01
Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
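A compact modern version of the second (FFT-based) method, for the toy case of two series with flat power spectra and constant coherence (the target spectral matrix below is an assumption, standing in for the given PSD/CSD matrix):

```python
import numpy as np

rng = np.random.default_rng(7)

# Target: two stationary Gaussian series with flat power spectra and a
# frequency-independent cross-spectral coherence of 0.8.
n = 2 ** 14
coh = 0.8
spec = np.array([[1.0, coh], [coh, 1.0]])   # spectral density matrix S(f)
h = np.linalg.cholesky(spec)                # factorization S = H H^T per frequency

# FFT method: draw independent complex Gaussian amplitudes per frequency,
# color them with H, and transform back to the time domain in one pass.
xi = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
x = np.fft.ifft(h @ xi, axis=1).real * np.sqrt(n)

corr = np.corrcoef(x)[0, 1]
print(f"sample correlation: {corr:.3f} (target {coh})")
```

A frequency-dependent spectral matrix would simply use a different Cholesky factor at each frequency bin; the single inverse FFT is what makes this method roughly an order of magnitude faster than summing trigonometric series term by term.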
Unifying model for random matrix theory in arbitrary space dimensions
NASA Astrophysics Data System (ADS)
Cicuta, Giovanni M.; Krausser, Johannes; Milkus, Rico; Zaccone, Alessio
2018-03-01
A sparse random block matrix model, suggested by the Hessian matrix used in the study of elastic vibrational modes of amorphous solids, is presented and analyzed. By evaluating some moments, benchmarked against numerics, differences in the eigenvalue spectrum of this model in different limits of space dimension d, and for arbitrary values of the lattice coordination number Z, are shown and discussed. As a function of these two parameters (and their ratio Z/d), the most studied models in random matrix theory (Erdős–Rényi graphs, effective medium, and replicas) can be reproduced in the various limits of block dimensionality d. Remarkably, the Marchenko–Pastur spectral density (which is recovered by replica calculations for the Laplacian matrix) is reproduced exactly in the limit of infinite size of the blocks, or d → ∞, which clarifies the physical meaning of space dimension in these models. We feel that the approximate results for d = 3 provided by our method may have many potential applications in the future, from the vibrational spectrum of glasses and elastic networks to wave localization, disordered conductors, random resistor networks, and random walks.
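The Marchenko–Pastur density mentioned above is easy to check numerically for the classic Wishart baseline (sizes are arbitrary; this is the limit the block-matrix model reduces to, not the model itself):

```python
import numpy as np

rng = np.random.default_rng(8)

# Eigenvalues of a Wishart matrix W = X X^T / n versus the Marchenko-Pastur
# support edges (1 +/- sqrt(q))^2 with aspect ratio q = p/n.
p, n = 400, 1600
q = p / n
x = rng.normal(size=(p, n))
eigs = np.linalg.eigvalsh(x @ x.T / n)

lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
inside = np.mean((eigs > lo - 0.05) & (eigs < hi + 0.05))
print(f"MP support [{lo:.2f}, {hi:.2f}]; fraction of eigenvalues inside: {inside:.3f}")
```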
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first determine, analytically and numerically, how auto-correlations affect the eigenvalue distribution of the correlation matrix. We then introduce ARRMT with a detailed procedure for how to implement the method. Finally, we illustrate the method using two examples: inflation rates and air pressure data for 95 US cities.
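The first step of the paper, how auto-correlations distort the eigenvalue distribution of a correlation matrix even without any true cross-correlation, can be reproduced in miniature (the AR(1) coefficient and matrix sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)

def top_eigenvalue(phi, p=100, n=400, rng=rng):
    """Largest eigenvalue of the sample correlation matrix of p mutually
    independent AR(1) series (coefficient phi) of length n."""
    x = np.zeros((p, n))
    eps = rng.normal(size=(p, n))
    for t in range(1, n):
        x[:, t] = phi * x[:, t - 1] + eps[:, t]
    c = np.corrcoef(x)
    return np.linalg.eigvalsh(c)[-1]

# With no true cross-correlation, auto-correlation alone pushes eigenvalues
# well beyond the i.i.d. Marchenko-Pastur edge (1 + sqrt(p/n))^2 = 2.25,
# which is the effect ARRMT is designed to account for.
lam_iid = top_eigenvalue(phi=0.0)
lam_ar = top_eigenvalue(phi=0.9)
print(f"top eigenvalue: iid {lam_iid:.2f}, AR(1) phi=0.9 {lam_ar:.2f}")
```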
Gong, Feng; Duong, Hai M.; Papavassiliou, Dimitrios V.
2016-01-01
Here, we present a review of recent developments of an off-lattice Monte Carlo approach used to investigate the thermal transport properties of multiphase composites with complex structure. The thermal energy is quantified by a large number of randomly moving thermal walkers, and the different modes of heat conduction are modeled in appropriate ways: diffusive heat conduction in the polymer matrix is modeled by random Brownian motion of thermal walkers within the polymer, while ballistic heat transfer within the carbon nanotubes (CNTs) is modeled by assigning infinite speed to thermal walkers inside the CNTs. Three case studies were conducted to validate the developed approach: three-phase single-walled CNT/tungsten disulfide (WS2)/poly(ether ether ketone) (PEEK) composites, single-walled CNT/WS2/PEEK composites with the CNTs clustered in bundles, and complex graphene/poly(methyl methacrylate) (PMMA) composites. In all cases, resistance to heat transfer due to nanoscale phenomena was also modeled. By quantitatively studying the factors influencing the thermal transport properties of the multiphase composites, it was found that the orientation, aggregation, and morphology of the fillers, as well as the interfacial thermal resistance at filler-matrix interfaces, limit the transfer of heat in the composites. These quantitative findings may be applied in the design and synthesis of multiphase composites with specific thermal transport properties. PMID:28335270
A computational proposal for designing structured RNA pools for in vitro selection of RNAs.
Kim, Namhee; Gan, Hin Hark; Schlick, Tamar
2007-04-01
Although in vitro selection technology is a versatile experimental tool for discovering novel synthetic RNA molecules, finding complex RNA molecules is difficult because most RNAs identified from random sequence pools are simple motifs, consistent with recent computational analysis of such sequence pools. Thus, enriching in vitro selection pools with complex structures could increase the probability of discovering novel RNAs. Here we develop an approach for engineering sequence pools that links RNA sequence space regions with corresponding structural distributions via a "mixing matrix" approach combined with a graph theory analysis. We define five classes of mixing matrices motivated by covariance mutations in RNA; these constructs define nucleotide transition rates and are applied to chosen starting sequences to yield specific nonrandom pools. We examine the coverage of sequence space as a function of the mixing matrix and starting sequence via clustering analysis. We show that, in contrast to random sequences, which are associated only with a local region of sequence space, our designed pools, including a structured pool for GTP aptamers, can target specific motifs. It follows that experimental synthesis of designed pools can benefit from using optimized starting sequences, mixing matrices, and pool fractions associated with each of our constructed pools as a guide. Automation of our approach could provide practical tools for pool design applications for in vitro selection of RNAs and related problems.
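A minimal sketch of the mixing-matrix idea: a 4x4 nucleotide transition matrix applied position by position to a chosen starting sequence to generate a structured, non-random pool (the single-parameter matrix below is a hypothetical example for illustration, not one of the paper's five classes):

```python
import numpy as np

rng = np.random.default_rng(10)

BASES = np.array(list("ACGU"))

# Hypothetical mixing matrix, rows/columns ordered A, C, G, U: entry [i, j]
# is the probability that base i of the starting sequence becomes base j in
# a pool sequence (keep with probability 0.7, mutate uniformly otherwise).
keep = 0.7
mix = np.full((4, 4), (1 - keep) / 3)
np.fill_diagonal(mix, keep)

def sample_pool(start, size, rng):
    """Generate a non-random pool by applying the mixing matrix to the
    starting sequence, independently at each position."""
    idx = np.array(["ACGU".index(b) for b in start])
    pool = []
    for _ in range(size):
        new = [rng.choice(4, p=mix[i]) for i in idx]
        pool.append("".join(BASES[new]))
    return pool

pool = sample_pool("GGGAUACCC", size=5, rng=rng)
print(pool)
```

Biasing the transition rates (and mixing several matrices and starting sequences) is what lets the designed pool concentrate on target structural motifs rather than drifting over a purely random region of sequence space.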
The open quantum Brownian motions
NASA Astrophysics Data System (ADS)
Bauer, Michel; Bernard, Denis; Tilloy, Antoine
2014-09-01
Using quantum parallelism on random walks as the original seed, we introduce new quantum stochastic processes, the open quantum Brownian motions. They describe the behaviors of quantum walkers, with internal degrees of freedom which serve as random gyroscopes, interacting with a series of probes which serve as quantum coins. These processes may also be viewed as the scaling limit of open quantum random walks, and we develop this approach along three different lines: the quantum trajectory, the quantum dynamical map and the quantum stochastic differential equation. We also present a study of the simplest case, with a two-level system as an internal gyroscope, illustrating the interplay between the ballistic and diffusive behaviors at work in these processes. Notation:
- $\mathcal{H}_z$: orbital (walker) Hilbert space; $\mathbb{C}^{\mathbb{Z}}$ in the discrete case, $L^2(\mathbb{R})$ in the continuum
- $\mathcal{H}_c$: internal spin (or gyroscope) Hilbert space
- $\mathcal{H}_{\rm sys} = \mathcal{H}_z \otimes \mathcal{H}_c$: system Hilbert space
- $\mathcal{H}_p$: probe (or quantum coin) Hilbert space, $\mathcal{H}_p = \mathbb{C}^2$
- $\rho^{\rm tot}_t$: density matrix for the total system (walker + internal spin + quantum coins)
- $\bar\rho_t$: reduced density matrix on $\mathcal{H}_{\rm sys}$: $\bar\rho_t = \int dx\,dy\; \bar\rho_t(x,y) \otimes |x\rangle_z\langle y|$
- $\hat\rho_t$: system density matrix in a quantum trajectory: $\hat\rho_t = \int dx\,dy\; \hat\rho_t(x,y) \otimes |x\rangle_z\langle y|$; if diagonal and localized in position, $\hat\rho_t = \rho_t \otimes |X_t\rangle_z\langle X_t|$
- $\rho_t$: internal density matrix in a simple quantum trajectory
- $X_t$: walker position in a simple quantum trajectory
- $B_t$: normalized Brownian motion
- $\xi_t$, $\xi_t^\dagger$: quantum noises
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels, and as such it models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study, to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, within one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
NASA Astrophysics Data System (ADS)
Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie
2016-05-01
This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space, for each temporal frame. In reconstruction, the navigator data are first recovered from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partially separable model is used to obtain the partial k-t data. The parallel imaging method is then used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
Multilevel covariance regression with correlated random effects in the mean and variance structure.
Quintero, Adrian; Lesaffre, Emmanuel
2017-09-01
Multivariate regression methods generally assume a constant covariance matrix for the observations. In case a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches in the literature can be restrictive. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can differ across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewed responses. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Many-body-localization: strong disorder perturbative approach for the local integrals of motion
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2018-05-01
For random quantum spin models, the strong disorder perturbative expansion of the local integrals of motion around the real-spin operators is revisited. The emphasis is on the links with other properties of the many-body-localized phase, in particular the memory in the dynamics of the local magnetizations and the statistics of matrix elements of local operators in the eigenstate basis. Finally, this approach is applied to analyze the many-body-localization transition in a toy model studied previously from the point of view of the entanglement entropy.
Implementation of a finite-amplitude method in a relativistic meson-exchange model
NASA Astrophysics Data System (ADS)
Sun, Xuwei; Lu, Dinghui
2017-08-01
The finite-amplitude method is a feasible numerical approach to large-scale random phase approximation (RPA) calculations. It avoids the storage and calculation of residual interaction elements as well as the diagonalization of the RPA matrix, which become prohibitive when the configuration space is huge. In this work we complete the implementation of a finite-amplitude method in a relativistic meson-exchange mean field model with axial symmetry. The direct variation approach makes our FAM scheme capable of being extended to the multipole excitation case.
[Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].
Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan
2016-03-01
To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement and extracellular matrix formation of tendon stem cells. A parallel collagen scaffold was fabricated by a unidirectional freezing technique, while a random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. The parallel collagen scaffold was more akin to native tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly orientated in the parallel collagen scaffold, while cells on the random collagen scaffold had a disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substance. In the parallel collagen scaffold, cells had a parallel arrangement, and more spindly cells were observed. By contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly shape and parallel arrangement, and promote parallel extracellular matrix formation, while the random collagen scaffold induces a random cell arrangement. The results indicate that the parallel collagen scaffold is an ideal structure for promoting tendon repair.
Spectrum of walk matrix for Koch network and its application
NASA Astrophysics Data System (ADS)
Xie, Pinchen; Lin, Yuan; Zhang, Zhongzhi
2015-06-01
Various structural and dynamical properties of a network are encoded in the eigenvalues of the walk matrix describing random walks on the network. In this paper, we study the spectrum of the walk matrix of the Koch network, which displays the prominent scale-free and small-world features. Utilizing the particular architecture of the network, we obtain all the eigenvalues and their corresponding multiplicities. Based on the link between the eigenvalues of the walk matrix and the random target access time, defined as the expected time for a walker to go from an arbitrary node to another node selected randomly according to the steady-state distribution, we then derive an explicit solution for the random target access time for random walks on the Koch network. Finally, we corroborate our computation of the eigenvalues by enumerating spanning trees in the Koch network, using the connection between eigenvalues and spanning trees, where a spanning tree of a network is a subgraph of the network, that is, a tree containing all the nodes.
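The link between the walk-matrix spectrum and the random target access time can be illustrated on a small graph (a 4-cycle here as an assumed toy example; the Koch network itself would be generated recursively). The random target access time is the Kemeny constant, a spectral sum over the nontrivial eigenvalues.

```python
import numpy as np

# Toy graph: a 4-cycle. For an undirected graph the walk matrix is
# P = D^{-1} A, which shares its eigenvalues with the symmetric
# matrix D^{-1/2} A D^{-1/2}.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
P = A / deg[:, None]                       # walk (transition) matrix

lam = np.sort(np.linalg.eigvalsh(A / np.sqrt(np.outer(deg, deg))))

pi = deg / deg.sum()                       # steady-state distribution
# Random target access time (Kemeny's constant) from the spectrum:
K = np.sum(1.0 / (1.0 - lam[:-1]))
```

For the 4-cycle the eigenvalues are -1, 0, 0, 1, giving K = 1/2 + 1 + 1 = 2.5.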
Risk analytics for hedge funds
NASA Astrophysics Data System (ADS)
Pareek, Ankur
2005-05-01
The rapid growth of the hedge fund industry presents a significant business opportunity for institutional investors, particularly in the form of portfolio diversification. To facilitate this, there is a need to develop a new set of risk analytics for investments consisting of hedge funds, with the ultimate aim of creating transparency in risk measurement without compromising the proprietary investment strategies of hedge funds. As well documented in the literature, the use of dynamic, options-like strategies by most hedge funds makes their returns highly non-normal, with fat tails and high kurtosis, thus rendering Value at Risk (VaR) and other mean-variance analysis methods unsuitable for hedge fund risk quantification. This paper looks at some unique concerns for hedge fund risk management and concentrates on two approaches from the physical sciences for modeling the non-linearities and dynamic correlations in hedge fund portfolio returns: Self-Organized Criticality (SOC) and Random Matrix Theory (RMT). Random Matrix Theory analyzes the correlation matrix between different hedge fund styles and filters random noise from genuine correlations arising from interactions within the system. As seen in the results of the portfolio risk analysis, it leads to better portfolio risk forecastability and thus to an optimum allocation of resources to different hedge fund styles. The results also prove the efficacy of self-organized criticality and implied portfolio correlation as tools for risk management and style selection for portfolios of hedge funds, being particularly effective during non-linear market crashes.
Analysis of world terror networks from the reduced Google matrix of Wikipedia
NASA Astrophysics Data System (ADS)
El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.
2018-01-01
We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and to determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017), composed of 5 416 537 articles, which accumulates a great part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) appearing due to all paths of a random surfer moving over the whole network. As a result we obtain the network structure of terrorist groups and their relations with selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links, we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. World maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application for more extensive and detailed database analysis.
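The reduced Google matrix itself involves a blockwise decomposition of the full matrix, but the underlying objects, the Google matrix and its PageRank vector, can be sketched on a toy network (the link structure and the damping factor α = 0.85 below are illustrative assumptions):

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Column-stochastic Google matrix G = alpha*S + (1-alpha)/n;
    columns of A with no links (dangling nodes) become uniform."""
    n = A.shape[0]
    cols = A.sum(axis=0)
    S = np.where(cols > 0, A / np.where(cols == 0, 1, cols), 1.0 / n)
    return alpha * S + (1 - alpha) / n

# Tiny illustrative link network: A[i, j] = 1 if page j links to page i.
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
G = google_matrix(A)

p = np.full(3, 1 / 3)        # PageRank by power iteration
for _ in range(200):
    p = G @ p
```

PageRank sensitivity to a specific link, used in the paper to rank influence, would be obtained by perturbing one entry of A and recomputing p.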
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
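The shrinkage component of the estimator above can be sketched in its simplest linear form (plain Ledoit-Wolf-style shrinkage toward the scaled identity with a hand-picked intensity; the paper couples this with Tyler's M-estimator and tunes the intensity online using random matrix theory, which we do not reproduce here):

```python
import numpy as np

rng = np.random.default_rng(6)

def lw_shrink(S, rho):
    """Linear shrinkage of a sample covariance toward the scaled identity.
    rho in [0, 1] is the shrinkage intensity (fixed by hand here)."""
    N = S.shape[0]
    return (1 - rho) * S + rho * (np.trace(S) / N) * np.eye(N)

# N assets, T observations of similar order: the regime where the raw
# sample covariance is a poor estimator.
N, T = 100, 120
X = rng.standard_normal((T, N))
S = X.T @ X / T
S_sh = lw_shrink(S, 0.3)
```

Shrinkage pulls the extreme sample eigenvalues toward their mean, which is what stabilizes the inverse used in minimum-variance weights.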
Random matrix theory filters in portfolio optimisation: A stability and risk assessment
NASA Astrophysics Data System (ADS)
Daly, J.; Crane, M.; Ruskin, H. J.
2008-07-01
Random matrix theory (RMT) filters, applied to covariance matrices of financial returns, have recently been shown to offer improvements to the optimisation of stock portfolios. This paper studies the effect of three RMT filters on the realised portfolio risk, and on the stability of the filtered covariance matrix, using bootstrap analysis and out-of-sample testing. We propose an extension to an existing RMT filter (based on Krzanowski stability), which is observed to reduce risk and increase stability when compared to the other RMT filters tested. We also study a scheme for filtering the covariance matrix directly, as opposed to the standard method of filtering correlation, where the latter is found to lower the realised risk, on average, by up to 6.7%. We consider both equally and exponentially weighted covariance matrices in our analysis, and observe that the overall best method out-of-sample was that of the exponentially weighted covariance with our Krzanowski stability-based filter applied to the correlation matrix. We also find that the optimal out-of-sample decay factors, for both filtered and unfiltered forecasts, were higher than those suggested by Riskmetrics [J.P. Morgan, Reuters, Riskmetrics technical document, Technical Report, 1996. http://www.riskmetrics.com/techdoc.html], with those for the latter approaching a value of α=1. In conclusion, RMT filtering reduced the realised risk, on average, and in the majority of cases when tested out-of-sample, but increased the realised risk on a marked number of individual days, in some cases more than doubling it.
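A basic RMT filter of the kind extended above can be sketched as follows (this is the standard Marchenko-Pastur eigenvalue clipping applied to a correlation matrix, not the Krzanowski-stability variant the paper proposes; the dimensions are illustrative). Eigenvalues inside the noise bulk, below λ₊ = (1 + √(N/T))², are replaced by their mean so the trace is preserved.

```python
import numpy as np

rng = np.random.default_rng(2)

def rmt_filter(C, T):
    """Clip eigenvalues of correlation matrix C that fall inside the
    Marchenko-Pastur bulk, replacing them by their mean (trace-preserving)."""
    N = C.shape[0]
    lam_plus = (1 + np.sqrt(N / T)) ** 2
    w, V = np.linalg.eigh(C)
    noise = w < lam_plus
    w_f = w.copy()
    if noise.any():
        w_f[noise] = w[noise].mean()
    return V @ np.diag(w_f) @ V.T

N, T = 50, 200
R = rng.standard_normal((T, N))          # i.i.d. returns: pure-noise case
C = np.corrcoef(R, rowvar=False)
Cf = rmt_filter(C, T)
```

On real returns a few eigenvalues (the market mode and sector modes) sit above λ₊ and survive the filter.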
A transfer matrix approach to vibration localization in mistuned blade assemblies
NASA Technical Reports Server (NTRS)
Ottarson, Gisli; Pierre, Christophe
1993-01-01
A study of mode localization in mistuned bladed disks is performed using transfer matrices. The transfer matrix approach yields the free response of a general, mono-coupled, perfectly cyclic assembly in closed form. A mistuned structure is represented by random transfer matrices, and the expansion of these matrices in terms of the small mistuning parameter leads to the definition of a measure of sensitivity to mistuning. An approximation of the localization factor, the spatially averaged rate of exponential attenuation per blade-disk sector, is obtained through perturbation techniques in the limits of high and low sensitivity. The methodology is applied to a common model of a bladed disk and the results verified by Monte Carlo simulations. The easily calculated sensitivity measure may prove to be a valuable design tool due to its system-independent quantification of mistuning effects such as mode localization.
NASA Astrophysics Data System (ADS)
Marchal, O.; Cafasso, M.
2011-04-01
In this paper, we show that the double-scaling-limit correlation functions of a random matrix model when two cuts merge with degeneracy 2m (i.e. when y ∼ x^{2m} for arbitrary values of the integer m) are the same as the determinantal formulae defined by conformal (2m, 1) models. Our approach follows the one developed by Bergère and Eynard in (2009 arXiv:0909.0854) and uses a Lax pair representation of the conformal (2m, 1) models (giving a Painlevé II integrable hierarchy) as suggested by Bleher and Eynard in (2003 J. Phys. A: Math. Gen. 36 3085). In particular we define Baker-Akhiezer functions associated with the Lax pair in order to construct a kernel which is then used to compute determinantal formulae giving the correlation functions of the double-scaling limit of a matrix model near the merging of two cuts.
Random Matrix Theory and the Anderson Model
NASA Astrophysics Data System (ADS)
Bellissard, Jean
2004-08-01
This paper is devoted to a discussion of possible strategies to prove rigorously the existence of a metal-insulator Anderson transition for the Anderson model in dimension d≥3. The possible criteria used to define such a transition are presented. It is argued that at low disorder the lowest order in perturbation theory is described by a random matrix model. Various simplified versions for which rigorous results have been obtained in the past are discussed. These include a free probability approach, the Wegner n-orbital model and a class of models proposed by Disertori, Pinson, and Spencer, Comm. Math. Phys. 232:83-124 (2002). Finally, a recent work by Magnen, Rivasseau, and the author, Markov Process and Related Fields 9:261-278 (2003), is summarized: it gives a toy model describing the lowest order approximation of the Anderson model, and it is proved that, for d=2, its density of states is given by the semicircle distribution. A short discussion of its extension to d≥3 follows.
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
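The commute time used above has a closed form via the Laplacian pseudoinverse, C(i, j) = vol(G)(L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ), which can be sketched on an arbitrary toy graph (the graph below is our own example, not from the paper). More short paths between two nodes means a smaller commute time.

```python
import numpy as np

# Toy graph: triangle 0-1-2 with a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
vol = A.sum()                           # 2 * number of edges

def commute_time(i, j):
    """Expected round-trip time of a random walk between nodes i and j."""
    return vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])
```

Nodes 0 and 1, connected by two short paths inside the triangle, have a smaller commute time than node 0 and the pendant node 3.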
Gravitational lensing by eigenvalue distributions of random matrix models
NASA Astrophysics Data System (ADS)
Martínez Alonso, Luis; Medina, Elena
2018-05-01
We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. On the basis of the local descriptor, shape contexts, the reweighted random walks algorithm is modified to possess stronger robustness and correctness in the final result. Our main idea is to use the shape context descriptors within the random walk iteration to control the random walk probability matrix. We calculate a bias matrix from the descriptors and then use it in the iteration to improve the accuracy of the random walks and random jumps; finally, we obtain the one-to-one registration result by discretization of the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Extensive experiments, based on real images and random synthetic point sets, and comparisons with other algorithms confirm that this new method produces excellent results in graphic matching.
Jia, Lin; Prabhakaran, Molamma P; Qin, Xiaohong; Ramakrishna, Seeram
2014-09-01
Fabricating scaffolds that can simulate the architecture and functionality of native extracellular matrix is a huge challenge in vascular tissue engineering. Various kinds of materials are engineered via nanotechnological approaches to meet the current challenges in vascular tissue regeneration. In this study, nanofibers of pure polyurethane and of hybrid polyurethane/collagen in two different morphologies (random and aligned) and in three different ratios of polyurethane:collagen (75:25; 50:50; 25:75) were fabricated by electrospinning. The fiber diameters of the nanofibrous scaffolds are in the range of 174-453 nm and 145-419 nm for random and aligned fibers, respectively, closely mimicking the nanoscale dimensions of native extracellular matrix. The aligned polyurethane/collagen nanofibers expressed anisotropic wettability, with mechanical properties suitable for regeneration of the artery. After 12 days of culturing human aortic smooth muscle cells on the different scaffolds, the proliferation of smooth muscle cells on hybrid polyurethane/collagen (3:1) nanofibers was 173% and 212% higher than on pure polyurethane scaffolds for random and aligned scaffolds, respectively. The results of cell morphology and protein staining showed that the aligned polyurethane/collagen (3:1) scaffold promotes smooth muscle cell alignment through contact guidance, while the random polyurethane/collagen (3:1) scaffold also guided cell orientation, most probably due to its inherent biochemical composition. Our studies demonstrate the potential of aligned and random polyurethane/collagen (3:1) scaffolds as promising substrates for vascular tissue regeneration. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Random matrices and the New York City subway system
NASA Astrophysics Data System (ADS)
Jagannath, Aukosh; Trogdon, Thomas
2017-09-01
We analyze subway arrival times in the New York City subway system. We find regimes where the gaps between trains are well modeled by (unitarily invariant) random matrix statistics and Poisson statistics. The departure from random matrix statistics is captured by the value of the Coulomb potential along the subway route. This departure becomes more pronounced as trains make more stops.
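The contrast between random-matrix and Poisson gap statistics can be sketched numerically (GUE matrices stand in for the unitarily invariant statistics; the matrix size, trial count, and crude mean-spacing unfolding are our own illustrative choices). Level repulsion makes small normalized gaps rare for random-matrix statistics, whereas Poisson gaps cluster near zero.

```python
import numpy as np

rng = np.random.default_rng(3)

def gue_spacings(n=200, trials=50):
    """Normalized nearest-neighbour spacings from the bulk of GUE spectra."""
    out = []
    for _ in range(trials):
        X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        H = (X + X.conj().T) / 2                  # GUE matrix
        w = np.sort(np.linalg.eigvalsh(H))
        bulk = w[n // 4: 3 * n // 4]              # avoid the spectral edges
        gaps = np.diff(bulk)
        out.append(gaps / gaps.mean())            # crude unfolding to mean 1
    return np.concatenate(out)

s = gue_spacings()
frac_small = np.mean(s < 0.1)   # rare for GUE, ~0.1 for Poisson
```

For Poisson statistics the same fraction would be 1 − e^(−0.1) ≈ 0.095, two orders of magnitude larger.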
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. RSRPCA combines the advantages of a randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (the background component), a noisy matrix (the noise component), and a sparse anomaly matrix (the anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complement of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
Universality for 1d Random Band Matrices: Sigma-Model Approximation
NASA Astrophysics Data System (ADS)
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W × W random Gaussian blocks (parametrized by j, k ∈ Λ = [1, n]^d ∩ Z^d) with a fixed entry variance J_{jk} = δ_{j,k} W^{-1} + β Δ_{j,k} W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function, similar to Efetov's one. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, for β ≫ n, is determined by the classical Wigner-Dyson statistics.
Disentangling giant component and finite cluster contributions in sparse random matrix spectra.
Kühn, Reimer
2016-04-01
We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.
Pulsed field gradients in simulations of one- and two-dimensional NMR spectra.
Meresi, G H; Cuperlovic, M; Palke, W E; Gerig, J T
1999-03-01
A method for the inclusion of the effects of z-axis pulsed field gradients in computer simulations of an arbitrary pulsed NMR experiment with spin-1/2 nuclei is described. Recognizing that the phase acquired by a coherence following the application of a z-axis pulsed field gradient bears a fixed relation to its order and to the spatial position of the spins in the sample tube, the sample is regarded as a collection of volume elements, each phase-encoded by a characteristic, spatially dependent precession frequency. The evolution of the sample's density matrix is thus obtained by computing the evolution of the density matrix for each volume element. Following the last gradient pulse, these density matrices are combined to form a composite density matrix, which evolves through the rest of the experiment to yield the observable signal. This approach is implemented in a program that includes capabilities for rigorous inclusion of spin relaxation by dipole-dipole, chemical shift anisotropy, and random field mechanisms, plus the effects of arbitrary RF fields. Mathematical procedures for accelerating these calculations are described. The approach is illustrated by simulations of representative one- and two-dimensional NMR experiments. Copyright 1999 Academic Press.
Agarwal, Jayant P; Mendenhall, Shaun D; Anderson, Layla A; Ying, Jian; Boucher, Kenneth M; Liu, Ting; Neumayer, Leigh A
2015-01-01
Recent literature has focused on the advantages and disadvantages of using acellular dermal matrix in breast reconstruction. Many of the reported data are from low level-of-evidence studies, leaving many questions incompletely answered. The present randomized trial provides high-level data on the incidence and severity of complications in acellular dermal matrix breast reconstruction between two commonly used types of acellular dermal matrix. A prospective randomized trial was conducted to compare outcomes of immediate staged tissue expander breast reconstruction using either AlloDerm or DermaMatrix. The impact of body mass index, smoking, diabetes, mastectomy type, radiation therapy, and chemotherapy on outcomes was analyzed. Acellular dermal matrix biointegration was analyzed clinically and histologically. Patient satisfaction was assessed by means of preoperative and postoperative surveys. Logistic regression models were used to identify predictors of complications. This article reports on the study design, surgical technique, patient characteristics, and preoperative survey results, with outcomes data in a separate report. After 2.5 years, we successfully enrolled and randomized 128 patients (199 breasts). The majority of patients were healthy nonsmokers, with 41 percent of patients receiving radiation therapy and 49 percent receiving chemotherapy. Half of the mastectomies were prophylactic, with nipple-sparing mastectomy common in both cancer and prophylactic cases. Preoperative survey results indicate that patients were satisfied with their premastectomy breast reconstruction education. Results from the Breast Reconstruction Evaluation Using Acellular Dermal Matrix as a Sling Trial will assist plastic surgeons in making evidence-based decisions regarding acellular dermal matrix-assisted tissue expander breast reconstruction. Therapeutic, II.
3D shape recovery from image focus using gray level co-occurrence matrix
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid
2018-04-01
Recovering a precise and accurate 3-D shape of the target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture, converting the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this document, we propose the gray level co-occurrence matrix, along with its statistical features, for computing the focus information of the image dataset. The gray level co-occurrence matrix quantifies the texture present in the image through statistical features computed on the joint probability distribution of the gray level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
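As a rough sketch of the idea described above (not the authors' implementation; the quantization depth and the horizontal-neighbour offset are invented for illustration), a gray level co-occurrence matrix and its contrast feature can act as a focus measure: well-focused patches show larger gray-level jumps between neighbouring pixels.

```python
import numpy as np

def glcm(img, levels=8):
    """Gray level co-occurrence matrix for horizontally adjacent pixels,
    normalized to a joint probability distribution of gray-level pairs."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def glcm_contrast(img):
    """Contrast feature of the GLCM, used here as a focus measure."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (32, 32))  # high local variation ("in focus")
blurred = np.repeat(np.repeat(sharp[::4, ::4], 4, axis=0), 4, axis=1)  # smoothed
print(glcm_contrast(sharp) > glcm_contrast(blurred))  # sharper patch scores higher
```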
Partial transpose of random quantum states: Exact formulas and meanders
NASA Astrophysics Data System (ADS)
Fukuda, Motohisa; Śniady, Piotr
2013-04-01
We investigate the asymptotic behavior of the empirical eigenvalues distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show large deviation bound for the latter.
Quantitative tissue polarimetry using polar decomposition of 3 x 3 Mueller matrix
NASA Astrophysics Data System (ADS)
Swami, M. K.; Manhas, S.; Buddhiwant, P.; Ghosh, N.; Uppal, A.; Gupta, P. K.
2007-05-01
Polarization properties of any optical system are completely described by a sixteen-element (4 x 4) matrix called the Mueller matrix, which transforms the Stokes vector describing the polarization state of the incident light into the Stokes vector of the scattered light. Measurement of all the elements of the matrix requires a minimum of sixteen measurements involving both linearly and circularly polarized light. However, for many diagnostic applications it would be useful if all the polarization parameters of the medium (depolarization (Δ), differential attenuation of two orthogonal polarizations, that is, diattenuation (d), and differential phase retardance of two orthogonal polarizations, i.e., retardance (δ)) could be quantified with linear polarization measurements alone. In this paper we show that for a turbid medium, like biological tissue, where the depolarization of linearly polarized light arises primarily from the randomization of the field vector's direction by multiple scattering, the polarization parameters of the medium can be obtained from the nine Mueller matrix elements involving linear polarization measurements only. Use of the approach for the measurement of the polarization parameters (Δ, d and δ) of normal and malignant (squamous cell carcinoma) tissues resected from the human oral cavity is presented.
NASA Astrophysics Data System (ADS)
Siegel, Z.; Siegel, Edward Carl-Ludwig
2011-03-01
RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!
QCD-inspired spectra from Blue's functions
NASA Astrophysics Data System (ADS)
Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail
1996-02-01
We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired from QCD whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.
Universality in chaos: Lyapunov spectrum and random matrix theory.
Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki
2018-02-01
We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
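The closing remark about products of random matrices can be illustrated with the standard QR method for finite-time Lyapunov exponents; this is a generic sketch and does not reproduce the paper's black-hole matrix model.

```python
import numpy as np

def lyapunov_spectrum(matrices):
    """Finite-time Lyapunov exponents of a matrix product, computed with
    repeated QR re-orthonormalization to avoid overflow in the raw product."""
    n = matrices[0].shape[0]
    q = np.eye(n)
    sums = np.zeros(n)
    for m in matrices:
        q, r = np.linalg.qr(m @ q)
        sums += np.log(np.abs(np.diag(r)))
    return np.sort(sums / len(matrices))[::-1]  # descending exponents

rng = np.random.default_rng(1)
mats = [rng.standard_normal((5, 5)) for _ in range(500)]
lam = lyapunov_spectrum(mats)
print(lam)  # the statistics of such spectra are what get compared to RMT
```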
Das, Kiranmoy; Daniels, Michael J.
2014-01-01
Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years, but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for patients from different BMI groups (high, medium and low) at baseline. PMID:24400941
NASA Astrophysics Data System (ADS)
Han, Tongcheng
2018-07-01
Understanding the electrical properties of rocks under varying pressure is important for a variety of geophysical applications. This study proposes an approach to modelling the pressure-dependent electrical properties of porous rocks based on an effective medium model. The so-named Textural model uses the aspect ratios and pressure-dependent volume fractions of the pores and the aspect ratio and electrical conductivity of the matrix grains. The pores were represented by randomly oriented stiff and compliant spheroidal shapes with constant aspect ratios, and their pressure-dependent volume fractions were inverted from the measured variation of total porosity with differential pressure using a dual porosity model. The unknown constant stiff and compliant pore aspect ratios and the aspect ratio and electrical conductivity of the matrix grains were inverted by best fitting the modelled electrical formation factor to the measured data. Application of the approach to three sandstone samples covering a broad porosity range showed that the pressure-dependent electrical properties can be satisfactorily modelled by the proposed approach. The results demonstrate that the dual porosity concept is sufficient to explain the electrical properties of porous rocks under pressure through the effective medium model scheme.
Social patterns revealed through random matrix theory
NASA Astrophysics Data System (ADS)
Sarkar, Camellia; Jalan, Sarika
2014-11-01
Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the weights of interactions, which emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of interaction weights on emerging structural properties. The analysis reveals that randomness existing in a particular time frame affects the decisions of individuals, affording them more freedom of choice in situations of financial security. While the structural organization of the networks remains the same throughout all datasets, random matrix theory provides insight into the interaction patterns of individuals in society in situations of crisis. It is also contemplated that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
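As a minimal numerical companion to this setting (a sketch, not the paper's derivation), one can sample a Haar-distributed matrix from U(N + n) by QR-factoring a complex Ginibre matrix with the phase correction needed for Haar measure, truncate to the top N x N sub-block, and confirm that its eigenvalues lie in the closed unit disk:

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix,
    with column phases fixed so the distribution is truly Haar."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(2)
N, n = 8, 8
u = haar_unitary(N + n, rng)
eigs = np.linalg.eigvals(u[:N, :N])  # top N-by-N sub-block
print(np.abs(eigs).max())            # truncation contracts: moduli at most 1
```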
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2017-09-01
A problem of the analysis of noise-induced extinction in multidimensional population systems is considered. For the investigation of the conditions of extinction caused by random disturbances, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator and top predator. This approach allows us to analyze constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions is suggested for reducing the dimension of the confidence domains. In the dispersion of random states, the principal subspace is defined by the ratio of the eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction, depending on the parameters of the considered tritrophic system, is carried out.
Zhang, Du; Su, Neil Qiang; Yang, Weitao
2017-07-20
The GW self-energy, especially G0W0 based on the particle-hole random phase approximation (phRPA), is widely used to study quasiparticle (QP) energies. Motivated by the desirable features of the particle-particle (pp) RPA compared to the conventional phRPA, we explore the pp counterpart of GW, that is, the T-matrix self-energy, formulated with the eigenvectors and eigenvalues of the ppRPA matrix. We demonstrate the accuracy of the T-matrix method for molecular QP energies, highlighting the importance of the pp channel for calculating QP spectra.
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
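A minimal sketch of the compress-then-decompose idea, assuming a single random range finder on the snapshots (the paper's single-pass and multi-pass variants differ in detail, and the synthetic dynamics below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def randomized_dmd(X, Y, rank, oversample=5):
    """Sketch of randomized DMD: given snapshot matrices with Y ≈ A X,
    compress with a random range finder, then run standard DMD steps
    on the small projected problem."""
    G = rng.standard_normal((X.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(X @ G)                 # orthonormal basis for range(X)
    Xs, Ys = Q.T @ X, Q.T @ Y                  # small sketched snapshots
    U, s, Vh = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.T @ Ys @ Vh.T / s               # low-rank evolution operator
    return np.linalg.eigvals(Atilde)

# Synthetic rank-3 dynamics with known eigenvalues 0.9, 0.5, -0.3
W = rng.standard_normal((50, 3))
mu = np.array([0.9, 0.5, -0.3])
snaps = np.stack([W @ (mu ** k) for k in range(21)], axis=1)
eigs = np.sort(randomized_dmd(snaps[:, :-1], snaps[:, 1:], rank=3).real)
print(eigs)  # ≈ [-0.3, 0.5, 0.9]
```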
0νββ-decay nuclear matrix elements with self-consistent short-range correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simkovic, Fedor; Bogoliubov Laboratory of Theoretical Physics, JINR, RU-141 980 Dubna, Moscow region; Department of Nuclear Physics, Comenius University, Mlynska dolina F1, SK-842 15 Bratislava
A self-consistent calculation of nuclear matrix elements of the neutrinoless double-beta decays (0νββ) of ⁷⁶Ge, ⁸²Se, ⁹⁶Zr, ¹⁰⁰Mo, ¹¹⁶Cd, ¹²⁸Te, ¹³⁰Te, and ¹³⁶Xe is presented in the framework of the renormalized quasiparticle random phase approximation (RQRPA) and the standard QRPA. The pairing and residual interactions as well as the two-nucleon short-range correlations are for the first time derived from the same modern realistic nucleon-nucleon potentials, namely, from the charge-dependent Bonn potential (CD-Bonn) and the Argonne V18 potential. In comparison with the traditional approach of using the Miller-Spencer Jastrow correlations, matrix elements for the 0νββ decay are obtained that are larger in magnitude. We analyze the differences among various two-nucleon correlations, including those of the unitary correlation operator method (UCOM), and quantify the uncertainties in the calculated 0νββ-decay matrix elements.
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
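The AR(1) error structure mentioned above implies a simple banded-Toeplitz covariance matrix; a small sketch (not MIXREG code) of how it could be built:

```python
import numpy as np

def ar1_cov(n, rho, sigma2=1.0):
    """Residual covariance matrix implied by AR(1) errors:
    Cov(e_s, e_t) = sigma2 * rho**|s - t|."""
    t = np.arange(n)
    return sigma2 * rho ** np.abs(t[:, None] - t[None, :])

C = ar1_cov(4, 0.5)
print(C)  # first row: 1, 0.5, 0.25, 0.125
```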
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM], are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. Efficient VLSI architecture designs are also proposed for the QCAC-CS and -SRBM encoders with reduced area and total power consumption.
Laplacian normalization and random walk on heterogeneous networks for disease-gene prioritization.
Zhao, Zhi-Qin; Han, Guo-Sheng; Yu, Zu-Guo; Li, Jinyan
2015-08-01
Random walk on heterogeneous networks is a recently emerging approach to effective disease gene prioritization. Laplacian normalization is a technique capable of normalizing the weights of the edges in a network. We use this technique to normalize the gene matrix and the phenotype matrix before the construction of the heterogeneous network, and also use this idea to define the transition matrices of the heterogeneous network. Our method has remarkably better performance than the existing methods for recovering known gene-phenotype relationships. The Shannon information entropy of the distribution of the transition probabilities in our networks is found to be smaller than in the networks constructed by the existing methods, implying that a higher number of top-ranked genes can be verified as disease genes. In fact, the most probable gene-phenotype relationships ranked within the top 3 or top 5 in our gene lists can be confirmed by the OMIM database in many cases. Our algorithms have shown remarkably superior performance over the state-of-the-art algorithms for recovering gene-phenotype relationships. All Matlab codes are available upon email request. Copyright © 2015 Elsevier Ltd. All rights reserved.
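A toy sketch of the two ingredients named above, Laplacian normalization and random walk with restart; the network, seed node, and restart probability are invented, and this is not the authors' gene-phenotype pipeline.

```python
import numpy as np

def laplacian_normalize(W):
    """Symmetric Laplacian normalization D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    s = 1.0 / np.sqrt(d)
    return W * s[:, None] * s[None, :]

def random_walk_with_restart(W, p0, restart=0.3, tol=1e-10):
    """Iterate p <- (1 - r) * Wn.T @ p + r * p0 to a stationary vector that
    ranks nodes (e.g. candidate genes) by proximity to the seed nodes."""
    Wn = laplacian_normalize(W)
    p = p0.copy()
    while True:
        p_new = (1 - restart) * Wn.T @ p + restart * p0
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new

# Toy network: node 0 is the seed; node 3 is two hops away from it
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p0 = np.array([1.0, 0, 0, 0])
p = random_walk_with_restart(W, p0)
print(p)  # seed node ranks highest, the distant node lowest
```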
NASA Technical Reports Server (NTRS)
Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.
2009-01-01
A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Cloud Profiling Radar (CPR). The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a set of equations when the correlation distance of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of CALIPSO and CloudSat data support these assumptions. However, the correlation distance sometimes becomes large, which might be an indication of precipitation. The cloud correlation distance is equivalent to the de-correlation distance introduced by Hogan and Illingworth [2000] when the cloud fractions of both layers in a two-cloud-layer system are the same.
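The exponential-decay overlap assumption can be written as a one-line blend of maximum and random overlap, in the spirit of the Hogan and Illingworth de-correlation length; the cloud fractions and distances below are illustrative only.

```python
import numpy as np

def combined_cloud_cover(c1, c2, separation, decorr_length):
    """Total cover of two cloud layers, blending maximum and random overlap
    with weight alpha = exp(-separation / decorr_length): nearby layers
    overlap maximally, widely separated layers overlap randomly."""
    alpha = np.exp(-separation / decorr_length)
    c_max = max(c1, c2)            # maximum overlap
    c_rand = c1 + c2 - c1 * c2     # random overlap
    return alpha * c_max + (1 - alpha) * c_rand

near = combined_cloud_cover(0.4, 0.3, separation=0.5, decorr_length=2.0)
far = combined_cloud_cover(0.4, 0.3, separation=8.0, decorr_length=2.0)
print(near, far)  # far-separated layers give more total cover (closer to random)
```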
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
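A small sketch of the environmental random-effect construction, assuming a Gaussian radial basis function over 2-D coordinates (the coordinates and length scale are invented; FaST-LMM's internals are not reproduced here):

```python
import numpy as np

def rbf_covariance(coords, length_scale=1.0):
    """Gaussian radial basis function covariance over spatial locations,
    a proxy for shared environment in a two-random-effect LMM."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

coords = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])  # hypothetical locations
K = rbf_covariance(coords)
print(K[0, 1] > K[0, 2])  # nearby individuals share more environmental covariance
```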
Reliable gain-scheduled control of discrete-time systems and its application to CSTR model
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.
2016-10-01
This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.
NASA Astrophysics Data System (ADS)
Lee, Gibbeum; Cho, Yeunwoo
2018-01-01
A new semi-analytical approach is presented for solving the matrix eigenvalue problem, or the integral equation, in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, which are related to the so-called prolate spheroidal wave functions (PSWFs). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. Substituting these into the PSWF differential equation yields a much smaller matrix eigenvalue problem than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, at the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
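For contrast with the paper's PSWF route, the direct numerical K-L representation it improves upon can be sketched as an eigendecomposition of the sample covariance matrix (toy data below, not real wave records):

```python
import numpy as np

def kl_modes(samples):
    """Direct numerical Karhunen-Loeve: eigendecompose the sample covariance
    matrix and order the modes by decreasing eigenvalue (energy)."""
    X = samples - samples.mean(axis=0)
    C = X.T @ X / (len(X) - 1)      # sample covariance matrix
    w, V = np.linalg.eigh(C)        # ascending eigenvalues
    return w[::-1], V[:, ::-1]      # descending

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 64)
# Toy stand-in for irregular wave records: two random-amplitude components
# plus small noise (illustrative data, not ocean-wave statistics)
data = (rng.standard_normal((200, 1)) * np.sin(2 * np.pi * t) +
        rng.standard_normal((200, 1)) * np.cos(6 * np.pi * t) +
        0.01 * rng.standard_normal((200, 64)))
w, V = kl_modes(data)
print(w[:2].sum() / w.sum())  # two modes carry nearly all the variance
```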
Multiple Scattering of Waves in Discrete Random Media.
1987-12-31
expanding the two-body correlation functions in Legendre polynomials. This permits us to consider the angular correlations that exist for non-spherical scatterers, with a scatterer fixed at the origin, after the angular and radial parts of the translation matrix have been absorbed in the integration.
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. RPA-type methods, however, have a significantly higher computational cost and, similarly to correlated quantum-chemical methods, are characterized by slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy: one based on a more traditional complete basis set extrapolation, and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, on six points of the potential-energy surface of the methane-formaldehyde complex, and on reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
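A traditional complete basis set extrapolation of the kind mentioned here is commonly a two-point inverse-cubic formula; a sketch with made-up energies that follow the assumed model exactly, not the paper's RPA data:

```python
def cbs_extrapolate(e_x, e_y, x, y):
    """Two-point inverse-cubic complete-basis-set extrapolation of the
    correlation energy, assuming E_X = E_CBS + A / X**3:
    E_CBS = (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Illustrative energies (hartree) generated from E_CBS = -0.30, A = 0.5
e3 = -0.30 + 0.5 / 3 ** 3   # cardinal number X = 3
e4 = -0.30 + 0.5 / 4 ** 3   # cardinal number X = 4
print(cbs_extrapolate(e4, e3, 4, 3))  # ≈ -0.30, the assumed basis-set limit
```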
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about the localisation of eigenvectors, the level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
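A quick numerical illustration of Gaussian orthogonal ensemble level-spacing statistics (a generic sketch with a crude global unfolding over the spectral bulk, not the graph ensembles studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

def goe_spacings(n, trials):
    """Nearest-neighbour level spacings of GOE matrices, normalized to unit
    mean; their distribution follows the Wigner surmise (level repulsion)
    rather than the Poisson law of uncorrelated spectra."""
    out = []
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2                    # real symmetric (GOE-type) matrix
        levels = np.sort(np.linalg.eigvalsh(h))
        out.append(np.diff(levels[n // 4 : -n // 4]))  # bulk of the spectrum
    s = np.concatenate(out)
    return s / s.mean()

s = goe_spacings(100, 20)
print(np.mean(s < 0.05))  # level repulsion makes very small spacings rare
```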
Near-optimal matrix recovery from random linear measurements.
Romanov, Elad; Gavish, Matan
2018-06-25
In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix [Formula: see text] from [Formula: see text] measurements [Formula: see text], where each [Formula: see text] is an M-by-N measurement matrix with i.i.d. random entries, [Formula: see text] We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object [Formula: see text] to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the [Formula: see text] plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.
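The singular-value shrinkage at the heart of the algorithm can be illustrated with plain soft thresholding of singular values; this simple stand-in is not the paper's optimal shrinker or its AMP iteration, and all data below are synthetic.

```python
import numpy as np

def singular_value_shrink(Y, threshold):
    """Soft singular-value shrinkage: SVD the input, subtract the threshold
    from every singular value (flooring at zero), and reassemble."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - threshold, 0.0)) @ Vh

rng = np.random.default_rng(6)
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20)) * 3
noisy = low_rank + 0.3 * rng.standard_normal((30, 20))
denoised = singular_value_shrink(noisy, threshold=3.0)
err_noisy = np.linalg.norm(noisy - low_rank)
err_denoised = np.linalg.norm(denoised - low_rank)
print(err_denoised < err_noisy)  # shrinkage pulls the estimate toward the signal
```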
NASA Astrophysics Data System (ADS)
Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun
2018-03-01
In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated in a cascade-coupled semiconductor ring lasers (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.
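The confusion-diffusion structure described (pixel permutation driven by a control matrix, then XOR with a random bit stream) can be sketched generically; here a seeded NumPy PRNG stands in for the laser-generated physical random bits, and the image is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

def encrypt(img, perm, keystream):
    """Confusion (reorder pixel positions with a permutation derived from a
    control matrix) followed by diffusion (XOR with a random bit stream)."""
    flat = img.ravel()[perm]
    return np.bitwise_xor(flat, keystream).reshape(img.shape)

def decrypt(cipher, perm, keystream):
    """Undo diffusion, then invert the permutation."""
    flat = np.bitwise_xor(cipher.ravel(), keystream)
    out = np.empty_like(flat)
    out[perm] = flat
    return out.reshape(cipher.shape)

img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
perm = rng.permutation(img.size)                         # "control matrix"
keystream = rng.integers(0, 256, img.size, dtype=np.uint8)  # "bit stream"
restored = decrypt(encrypt(img, perm, keystream), perm, keystream)
print(np.array_equal(restored, img))  # round trip recovers the image exactly
```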
Statistical Approaches to Adjusting Weights for Dependent Arms in Network Meta-analysis.
Su, Yu-Xuan; Tu, Yu-Kang
2018-05-22
Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, in which patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body, split-mouth, and cross-over designs, in which each patient may receive more than one treatment. Data from treatment arms within these trials are no longer independent, so the correlations between dependent arms need to be accounted for in the statistical analyses. Ignoring these correlations may result in incorrect conclusions. The main objective of this study is to develop statistical approaches to adjusting weights for dependent arms within special-design trials. In this study, we demonstrate three approaches: the data augmentation approach, the adjusting-variance approach, and the reducing-weight approach. All three can be readily implemented in current statistical tools such as R and Stata. An example of periodontal regeneration was used to demonstrate how these approaches can be undertaken and implemented within statistical software packages, and to compare results across approaches. The adjusting-variance approach can be implemented within the network package in Stata, while the reducing-weight approach requires computer programming to set up the within-study variance-covariance matrix. This article is protected by copyright. All rights reserved.
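The adjusting-variance idea can be illustrated with a design-effect-style variance inflation; the functional form and the correlation parameter `rho` below are assumptions for illustration, not the paper's exact formulas:

```python
# Illustrative adjusting-variance sketch (assumed form): for a trial
# contributing m dependent comparisons with within-patient correlation rho,
# inflate the variance so the arm's inverse-variance weight is reduced.
def adjusted_weight(variance, m, rho):
    v_adj = variance * (1 + (m - 1) * rho)   # design-effect-style inflation
    return 1.0 / v_adj

w_indep = adjusted_weight(0.04, m=1, rho=0.5)   # independent arm: weight 25
w_dep   = adjusted_weight(0.04, m=2, rho=0.5)   # two correlated arms: weight shrinks
```

The dependent arm receives less weight than an independent arm with the same raw variance, which is the intended correction.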
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmeyer, Michaela; Guterding, Daniel; Hirschfeld, P. J.
2016-12-21
In the framework of a multiorbital Hubbard model description of superconductivity, a widely used matrix formulation of the superconducting pairing interaction is designed to treat spin, charge, and orbital fluctuations within the random phase approximation (RPA). In terms of Feynman diagrams, this takes into account particle-hole ladder and bubble contributions, as expected. It turns out, however, that this matrix formulation also generates additional terms which have the diagrammatic structure of vertex corrections. We examine these terms and discuss the relationship between the matrix-RPA superconducting pairing interaction and the Feynman diagrams that it sums.
Random walks with long-range steps generated by functions of Laplacian matrices
NASA Astrophysics Data System (ADS)
Riascos, A. P.; Michelitsch, T. M.; Collet, B. A.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2018-04-01
In this paper, we explore different Markovian random walk strategies on networks with transition probabilities between nodes defined in terms of functions of the Laplacian matrix. We generalize random walk strategies with local information in the Laplacian matrix, which describes the connections of a network, to a dynamics determined by functions of this matrix. The resulting processes are non-local, allowing transitions of the random walker from one node to nodes beyond its nearest neighbors. We find that only two types of Laplacian functions are admissible, with distinct behaviors for long-range steps in the infinite network limit: type (i) functions generate Brownian motions, type (ii) functions Lévy flights. For this asymptotic long-range step behavior, only the lowest non-vanishing order of the Laplacian function is relevant: first order for type (i) and fractional order for type (ii) functions. In the first part, we discuss spectral properties of the Laplacian matrix and a series of relations, maintained by a particular class of functions, that allow one to define random walks on any type of undirected connected network. Having established these general properties, we explore the characteristics of random walk strategies that emerge from particular cases with functions defined in terms of exponentials, logarithms, and powers of the Laplacian, as well as the relations of these dynamics to non-local strategies such as Lévy flights and fractional transport. Finally, we analyze the global capacity of these random walk strategies to explore networks such as lattices, trees, and different types of random and complex networks.
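A minimal sketch of such a non-local walk, assuming a cycle graph and the fractional Laplacian L^α (a type (ii) function) computed spectrally; the off-diagonal entries of −L^α are non-negative for 0 < α < 1 and yield valid long-range transition probabilities:

```python
import numpy as np

# Cycle graph on n nodes: Laplacian L = D - A
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Fractional Laplacian L^alpha via the spectral decomposition
alpha = 0.5
w, V = np.linalg.eigh(L)
L_alpha = V @ np.diag(np.clip(w, 0, None) ** alpha) @ V.T

# Non-local transition probabilities from the off-diagonal entries of -L^alpha,
# normalized row by row
P = -L_alpha.copy()
np.fill_diagonal(P, 0.0)
P = P / P.sum(axis=1, keepdims=True)
```

Unlike the nearest-neighbor walk, P assigns a positive probability of hopping directly to the antipodal node of the cycle.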
System matrix computation vs storage on GPU: A comparative study in cone beam CT.
Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2018-02-01
Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphical processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage showed a performance similar to the on-the-fly approach, while still relying on symmetries; overall, it yielded the lowest relative performance.
On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix, and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e., $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
Logistic regression of family data from retrospective study designs.
Whittemore, Alice S; Halpern, Jerry
2003-11-01
We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to consistently estimate the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate for beta(RE) and a consistent estimate for its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology.
We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.
Prefrontal gray matter volume predicts metacognitive accuracy following traumatic brain injury.
Grossner, Emily C; Bernier, Rachel A; Brenner, Einat K; Chiou, Kathy S; Hillary, Frank G
2018-05-01
To examine metacognitive ability (MC) following moderate to severe traumatic brain injury (TBI) using an empirical assessment approach and to determine the relationship between alterations in gray matter volume (GMV) and MC. A sample of 62 individuals (TBI n = 34; healthy control [HC] n = 28) were included in the study. Neuroimaging and neuropsychological data were collected for all participants during the same visit. MC was quantified using an approach borrowed from signal detection theory (Type II area under the receiver operating characteristic curve calculation) to evaluate judgments during a modified version of the 3rd edition of the Wechsler Adult Intelligence Scale's Matrix Reasoning subtest where half of the items were presented randomly and half were presented in the order of increasing difficulty. Retrospective confidence judgments were collected on an item-by-item basis. Brain volumetric analyses were conducted using FreeSurfer software. Analyses of the modified Matrix Reasoning task data demonstrated that HCs significantly outperformed TBIs (ordered: d = .63; random: d = .58). There was a significant difference between groups for MC for the randomly presented stimuli (d = .54) but not the ordered stimuli. There was an association between GMV and MC in the TBI group between the right orbital region and MC (R2 = .11). In the HC group, there were associations between the left posterior (R2 = .17), left orbital (R2 = .29), and left dorsolateral (R2 = .21) regions and MC. These results are consistent with those of previous research on MC in the cognitive neurosciences, but this study demonstrates that injury may moderate the regional contributions to MC. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
NASA Astrophysics Data System (ADS)
Gudder, Stanley
2008-07-01
A new approach to quantum Markov chains is presented. We first define a transition operation matrix (TOM) as a matrix whose entries are completely positive maps whose column sums form a quantum operation. A quantum Markov chain is defined to be a pair (G,E) where G is a directed graph and E =[Eij] is a TOM whose entry Eij labels the edge from vertex j to vertex i. We think of the vertices of G as sites that a quantum system can occupy and Eij is the transition operation from site j to site i in one time step. The discrete dynamics of the system is obtained by iterating the TOM E. We next consider a special type of TOM called a transition effect matrix. In this case, there are two types of dynamics, a state dynamics and an operator dynamics. Although these two types are not identical, they are statistically equivalent. We next give examples that illustrate various properties of quantum Markov chains. We conclude by showing that our formalism generalizes the usual framework for quantum random walks.
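The TOM dynamics can be sketched with completely positive maps represented by Kraus operators. The two-site chain and the amplitude-damping Kraus pair below are illustrative choices, arranged so that each column of E sums to a trace-preserving quantum operation, as the definition requires:

```python
import numpy as np

# Kraus operators of an amplitude-damping channel: K0^† K0 + K1^† K1 = I,
# so each column of the TOM below sums to a trace-preserving operation.
p = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
K1 = np.array([[0, np.sqrt(p)], [0, 0]])

def cp(K, rho):               # completely positive map rho -> K rho K^†
    return K @ rho @ K.conj().T

# TOM E[i][j]: transition operation from site j to site i on a 2-site graph
E = [[lambda r: cp(K0, r), lambda r: cp(K1, r)],
     [lambda r: cp(K1, r), lambda r: cp(K0, r)]]

# State: one positive matrix per site, total trace 1
rho = [np.array([[0.5, 0.0], [0.0, 0.5]]), np.zeros((2, 2))]
for _ in range(5):            # discrete dynamics: iterate the TOM
    rho = [sum(E[i][j](rho[j]) for j in range(2)) for i in range(2)]

total_trace = sum(np.trace(r).real for r in rho)
```

Iterating the TOM redistributes the state over the sites while preserving the total trace, mirroring how a classical stochastic matrix redistributes probability.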
Universality and Thouless energy in the supersymmetric Sachdev-Ye-Kitaev model
NASA Astrophysics Data System (ADS)
García-García, Antonio M.; Jia, Yiyang; Verbaarschot, Jacobus J. M.
2018-05-01
We investigate the supersymmetric Sachdev-Ye-Kitaev (SYK) model, N Majorana fermions with infinite-range interactions in 0+1 dimensions. We have found that, close to the ground state E ≈ 0, discrete symmetries alter qualitatively the spectral properties with respect to the non-supersymmetric SYK model. The average spectral density at finite N, which we compute analytically and numerically, grows exponentially with N for E ≈ 0. However, the chiral condensate, which is normalized with respect to the total number of eigenvalues, vanishes in the thermodynamic limit. Slightly above E ≈ 0, the spectral density grows exponentially with the energy. Deep in the quantum regime, corresponding to the first O(N) eigenvalues, the average spectral density is universal and well described by random matrix ensembles with chiral and superconducting discrete symmetries. The dynamics for E ≈ 0 is investigated via level fluctuations. Also in this case we find excellent agreement with the prediction of chiral and superconducting random matrix ensembles for eigenvalue separations smaller than the Thouless energy, which seems to scale linearly with N. Deviations beyond the Thouless energy, which describes how ergodicity is approached, are universally characterized by a quadratic growth of the number variance. In the time domain, we have found analytically that the spectral form factor g(t), obtained from the connected two-level correlation function of the unfolded spectrum, decays as 1/t² for times shorter than but comparable to the Thouless time, with g(0) related to the coefficient of the quadratic growth of the number variance. Our results provide further support that quantum black holes are ergodic and therefore can be classified by random matrix theory.
Targeting functional motifs of a protein family
NASA Astrophysics Data System (ADS)
Bhadola, Pradeep; Deo, Nivedita
2016-10-01
The structural organization of a protein family is investigated by devising a method based on random matrix theory (RMT), which uses the physiochemical properties of the amino acids together with multiple sequence alignment. A graphical method to represent protein sequences using physiochemical properties is devised that gives a fast, easy, and informative way of comparing the evolutionary distances between protein sequences. A correlation matrix associated with each property is calculated, where the noise reduction and information filtering are done using RMT involving an ensemble of Wishart matrices. The analysis of the eigenvalue statistics of the correlation matrix for the β-lactamase family shows the universal features observed in the Gaussian orthogonal ensemble (GOE). The property-based approach captures the short- as well as the long-range correlations (approximately following GOE) between the eigenvalues, whereas the previous approach (treating amino acids as characters) gives the usual short-range correlations, while the long-range correlations are the same as those of an uncorrelated series. The distribution of the eigenvector components for the eigenvalues outside the bulk (RMT bound) deviates significantly from RMT predictions and contains important information about the system. The information content of each eigenvector of the correlation matrix is quantified by introducing an entropic estimate, which shows that for the β-lactamase family the smallest eigenvectors (low eigenmodes) are highly localized as well as informative. These small eigenvectors, when processed, give clusters involving positions that have well-defined biological and structural importance, matching experiments. The approach is crucial for the recognition of structural motifs, as shown for β-lactamase (and other families), and selectively identifies the important positions as targets for deactivating (activating) the enzymatic actions.
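The noise-filtering step, comparing the eigenvalues of the correlation matrix against the Marchenko-Pastur bounds expected for an uncorrelated series, can be sketched as follows; random data stand in for the property-encoded alignment, so essentially all eigenvalues should fall inside the RMT bulk:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L_pos = 400, 50                  # sequences x alignment positions (illustrative)
X = rng.standard_normal((N, L_pos)) # stand-in for property-encoded MSA columns
X = (X - X.mean(0)) / X.std(0)      # standardize each position
C = (X.T @ X) / N                   # position-position correlation matrix

# Marchenko-Pastur bounds for an uncorrelated series with q = L/N
q = L_pos / N
lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
evals = np.linalg.eigvalsh(C)
outside = int(np.sum((evals < lam_min) | (evals > lam_max)))
```

For real alignment data, the eigenvalues found outside these bounds carry the family-specific information analyzed in the abstract.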
On the equilibrium state of a small system with random matrix coupling to its environment
NASA Astrophysics Data System (ADS)
Lebowitz, J. L.; Pastur, L.
2015-07-01
We consider a random matrix model of interaction between a small n-level system, S, and its environment, an N-level heat reservoir, R. The interaction between S and R is modeled by a tensor product of a fixed n×n matrix and an N×N Hermitian random matrix. We show that under certain 'macroscopicity' conditions on R, the reduced density matrix of the system, $\rho_S = \mathrm{Tr}_R\, \rho^{(\mathrm{eq})}_{S\cup R}$, is given by $\rho_S^{(c)} \sim \exp\{-\beta H_S\}$, where $H_S$ is the Hamiltonian of the isolated system. This holds for all strengths of the interaction and thus gives some justification for using $\rho_S^{(c)}$ to describe some nano-systems, like biopolymers, in equilibrium with their environment (Seifert 2012 Rep. Prog. Phys. 75 126001). Our results extend those obtained previously in (Lebowitz and Pastur 2004 J. Phys. A: Math. Gen. 37 1517-34) and (Lebowitz et al 2007 Contemporary Mathematics (Providence, RI: American Mathematical Society) pp 199-218) for a special two-level system.
NASA Technical Reports Server (NTRS)
Kim, H. Alicia; Hardie, Robert; Yamakov, Vesselin; Park, Cheol
2015-01-01
This paper is the second part of a two-part series where the first part presents a molecular dynamics model of a single Boron Nitride Nanotube (BNNT) and this paper scales up to multiple BNNTs in a polymer matrix. This paper presents finite element (FE) models to investigate the effective elastic and piezoelectric properties of (BNNT) nanocomposites. The nanocomposites studied in this paper are thin films of polymer matrix with aligned co-planar BNNTs. The FE modelling approach provides a computationally efficient way to gain an understanding of the material properties. We examine several FE models to identify the most suitable models and investigate the effective properties with respect to the BNNT volume fraction and the number of nanotube walls. The FE models are constructed to represent aligned and randomly distributed BNNTs in a matrix of resin using 2D and 3D hollow and 3D filled cylinders. The homogenisation approach is employed to determine the overall elastic and piezoelectric constants for a range of volume fractions. These models are compared with an analytical model based on Mori-Tanaka formulation suitable for finite length cylindrical inclusions. The model applies to primarily single-wall BNNTs but is also extended to multi-wall BNNTs, for which preliminary results will be presented. Results from the Part 1 of this series can help to establish a constitutive relationship for input into the finite element model to enable the modeling of multiple BNNTs in a polymer matrix.
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
Two-component Structure in the Entanglement Spectrum of Highly Excited States
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Chamon, Claudio; Hamma, Alioscia; Mucciolo, Eduardo
We study the entanglement spectrum of highly excited eigenstates of two known models which exhibit a many-body localization transition, namely the one-dimensional random-field Heisenberg model and the quantum random energy model. Our results indicate that the entanglement spectrum shows a "two-component" structure: a universal part that is associated with random matrix theory, and a non-universal part that is model dependent. The non-universal part manifests the deviation of the highly excited eigenstate from a true random state even in the thermalized phase where the eigenstate thermalization hypothesis holds. The fraction of the spectrum containing the universal part decreases continuously as one approaches the critical point and vanishes in the localized phase in the thermodynamic limit. We use the universal part fraction to construct a new order parameter for the many-body delocalized-to-localized transition. Two toy models based on Rokhsar-Kivelson type wavefunctions are constructed and their entanglement spectra are shown to exhibit the same structure.
On the efficiency of a randomized mirror descent algorithm in online optimization problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.
2015-04-01
A randomized online version of the mirror descent method is proposed. It differs from the existing versions in the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex, rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and to propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with a sparse matrix.
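A toy version of such a randomized scheme, assuming entropic mirror descent on the unit simplex with one randomly sampled component updated per step; the objective, step size, and importance weighting are illustrative, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Minimize f(x) = 0.5*||x - c||^2 over the unit simplex with entropic
# (multiplicative-weights) mirror descent; only one randomly chosen
# component is updated per step.
c = np.array([0.7, 0.2, 0.1])          # minimizer, already on the simplex
x = np.ones(3) / 3
f0 = 0.5 * np.sum((x - c) ** 2)        # objective at the uniform start
eta = 0.2
for t in range(5000):
    g = x - c                           # gradient of f at x
    k = rng.integers(3)                 # randomly chosen component
    x[k] *= np.exp(-eta * 3 * g[k])     # factor 3 = importance weight for sampling
    x /= x.sum()                        # multiplicative step stays on the simplex
f_val = 0.5 * np.sum((x - c) ** 2)
```

The multiplicative update keeps the iterate strictly inside the simplex, so no separate projection step is needed.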
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector, and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l×m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with L_q regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
Oskouyi, Amirhossein Biabangard; Sundararaj, Uttandaraman; Mertiny, Pierre
2014-01-01
In this study, a three-dimensional continuum percolation model was developed based on a Monte Carlo simulation approach to investigate the percolation behavior of an electrically insulating matrix reinforced with conductive nano-platelet fillers. The conductivity behavior of composites rendered conductive by randomly dispersed conductive platelets was modeled by developing a three-dimensional finite element resistor network. Parameters related to the percolation threshold and a power-law describing the conductivity behavior were determined. The piezoresistivity behavior of conductive composites was studied employing a reoriented resistor network emulating a conductive composite subjected to mechanical strain. The effects of the governing parameters, i.e., electron tunneling distance, conductive particle aspect ratio, and particle size, on the conductivity behavior were examined. PMID:28788580
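The percolation test at the core of such a Monte Carlo model can be sketched with a union-find over randomly placed particles. Disks in 2D and the chosen connection distance below stand in for the platelets and the electron tunneling distance of the 3D model:

```python
import numpy as np

rng = np.random.default_rng(7)

def percolates(n_particles, box=1.0, connect_dist=0.15):
    """Place particles at random in a square box and test whether a
    connected cluster spans from the left edge to the right edge."""
    pts = rng.random((n_particles, 2)) * box
    parent = list(range(n_particles + 2))      # +2 virtual edge nodes
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]      # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    LEFT, RIGHT = n_particles, n_particles + 1
    for i in range(n_particles):
        if pts[i, 0] < connect_dist:
            union(i, LEFT)                     # touches the left electrode
        if pts[i, 0] > box - connect_dist:
            union(i, RIGHT)                    # touches the right electrode
        for j in range(i):                     # connect particles within range
            if np.linalg.norm(pts[i] - pts[j]) < connect_dist:
                union(i, j)
    return find(LEFT) == find(RIGHT)

sparse_result = percolates(10)    # well below threshold: rarely spans
dense_result = percolates(400)    # well above threshold: spans
```

Sweeping the particle count (or volume fraction) and averaging many such trials estimates the percolation threshold.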
Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.
Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang
2017-01-01
Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations can address this problem from different aspects: visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through a propagation of graph random walk. In particular, we construct multiple graphs, each corresponding to an individual view, and present a cross-view fusion approach based on graph random walk to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged in the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smooth similarity metric, where affinities between dissimilar samples are enlarged by excessively conducting cross-view fusion. Thus, we devise a heuristic approach to controlling the iteration number in the fusion process in order to avoid over-smoothness. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
Two-Component Structure in the Entanglement Spectrum of Highly Excited States
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Chamon, Claudio; Hamma, Alioscia; Mucciolo, Eduardo R.
2015-12-01
We study the entanglement spectrum of highly excited eigenstates of two known models that exhibit a many-body localization transition, namely the one-dimensional random-field Heisenberg model and the quantum random energy model. Our results indicate that the entanglement spectrum shows a "two-component" structure: a universal part that is associated with random matrix theory, and a nonuniversal part that is model dependent. The nonuniversal part manifests the deviation of the highly excited eigenstate from a true random state even in the thermalized phase where the eigenstate thermalization hypothesis holds. The fraction of the spectrum containing the universal part decreases as one approaches the critical point and vanishes in the localized phase in the thermodynamic limit. We use the universal part fraction to construct an order parameter for measuring the degree of randomness of a generic highly excited state, which is also a promising candidate for studying the many-body localization transition. Two toy models based on Rokhsar-Kivelson type wave functions are constructed and their entanglement spectra are shown to exhibit the same structure.
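The entanglement spectrum of a bipartite pure state is obtained from the Schmidt decomposition; a minimal sketch for a random state, whose spectrum should lie entirely in the universal, random-matrix component discussed above:

```python
import numpy as np

rng = np.random.default_rng(3)
nA, nB = 8, 32                    # bipartition dimensions, nA << nB
psi = rng.standard_normal((nA, nB)) + 1j * rng.standard_normal((nA, nB))
psi /= np.linalg.norm(psi)        # random pure state on A ⊗ B

s = np.linalg.svd(psi, compute_uv=False)
schmidt = s ** 2                  # eigenvalues of the reduced density matrix
ent_spectrum = -np.log(schmidt)   # entanglement spectrum ("energies")
S_vN = -np.sum(schmidt * np.log(schmidt))   # entanglement entropy
```

For a random state the entropy sits near the Page value, just below ln(nA); model-dependent states deviate from this in their non-universal component.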
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap of the GRCM and extreme learning machine (ELM), a vector-based classifier for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years, driven by the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, so the optimization may become trapped in local maxima. Whole-element optimization approaches employ all elements for the optimization, which improves the signal-to-noise ratio; however, because more randomness is introduced into the process, these optimizations take longer to converge than single-element-based approaches. Building on the advantages of both single-element-based and whole-element optimization approaches, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
Eigenvalue density of cross-correlations in Sri Lankan financial market
NASA Astrophysics Data System (ADS)
Nilantha Ranasinghe, K. G. D. R.; Malmini, P. K. C.
2007-05-01
We apply universal properties of the Gaussian orthogonal ensemble (GOE) of random matrices, namely the spectral properties, eigenvalue distribution, and eigenvalue spacings predicted by random matrix theory (RMT), to compare cross-correlation matrix estimators from emerging market data. The daily stock prices of the Sri Lankan All Share Price Index and Milanka Price Index from August 2004 to March 2005 were analyzed. Most eigenvalues in the spectrum of the cross-correlation matrix of stock price changes agree with the universal predictions of RMT. We find that the cross-correlation matrix satisfies the universal properties of the GOE of real symmetric random matrices. The eigenvalue distribution follows the RMT predictions in the bulk, but there are some deviations at the large eigenvalues. The nearest-neighbor and next-nearest-neighbor spacings of the eigenvalues were examined and found to follow GOE universality. Applying RMT with deterministic correlations, we found that each eigenvalue arising from deterministic correlations is observed at values that are repelled from the bulk distribution.
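As a rough illustration of the comparison being made here, one can diagonalise the sample correlation matrix of synthetic uncorrelated returns and check how many eigenvalues fall inside the Marčenko-Pastur bulk predicted by RMT. This is a generic sketch with arbitrary sample sizes (our own construction, not the paper's analysis):

```python
import numpy as np

def marchenko_pastur_bounds(n_obs, n_assets):
    # Support edges of the Marchenko-Pastur law for the eigenvalues of a
    # pure-noise correlation matrix, with ratio q = n_assets / n_obs < 1.
    q = n_assets / n_obs
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

def correlation_eigenvalues(returns):
    # returns: (n_obs, n_assets) array of return time series.
    c = np.corrcoef(returns, rowvar=False)
    return np.sort(np.linalg.eigvalsh(c))

rng = np.random.default_rng(0)
r = rng.standard_normal((1000, 50))        # synthetic uncorrelated "returns"
lam = correlation_eigenvalues(r)
lo, hi = marchenko_pastur_bounds(1000, 50)
# For pure noise, almost all eigenvalues lie inside the MP support;
# eigenvalues above hi signal genuine correlation structure.
frac_inside = np.mean((lam >= lo) & (lam <= hi))
```

Eigenvalues above the upper edge in real market data correspond to genuine correlation modes (e.g. the market mode), which is the kind of deviation the abstract reports.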
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
NASA Astrophysics Data System (ADS)
Pato, Mauricio P.; Oshanin, Gleb
2013-03-01
We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2 / ( (1/n) ∑_{j=1}^n x_j^2 ), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4], and equals zero outside of this support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for the Gaussian unitary ensemble (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n, and analyse their behaviour.
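For a numerical check of this setup, w can be sampled directly by diagonalising Gaussian random matrices. The sketch below (our construction, with illustrative values n = 100 and 200 samples) draws w for the GUE case β = 2; by construction the per-matrix average of w over the eigenvalue choice is exactly 1:

```python
import numpy as np

def schmidt_like_w(n, beta=2, rng=None):
    # Draw one n x n Gaussian random matrix (beta = 1: GOE, beta = 2: GUE),
    # take its eigenvalues x_j, and form w = x_1^2 / ((1/n) * sum_j x_j^2)
    # for a randomly chosen eigenvalue x_1.
    rng = np.random.default_rng() if rng is None else rng
    if beta == 1:
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2.0
    else:  # beta == 2
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        h = (a + a.conj().T) / 2.0
    x = np.linalg.eigvalsh(h)              # real eigenvalues of Hermitian h
    x1 = rng.choice(x)                     # randomly chosen eigenvalue
    return n * x1 ** 2 / np.sum(x ** 2)

rng = np.random.default_rng(1)
samples = np.array([schmidt_like_w(100, beta=2, rng=rng) for _ in range(200)])
```

A histogram of such samples approaches the √((4 − w)/w) shape on [0, 4] already at moderate n.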
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Haynor, David R.; Thompson, Carol L.; Lein, Ed; Hawrylycz, Michael
2009-02-01
Understanding the geography of genetic expression in the mouse brain has opened previously unexplored avenues in neuroinformatics. The Allen Brain Atlas (www.brain-map.org) (ABA) provides genome-wide colorimetric in situ hybridization (ISH) gene expression images at high spatial resolution, all mapped to a common three-dimensional 200 μm³ spatial framework defined by the Allen Reference Atlas (ARA), and is a unique data set for studying expression-based structural and functional organization of the brain. The goal of this study was to facilitate an unbiased data-driven structural partitioning of the major structures in the mouse brain. We have developed an algorithm that uses nonnegative matrix factorization (NMF) to perform parts-based analysis of ISH gene expression images. The standard NMF approach and its variants are limited in their ability to flexibly integrate prior knowledge in the context of spatial data. In this paper, we introduce spatial connectivity as an additional regularization in NMF decomposition via the use of Markov random fields (mNMF). The mNMF algorithm alternates neighborhood updates with iterations of the standard NMF algorithm to exploit spatial correlations in the data. We present the algorithm and show the subdivisions of the hippocampus and somatosensory cortex obtained via this approach. The results are compared with established neuroanatomic knowledge. We also highlight novel gene-expression-based subdivisions of the hippocampus identified by using the mNMF algorithm.
Scalable non-negative matrix tri-factorization.
Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž
2017-01-01
Matrix factorization is a well-established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100 times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
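The objective being tri-factorized, ||X − U S Vᵀ||² with non-negative factors, can be minimized with standard multiplicative updates. The following is a generic serial NMTF sketch, not the paper's block-wise parallel algorithm; the ranks k1, k2 and matrix sizes are illustrative:

```python
import numpy as np

def nmtf(x, k1, k2, n_iter=200, eps=1e-9, rng=None):
    # Non-negative matrix tri-factorization:
    # X (m x n) ~ U (m x k1) @ S (k1 x k2) @ V.T (k2 x n), all factors >= 0,
    # fit with multiplicative updates derived from the KKT conditions.
    rng = np.random.default_rng() if rng is None else rng
    m, n = x.shape
    u = rng.random((m, k1))
    s = rng.random((k1, k2))
    v = rng.random((n, k2))
    for _ in range(n_iter):
        u *= (x @ v @ s.T) / (u @ s @ v.T @ v @ s.T + eps)
        v *= (x.T @ u @ s) / (v @ s.T @ u.T @ u @ s + eps)
        s *= (u.T @ x @ v) / (u.T @ u @ s @ v.T @ v + eps)
    return u, s, v

rng = np.random.default_rng(0)
x = rng.random((30, 20))                     # synthetic non-negative data
u, s, v = nmtf(x, k1=4, k2=3, rng=rng)
err = np.linalg.norm(x - u @ s @ v.T) / np.linalg.norm(x)
```

The block-wise scheme of the paper partitions X into submatrices and runs such updates in parallel while remaining mathematically equivalent to the serial form above.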
NASA Astrophysics Data System (ADS)
Morizet, N.; Godin, N.; Tang, J.; Maillet, E.; Fregonese, M.; Normand, B.
2016-03-01
This paper aims to propose a novel approach to classify acoustic emission (AE) signals deriving from corrosion experiments, even if embedded into a noisy environment. To validate this new methodology, synthetic data are first used throughout an in-depth analysis, comparing Random Forests (RF) to the k-Nearest Neighbor (k-NN) algorithm. Moreover, a new evaluation tool called the alter-class matrix (ACM) is introduced to simulate different degrees of uncertainty on labeled data for supervised classification. Then, tests on real cases involving noise and crevice corrosion are conducted, by preprocessing the waveforms including wavelet denoising and extracting a rich set of features as input of the RF algorithm. To this end, a software called RF-CAM has been developed. Results show that this approach is very efficient on ground truth data and is also very promising on real data, especially for its reliability, performance and speed, which are serious criteria for the chemical industry.
Random walk hierarchy measure: What is more hierarchical, a chain, a tree or a star?
Czégel, Dániel; Palla, Gergely
2015-01-01
Signs of hierarchy are prevalent in a wide range of systems in nature and society. One of the key problems is quantifying the importance of hierarchical organisation in the structure of the network representing the interactions or connections between the fundamental units of the studied system. Although a number of notable methods are already available, the vast majority treat all directed acyclic graphs as already maximally hierarchical. Here we propose a hierarchy measure based on random walks on the network. The novelty of our approach is that directed trees corresponding to multi-level pyramidal structures obtain higher hierarchy scores compared to directed chains and directed stars. Furthermore, in the thermodynamic limit the hierarchy measure of regular trees converges to a well-defined limit depending only on the branching number. When applied to real networks, our method is computationally very efficient, as the result can be evaluated with arbitrary precision by subsequent multiplications of the transition matrix describing the random walk process. In addition, the tests on real-world networks provided very intuitive results, e.g., the trophic levels obtained from our approach on a food web were highly consistent with former results from ecology. PMID:26657012
Scaled Particle Theory for Multicomponent Hard Sphere Fluids Confined in Random Porous Media.
Chen, W; Zhao, S L; Holovko, M; Chen, X S; Dong, W
2016-06-23
The formulation of scaled particle theory (SPT) is presented for a quite general model of fluids confined in random porous media, i.e., a multicomponent hard sphere (HS) fluid in a multicomponent hard sphere or a multicomponent overlapping hard sphere (OHS) matrix. The analytical expressions for pressure, Helmholtz free energy, and chemical potential are derived. The thermodynamic consistency of the proposed theory is established. Moreover, we show that there is an isomorphism between the SPT for a multicomponent system and that for a one-component system. Results from grand canonical ensemble Monte Carlo simulations are also presented for a binary HS mixture in a one-component HS or a one-component OHS matrix. The accuracy of various variants derived from the basic SPT formulation is appraised against the simulation results. Scaled particle theory, initially formulated for a bulk HS fluid, has not only provided an analytical tool for calculating thermodynamic properties of HS fluids but has also yielded very useful insight for elaborating other theoretical approaches such as the fundamental measure theory (FMT). We expect that the general SPT for multicomponent systems developed in this work can contribute to the study of confined fluids in a similar way.
Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model
NASA Astrophysics Data System (ADS)
Kanazawa, Takuya; Kieburg, Mario
2018-06-01
We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.
NASA Astrophysics Data System (ADS)
Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.
2010-01-01
A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the satellite-borne Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and cloud profiling radar. The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a cloud overlap matrix when the correlation length of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and CloudSat data (July 2006) support these assumptions, although the correlation length sometimes increases with separation distance when the cloud top height is large. The data also show that the correlation length depends on cloud top height, with a maximum when the cloud top height is 8 to 10 km. The cloud correlation length is equivalent to the decorrelation distance introduced by Hogan and Illingworth (2000) when the cloud fractions of both layers in a two-cloud-layer system are the same. The simple relationships derived in this study can be used to estimate the top-of-atmosphere irradiance difference caused by cloud fraction, uppermost cloud top, and cloud thickness vertical profile differences.
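The assumed exponential decay toward random overlap, in the style of Hogan and Illingworth (2000), can be written compactly. Here the decorrelation length plays the role of the cloud correlation length, and the layer fractions are illustrative values:

```python
import numpy as np

def combined_cloud_fraction(c1, c2, dz, decorr_length):
    # Total cloud fraction of two layers with fractions c1, c2 separated
    # by dz: blend maximum overlap and random overlap with a weight that
    # decays exponentially with separation distance.
    alpha = np.exp(-dz / decorr_length)
    c_max = max(c1, c2)                     # maximum (fully correlated) overlap
    c_rand = c1 + c2 - c1 * c2              # random (uncorrelated) overlap
    return alpha * c_max + (1 - alpha) * c_rand

# Coincident layers overlap maximally; widely separated layers randomly.
near = combined_cloud_fraction(0.3, 0.5, dz=0.0, decorr_length=2.0)
far = combined_cloud_fraction(0.3, 0.5, dz=100.0, decorr_length=2.0)
```

At dz = 0 the combined fraction equals max(c1, c2) = 0.5, and for dz much larger than the decorrelation length it approaches the random-overlap value c1 + c2 − c1·c2 = 0.65.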
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko
2015-11-10
This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian orthogonal ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached: for all parameter values studied, the numerical average of Monte Carlo-generated cross sections coincides with the result of the Verbaarschot, Weidenmüller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree of freedom ν_a is 2. The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained positive matrix factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. Good initialization schemes can improve convergence speed and determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
Random matrix theory and portfolio optimization in Moroccan stock exchange
NASA Astrophysics Data System (ADS)
El Alaoui, Marwane
2015-09-01
In this work, we use random matrix theory to analyze eigenvalues and detect the presence of pertinent information using the Marčenko-Pastur distribution. We study cross-correlations among stocks of the Casablanca Stock Exchange. Moreover, we clean the correlation matrix of noisy elements to see whether the gap between predicted risk and realized risk can be reduced. We also analyze the distributions of eigenvector components and their degree of deviation by computing the inverse participation ratio. This analysis is a way to understand the correlation structure among stocks of the Casablanca Stock Exchange portfolio.
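The cleaning step referred to here is commonly implemented as eigenvalue "clipping": eigenvalues inside the Marčenko-Pastur bulk are replaced by their average while the eigenvectors are kept. The sketch below applies this standard RMT filter to synthetic data (not necessarily the paper's exact procedure) and also computes the inverse participation ratio of the eigenvectors:

```python
import numpy as np

def clean_correlation(c, n_obs):
    # Replace eigenvalues below the Marchenko-Pastur upper edge by their
    # average ("clipping"), keep the eigenvectors, then re-normalise so
    # the cleaned matrix has unit diagonal.
    p = c.shape[0]
    lam_max = (1 + np.sqrt(p / n_obs)) ** 2
    w, v = np.linalg.eigh(c)
    noisy = w < lam_max
    w_clean = w.copy()
    if noisy.any():
        w_clean[noisy] = w[noisy].mean()
    c_clean = (v * w_clean) @ v.T
    d = np.sqrt(np.diag(c_clean))
    return c_clean / np.outer(d, d)

def ipr(vectors):
    # Inverse participation ratio of each eigenvector (one per column):
    # ~1/p for delocalised vectors, ~1 for vectors localised on one stock.
    return np.sum(vectors ** 4, axis=0)

rng = np.random.default_rng(0)
market = rng.standard_normal((500, 1))            # common "market" mode
r = 0.5 * market + rng.standard_normal((500, 40))
c = np.corrcoef(r, rowvar=False)
c_clean = clean_correlation(c, n_obs=500)
iprs = ipr(np.linalg.eigh(c)[1])
```

Risk predicted from c_clean is typically closer to realized out-of-sample risk than risk predicted from the raw estimator, which is the gap the abstract discusses.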
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
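The core idea of sketching can be illustrated on an overdetermined least-squares problem: a k × n Gaussian sketching matrix compresses the observations before solving, so the solve cost scales with k rather than with the number of observations n. This is a minimal sketch with synthetic data and arbitrary dimensions, not the RGA/PCGA machinery itself:

```python
import numpy as np

def sketched_lstsq(a, y, k, rng=None):
    # Compress n observations down to k << n rows with a Gaussian
    # "sketching" matrix S, then solve the reduced problem
    # min_x || S (A x - y) ||_2.
    rng = np.random.default_rng() if rng is None else rng
    n = a.shape[0]
    s = rng.standard_normal((k, n)) / np.sqrt(k)
    x, *_ = np.linalg.lstsq(s @ a, s @ y, rcond=None)
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal((5000, 10))        # many observations, few parameters
x_true = rng.standard_normal(10)
y = a @ x_true + 0.01 * rng.standard_normal(5000)
x_hat = sketched_lstsq(a, y, k=200, rng=rng)
```

Because a Gaussian sketch approximately preserves the geometry of the 10-dimensional column space of A, the solution of the 200-row problem is close to the full 5000-row solution.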
The supersymmetric method in random matrix theory and applications to QCD
NASA Astrophysics Data System (ADS)
Verbaarschot, Jacobus
2004-12-01
The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction to the supersymmetric method in Random Matrix Theory is given in the second and third lectures. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing, which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the penultimate lecture is the recent developments in the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the non-Hermiticity of the Dirac operator the usual supersymmetric method has not been successful in this case. However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.
Measurement Matrix Design for Phase Retrieval Based on Mutual Information
NASA Astrophysics Data System (ADS)
Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.
2018-01-01
In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
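The observation model b = |Ax| (plus noise) can be illustrated with the classical spectral initializer used by many phase recovery algorithms for random Gaussian measurements. This is a standard construction for the random-matrix baseline discussed above, not the paper's mutual-information-based design:

```python
import numpy as np

def spectral_init(a, b):
    # Spectral initializer for phase retrieval: the leading eigenvector of
    # Y = (1/m) * sum_i b_i^2 * a_i a_i^H (a_i = rows of the measurement
    # matrix) aligns with the SOI up to the inherent global phase.
    m = a.shape[0]
    y = (a.conj().T * b ** 2) @ a / m      # Y = A^H diag(b^2) A / m
    w, v = np.linalg.eigh(y)
    return v[:, -1]                        # eigenvector of largest eigenvalue

rng = np.random.default_rng(0)
n, m = 20, 600
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)          # SOI
a = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
b = np.abs(a @ x)                          # noiseless magnitude measurements
x0 = spectral_init(a, b)
# Alignment with the true SOI, invariant to the global phase ambiguity:
corr = np.abs(np.vdot(x0, x)) / (np.linalg.norm(x0) * np.linalg.norm(x))
```

With enough Gaussian measurements the initializer is strongly correlated with the SOI; deterministic designs such as those studied in the paper aim to retain this recoverability under physical constraints.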
Spatial coding-based approach for partitioning big spatial data in Hadoop
NASA Astrophysics Data System (ADS)
Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai
2017-09-01
Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying volume of spatial vector objects make it a significant challenge to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we propose a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole body of big spatial data based on a spatial coding matrix to create a sensing information set (SIS), including spatial code, size, count and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions across the cluster. Based on our approach, neighbouring spatial objects can be partitioned into the same block. At the same time, it also minimizes the data skew in the Hadoop distributed file system (HDFS). The presented approach, with a case study in this paper, is compared against random-sampling-based partitioning using three measurement standards, namely, spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique can improve the query performance of big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it is also able to efficiently support any other distributed big spatial data system.
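One common concrete realisation of spatial coding is a space-filling-curve key such as the Z-order (Morton) code, which keeps neighbouring grid cells close in key space so that range-partitioning the sorted keys tends to place nearby objects in the same block. This is a generic sketch of the idea, not the paper's coding matrix:

```python
def morton_code(ix, iy, bits=16):
    # Interleave the bits of grid-cell indices (ix, iy) into a single
    # Z-order key: ix contributes the even bit positions, iy the odd ones.
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)       # even bits from ix
        code |= ((iy >> b) & 1) << (2 * b + 1)   # odd bits from iy
    return code

# Objects bucketed by contiguous key ranges land near their spatial
# neighbours: cells (0,0), (1,0), (0,1), (1,1) share the lowest key range.
keys = sorted(morton_code(x, y) for x in range(2) for y in range(2))
```

Partition boundaries can then be chosen from the sorted key distribution (weighted by object size or count, as in the SIS) to balance the load across HDFS blocks.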
NASA Astrophysics Data System (ADS)
Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2017-12-01
We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^{α/2}, where L indicates a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks' with admissible interval 0 < α ≤ 2. We deduce probability-generating functions (network Green's functions) for the fractional random walk. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for dimensions d > α (recurrent for d ≤ α) of the lattice. As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, … and, in the range 1 ≤ α < 2, for dimensions d ≥ 2. Finally, for α = 2, Polya's classical recurrence theorem is recovered, namely the walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain, for the transient regime 0 < α < 1, closed-form expressions for the fractional lattice Green's function matrix containing the escape and ever-passage probabilities. The ever-passage probabilities (fractional lattice Green's functions) in the transient regime fulfil Riesz potential power-law decay asymptotics for nodes far from the departure node. The non-locality of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties, with the emergence of Lévy flights on large (infinite) lattices.
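The construction can be reproduced numerically on a small ring: build the simple Laplacian L of the cycle graph, take L^{α/2} through its eigendecomposition, and form the transition matrix P = I − K⁻¹ L^{α/2} with K the diagonal part of L^{α/2}. A sketch in our own notation:

```python
import numpy as np

def fractional_walk_ring(n, alpha):
    # Simple Laplacian L of the n-node cycle graph, its fractional power
    # L^{alpha/2} via the spectral decomposition, and the fractional walk
    # transition matrix P = I - K^{-1} L^{alpha/2}, K = diag(L^{alpha/2}).
    eye = np.eye(n)
    lap = 2 * eye - np.roll(eye, 1, axis=1) - np.roll(eye, -1, axis=1)
    w, v = np.linalg.eigh(lap)
    l_frac = (v * np.clip(w, 0.0, None) ** (alpha / 2)) @ v.T
    k = np.diag(l_frac)
    p = eye - l_frac / k[:, None]
    return l_frac, p

l_frac, p = fractional_walk_ring(50, alpha=1.0)
row_sums = p.sum(axis=1)
```

For 0 < α < 2 the off-diagonal entries of L^{α/2} are negative with heavy-tailed decay, so P assigns positive probability to long-range hops, which is the non-locality described above.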
Minimally invasive esthetic ridge preservation with growth-factor enhanced bone matrix.
Nevins, Marc L; Said, Sherif
2017-12-28
Extraction socket preservation procedures are critical to successful esthetic implant therapy. Conventional surgical approaches are technique sensitive and often result in alteration of the soft tissue architecture, which then requires additional corrective surgical procedures. This case series report presents the ability of flapless surgical techniques combined with a growth factor-enhanced bone matrix to provide esthetic ridge preservation at the time of extraction for compromised sockets. When considering esthetic dental implant therapy, preservation, or further enhancement of the available tissue support at the time of tooth extraction may provide an improved esthetic outcome with reduced postoperative sequelae and decreased treatment duration. Advances in minimally invasive surgical techniques combined with recombinant growth factor technology offer an alternative for bone reconstruction while maintaining the gingival architecture for enhanced esthetic outcome. The combination of freeze-dried bone allograft (FDBA) and rhPDGF-BB (platelet-derived growth factor-BB) provides a growth-factor enhanced matrix to induce bone and soft tissue healing. The use of a growth-factor enhanced matrix is an option for minimally invasive ridge preservation procedures for sites with advanced bone loss. Further studies including randomized clinical trials are needed to better understand the extent and limits of these procedures. The use of minimally invasive techniques with growth factors for esthetic ridge preservation reduces patient morbidity associated with more invasive approaches and increases the predictability for enhanced patient outcomes. By reducing the need for autogenous bone grafts the use of this technology is favorable for patient acceptance and ease of treatment process for esthetic dental implant therapy. © 2017 Wiley Periodicals, Inc.
Finding a Hadamard matrix by simulated annealing of spin vectors
NASA Astrophysics Data System (ADS)
Bayu Suksmono, Andriyan
2017-05-01
Reformulation of a combinatorial problem into the optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is the all-ones vector and whose remaining columns, called SH vectors, have an equal number of -1 and +1 entries. We define SH spin vectors as representations of the SH vectors, which play a role similar to spins in the Ising model. The topology of the lattice is generalized into a graph whose edges represent orthogonality relationships among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix without imposing orthogonality, we perform the SA. The transitions of Q are conducted by random exchanges of {+, -} spin pairs within the SH spin vectors, following the Metropolis update rule. Upon transition toward zero energy, the Q-matrix evolves along a Markov chain toward an orthogonal matrix, at which point the H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
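A generic Metropolis annealing loop over ±1 entries, with the energy given by the squared off-diagonal Gram entries (zero exactly when the rows are orthogonal), conveys the flavour of the search. This single-spin-flip sketch is simpler than the SH-vector pair-exchange scheme described above, and its step count and cooling schedule are illustrative:

```python
import numpy as np

def hadamard_energy(h):
    # "Energy" of a candidate +-1 matrix: sum of squared off-diagonal
    # entries of H H^T. Zero energy <=> rows mutually orthogonal
    # <=> h is a Hadamard matrix.
    g = h @ h.T
    np.fill_diagonal(g, 0)
    return float(np.sum(g ** 2))

def anneal_hadamard(n, steps=20000, t0=4.0, cooling=0.9995, rng=None):
    # Metropolis simulated annealing over +-1 entries with single spin
    # flips and geometric cooling; tracks the best state visited.
    rng = np.random.default_rng() if rng is None else rng
    h = rng.choice([-1.0, 1.0], size=(n, n))
    e = hadamard_energy(h)
    best_h, best_e = h.copy(), e
    t = t0
    for _ in range(steps):
        i, j = rng.integers(n), rng.integers(n)
        h[i, j] *= -1                      # propose one spin flip
        e_new = hadamard_energy(h)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new                      # accept
            if e < best_e:
                best_h, best_e = h.copy(), e
        else:
            h[i, j] *= -1                  # reject: undo the flip
        t *= cooling
        if best_e == 0:                    # Hadamard matrix found
            break
    return best_h, best_e

best_h, best_e = anneal_hadamard(4, rng=np.random.default_rng(0))
```

Reaching best_e == 0 certifies a Hadamard matrix of the given order; the SH-vector formulation of the paper restricts moves to balanced columns, shrinking the search space relative to this free-spin version.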
Portfolio optimization and the random magnet problem
NASA Astrophysics Data System (ADS)
Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.
2002-08-01
Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated, and therefore knowledge of the cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate of C which outperforms the standard estimate in terms of constructing an investment that carries a minimum level of risk.
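A minimal sketch of the random-matrix-theory estimate of C alluded to here is eigenvalue clipping: eigenvalues of the sample correlation matrix that fall inside the Marchenko-Pastur bulk expected for pure noise are flattened to their average, and the matrix is rebuilt. This is a generic illustration of the idea, not the authors' exact procedure; the data here are synthetic uncorrelated returns.

```python
import numpy as np

def clipped_correlation(returns):
    """Rebuild a correlation matrix after flattening the eigenvalues that fall
    inside the Marchenko-Pastur noise bulk (eigenvalue clipping)."""
    T, N = returns.shape
    lam_max = (1 + np.sqrt(N / T)) ** 2          # upper edge of the MP bulk
    C = np.corrcoef(returns, rowvar=False)
    w, V = np.linalg.eigh(C)
    noise = w < lam_max
    if noise.any():
        w[noise] = w[noise].mean()               # flatten the noise band (trace preserved)
    C_clean = V @ np.diag(w) @ V.T
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)              # restore the unit diagonal

rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 50))         # 500 periods, 50 synthetic assets
C_clean = clipped_correlation(returns)
```

The cleaned matrix can then be fed into a standard minimum-variance portfolio optimizer in place of the raw sample estimate.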
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and random matrix theory to analyze the statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by the logarithm of the PageRank probability at a given node. After the standard energy-level unfolding procedure we establish that the nearest-spacing distribution of PageRank probabilities is described by the Poisson law typical of integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French, and German. We argue that, due to the absence of level repulsion, the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that nearby PageRank probabilities fluctuate as independent random variables.
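The Poisson prediction can be checked numerically on synthetic data. The sketch below is an illustration, not the authors' Twitter/Wikipedia analysis: uncorrelated synthetic "levels" are unfolded by rescaling to unit mean spacing, after which Poisson statistics predict an exponential spacing distribution P(s) = e^(-s), whose variance is close to 1.

```python
import numpy as np

rng = np.random.default_rng(3)
E = np.sort(rng.random(20000))   # synthetic uncorrelated energy levels
s = np.diff(E)                   # nearest-neighbour spacings
s = s / s.mean()                 # unfold: rescale to unit mean spacing
# for Poisson (integrable) statistics, P(s) = exp(-s), so var(s) is close to 1
```

For real PageRank data the levels would be E = -log(p) at each node, and a proper local unfolding is needed before comparing to the Poisson or Wigner-Dyson forms.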
Analytic approximation for random muffin-tin alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, R.; Gray, L.J.; Kaplan, T.
1983-03-15
The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.
NASA Astrophysics Data System (ADS)
Longbiao, Li
2015-12-01
The matrix multicracking evolution of cross-ply ceramic-matrix composites (CMCs) has been investigated using an energy balance approach. The multicracking of cross-ply CMCs was classified into five modes: (1) mode 1: transverse multicracking; (2) mode 2: transverse multicracking and matrix multicracking with perfect fiber/matrix interface bonding; (3) mode 3: transverse multicracking and matrix multicracking with fiber/matrix interface debonding; (4) mode 4: matrix multicracking with perfect fiber/matrix interface bonding; and (5) mode 5: matrix multicracking with fiber/matrix interface debonding. The stress distributions of four cracking modes (modes 1, 2, 3, and 5) are analysed using the shear-lag model. The matrix multicracking evolution of modes 1, 2, 3, and 5 has been determined using the energy balance approach. The effects of ply thickness and fiber volume fraction on the matrix multicracking evolution of cross-ply CMCs have been investigated.
Forecasting extinction risk with nonstationary matrix models.
Gotelli, Nicholas J; Ellison, Aaron M
2006-02-01
Matrix population growth models are standard tools for forecasting population change and for managing rare species, but they are less useful for predicting extinction risk in the face of changing environmental conditions. Deterministic models provide point estimates of lambda, the finite rate of increase, as well as measures of matrix sensitivity and elasticity. Stationary matrix models can be used to estimate extinction risk in a variable environment, but they assume that the matrix elements are randomly sampled from a stationary (i.e., non-changing) distribution. Here we outline a method for using nonstationary matrix models to construct realistic forecasts of population fluctuation in changing environments. Our method requires three pieces of data: (1) field estimates of transition matrix elements, (2) experimental data on the demographic responses of populations to altered environmental conditions, and (3) forecasting data on environmental drivers. These three pieces of data are combined to generate a series of sequential transition matrices that emulate a pattern of long-term change in environmental drivers. Realistic estimates of population persistence and extinction risk can be derived from stochastic permutations of such a model. We illustrate the steps of this analysis with data from two populations of Sarracenia purpurea growing in northern New England. Sarracenia purpurea is a perennial carnivorous plant that is potentially at risk of local extinction because of increased nitrogen deposition. Long-term monitoring records or models of environmental change can be used to generate time series of driver variables under different scenarios of changing environments. Both manipulative and natural experiments can be used to construct a linking function that describes how matrix parameters change as a function of the environmental driver. 
This synthetic modeling approach provides quantitative estimates of extinction probability that have an explicit mechanistic basis.
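The sequential-matrices idea can be sketched in a few lines. The two-stage life cycle and the linear linking function below are invented for illustration (they are not the Sarracenia parameters): a transition matrix is generated for each year from a forecast driver series and applied in sequence to the stage vector.

```python
import numpy as np

def transition_matrix(driver):
    """Hypothetical 2-stage (juvenile, adult) matrix; fecundity declines linearly
    with the environmental driver (an invented linking function)."""
    fecundity = max(0.0, 2.0 - 1.5 * driver)
    return np.array([[0.1, fecundity],
                     [0.4, 0.8]])

def project(n0, drivers):
    """Apply a sequence of driver-dependent transition matrices to a stage vector."""
    n = np.asarray(n0, dtype=float)
    trajectory = [n.sum()]
    for d in drivers:
        n = transition_matrix(d) @ n
        trajectory.append(n.sum())
    return np.array(trajectory)

drivers = np.linspace(0.0, 1.0, 50)   # driver (e.g. N deposition) rising over 50 years
traj = project([100.0, 100.0], drivers)
```

Extinction risk would then be estimated by repeating such projections over stochastic permutations of the matrix sequence and counting trajectories that fall below a quasi-extinction threshold.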
Universal shocks in the Wishart random-matrix ensemble.
Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr
2013-05-01
We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.
Kumar, Keshav
2017-11-01
Multivariate curve resolution alternating least squares (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least squares (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; in principle, however, this approach requires that the data arise from a sequential process. The initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm, which demands that each factor have pure variables in the data set. Despite these limitations, the existing approaches have been quite successful for initiating MCR-ALS analysis. The present work proposes an alternative approach: initialising the spectral variables by generating random variables within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach requires neither pure variables for each component of the multicomponent system nor a concentration direction that follows a sequential process. The approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for certain fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores correlate well with the experimental results. In summary, the present work proposes an alternative way to initiate MCR-ALS analysis.
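The proposed random initialisation can be sketched as follows. This is a schematic reconstruction under stated assumptions (two factors, synthetic nonnegative data, non-negativity imposed by simple clipping), not the authors' implementation:

```python
import numpy as np

def random_spectral_init(D, n_factors, seed=0):
    """Initialise spectral profiles uniformly between the per-variable minimum
    and maximum of the data matrix D (rows: samples, columns: spectral variables)."""
    rng = np.random.default_rng(seed)
    lo, hi = D.min(axis=0), D.max(axis=0)
    return lo + (hi - lo) * rng.random((n_factors, D.shape[1]))

def als_step(D, S):
    """One alternating-least-squares pass, non-negativity imposed by clipping."""
    C = np.clip(D @ np.linalg.pinv(S), 0, None)    # contribution profiles
    S = np.clip(np.linalg.pinv(C) @ D, 0, None)    # spectral profiles
    return C, S

rng = np.random.default_rng(1)
D = np.abs(rng.standard_normal((30, 2))) @ np.abs(rng.standard_normal((2, 40)))
S0 = random_spectral_init(D, 2)
C, S = als_step(D, S0)
```

In a full MCR-ALS run, `als_step` would be iterated until the reconstruction error converges, with whatever additional constraints (unimodality, closure) the application demands.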
Convergence to equilibrium under a random Hamiltonian.
Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek
2012-09-01
We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
Intermediate quantum maps for quantum computation
NASA Astrophysics Data System (ADS)
Giraud, O.; Georgeot, B.
2005-10-01
We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.
Migration of lymphocytes on fibronectin-coated surfaces: temporal evolution of migratory parameters
NASA Technical Reports Server (NTRS)
Bergman, A. J.; Zygourakis, K.; McIntire, L. V. (Principal Investigator)
1999-01-01
Lymphocytes typically interact with implanted biomaterials through adsorbed exogenous proteins. To provide a more complete characterization of these interactions, analysis of lymphocyte migration on adsorbed extracellular matrix proteins must accompany the commonly performed adhesion studies. We report here a comparison of the migratory and adhesion behavior of Jurkat cells (a T lymphoblastoid cell line) on tissue culture treated and untreated polystyrene surfaces coated with various concentrations of fibronectin. The average speed of cell locomotion showed a biphasic response to substrate adhesiveness for cells migrating on untreated polystyrene and a monotonic decrease for cells migrating on tissue culture-treated polystyrene. A modified approach to the persistent random walk model was implemented to determine the time dependence of cell migration parameters. The random motility coefficient showed significant increases with time when cells migrated on tissue culture-treated polystyrene surfaces, while it remained relatively constant for experiments with untreated polystyrene plates. Finally, a cell migration computer model was developed to verify our modified persistent random walk analysis. Simulation results suggest that our experimental data were consistent with temporally increasing random motility coefficients.
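The persistent random walk analysis can be illustrated with a small simulation (a generic sketch, not the authors' modified model or their parameters): cell headings diffuse so that direction decorrelates over a persistence time P, and a random motility coefficient μ is read off from the long-time mean-squared displacement, MSD ≈ 4μt in two dimensions.

```python
import numpy as np

def persistent_walk(n_steps, dt, speed, persistence, rng):
    """2-D persistent random walk: the heading angle diffuses so that
    directional correlation decays on the timescale `persistence`."""
    sigma = np.sqrt(2 * dt / persistence)            # heading noise per step
    theta = rng.uniform(0, 2 * np.pi) + np.cumsum(sigma * rng.standard_normal(n_steps))
    steps = speed * dt * np.column_stack([np.cos(theta), np.sin(theta)])
    return np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])

rng = np.random.default_rng(5)
paths = np.array([persistent_walk(2000, 0.1, 1.0, 5.0, rng) for _ in range(200)])
msd = (paths ** 2).sum(axis=2).mean(axis=0)          # all walks start at the origin
mu = msd[-1] / (4 * 2000 * 0.1)                      # MSD ~ 4*mu*t at long times
```

For this model the theoretical asymptote is μ = S²P/2 (here 2.5 in arbitrary units); a time-dependent motility coefficient, as reported in the paper, would show up as a systematic drift in windowed estimates of μ.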
To cut or not to cut? Assessing the modular structure of brain networks.
Chang, Yu-Teng; Pantazis, Dimitrios; Leahy, Richard M
2014-05-01
A wealth of methods has been developed to identify natural divisions of brain networks into groups or modules, with one of the most prominent being modularity. Compared with the popularity of methods to detect community structure, only a few methods exist to statistically control for spurious modules, relying almost exclusively on resampling techniques. It is well known that even random networks can exhibit high modularity because of incidental concentration of edges, even though they have no underlying organizational structure. Consequently, interpretation of community structure is confounded by the lack of principled and computationally tractable approaches to statistically control for spurious modules. In this paper we show that the modularity of random networks follows a transformed version of the Tracy-Widom distribution, providing for the first time a link between module detection and random matrix theory. We compute parametric formulas for the distribution of modularity for random networks as a function of network size and edge variance, and show that we can efficiently control for false positives in brain and other real-world networks. Copyright © 2014 Elsevier Inc. All rights reserved.
Estimation of regionalized compositions: A comparison of three methods
Pawlowsky, V.; Olea, R.A.; Davis, J.C.
1995-01-01
A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
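The additive-log-ratio transformation mentioned here is straightforward to state. The sketch below, using Aitchison's convention of dividing by the last part, shows the forward transform and its inverse:

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform: D-part composition -> R^(D-1),
    with the last part as the divisor."""
    x = np.asarray(x, dtype=float)
    return np.log(x[..., :-1] / x[..., -1:])

def alr_inv(y):
    """Inverse alr: recover a composition that sums to one."""
    z = np.exp(np.concatenate([y, np.zeros(y.shape[:-1] + (1,))], axis=-1))
    return z / z.sum(axis=-1, keepdims=True)

comp = np.array([0.2, 0.3, 0.5])
y = alr(comp)        # unconstrained coordinates, suitable for (co)kriging
back = alr_inv(y)    # recovers the original composition
```

The point of the transform is that the alr coordinates are unconstrained, so standard geostatistical estimation can be applied to them before back-transforming.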
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
Derivatives of random matrix characteristic polynomials with applications to elliptic curves
NASA Astrophysics Data System (ADS)
Snaith, N. C.
2005-12-01
The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.
Complex Langevin simulation of a random matrix model at nonzero chemical potential
NASA Astrophysics Data System (ADS)
Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.
2018-03-01
In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
Statistical mechanics of homogeneous partly pinned fluid systems.
Krakoviack, Vincent
2010-12-01
The homogeneous partly pinned fluid systems are simple models of a fluid confined in a disordered porous matrix obtained by arresting randomly chosen particles in a one-component bulk fluid or one of the two components of a binary mixture. In this paper, their configurational properties are investigated. It is shown that a peculiar complementarity exists between the mobile and immobile phases, which originates from the fact that the solid is prepared in the presence of, and in equilibrium with, the adsorbed fluid. Simple identities follow, which connect different types of configurational averages, either relative to the fluid-matrix system or to the bulk fluid from which it is prepared. Crucial simplifications result for the computation of important structural quantities, both in computer simulations and in theoretical approaches. Finally, possible applications of the model in the field of dynamics in confinement or in strongly asymmetric mixtures are suggested.
Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang
2015-01-01
Fourier transform infrared imaging (FTIRI) combined with chemometric algorithms has strong potential to obtain complex chemical information from biological tissues. FTIRI and partial least squares-discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilage for the first time. A PLS model was built on a calibration matrix of spectra randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed on the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the percentages of correctly identified healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrate that FTIRI combined with PLS-DA provides a promising approach for the categorical identification of healthy and OA cartilage specimens. PMID:26057029
Adjoints and Low-rank Covariance Representation
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.
2000-01-01
Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
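A minimal numerical sketch of the low-rank idea (generic, with a random linear operator standing in for the dynamical system): the error covariance P = L Q Lᵀ is approximated by its k leading eigenpairs, and the quality of the approximation is governed by how quickly the spectrum of P decays, i.e., by the conditioning of the forcing-to-error transformation.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 60, 5
L = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in linear response operator
Q = np.eye(n)                                  # forcing error covariance
P = L @ Q @ L.T                                # full error covariance

w, V = np.linalg.eigh(P)
idx = np.argsort(w)[::-1][:k]                  # indices of the k leading eigenvalues
P_k = (V[:, idx] * w[idx]) @ V[:, idx].T       # rank-k approximation of P
rel_err = np.linalg.norm(P - P_k) / np.linalg.norm(P)
```

When the spectrum of P is flat, as for this random operator, `rel_err` stays large even for moderate k; a strongly non-normal or low-dimensional response concentrates the variance in a few modes and makes the low-rank representation accurate.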
NASA Astrophysics Data System (ADS)
Wirtz, Tim; Kieburg, Mario; Guhr, Thomas
2017-06-01
The correlated Wishart model provides the standard benchmark when analyzing time series of any kind. Unfortunately, the real case, which is the most relevant one in applications, poses serious challenges for analytical calculations. Often these challenges are due to square root singularities which cannot be handled using common random matrix techniques. We present a new way to tackle this issue. Using supersymmetry, we carry out an analytical study which we support by numerical simulations. For large but finite matrix dimensions, we show that statistical properties of the fully correlated real Wishart model generically approach those of a correlated real Wishart model with doubled matrix dimensions and doubly degenerate empirical eigenvalues. This holds for the local and global spectral statistics. With Monte Carlo simulations we show that this is even approximately true for small matrix dimensions. We explicitly investigate the k-point correlation function as well as the distribution of the largest eigenvalue, for which we find a surprisingly compact formula in the doubly degenerate case. Moreover, we show that on the local scale the k-point correlation function exhibits the sine and the Airy kernel in the bulk and at the soft edges, respectively. We also address the positions and the fluctuations of the possible outliers in the data.
The Evidence Value Matrix for Diagnostic Imaging.
Seidel, David; Frank, Richard A; Schmidt, Sebastian
2016-10-01
Evidence and value are independent factors that together affect the adoption of diagnostic imaging. For example, noncoverage decisions by reimbursement authorities can be justified by a lack of evidence and/or value. To create transparency and a common understanding among various stakeholders, we have proposed a two-dimensional matrix that allows classification of imaging devices into three distinct categories based on the available evidence and value: "question marks" (low value demonstrated in studies of any evidence level), "candidates" (high value demonstrated in retrospective case-control studies and smaller case series), and "stars" (high value demonstrated in large prospective cohort studies or, preferably, randomized controlled trials). We use several examples to illustrate the application of our matrix. A major benefit of the matrix includes the development of specific strategies for evidence and value generation. High-evidence/low-value studies are expensive and unlikely to convince decision makers, given the uncertainty of the impact on patient management and outcomes. Developing question marks into candidates first and then into stars will often be quicker and less expensive ("success sequence"). Only this more sophisticated and objective approach can justify the additional funding necessary to generate the evidence base to inform reimbursement by payers and adoption by providers. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ivliev, S. V.
2017-12-01
For the calculation of short laser pulse absorption in a metal, the imaginary part of the permittivity, which is simply related to the conductivity, is required. Currently, the Kubo-Greenwood formula is most commonly used to find the static and dynamic conductivity. It describes electromagnetic energy absorption in the one-electron approach. In the present study, this formula is derived directly from the expression for the permittivity in the random phase approximation, which is in fact equivalent to the mean-field method. A detailed analysis of the role of electron-electron interaction in the calculation of the matrix elements of the velocity operator is given. It is shown that in the one-electron random phase approximation the single-particle wave functions of conduction electrons in the field of fixed ions should be used. The possibility of including exchange and correlation effects by means of a local field correction is discussed.
Time series, correlation matrices and random matrix models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinayak; Seligman, Thomas H.
2014-01-08
In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis, or minimum-information hypothesis, for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. By consequence, random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment, uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Du; Yang, Weitao
An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to a significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faessler, Amand; Rodin, V.; Fogli, G. L.
2009-03-01
The variances and covariances associated with the nuclear matrix elements of neutrinoless double beta decay (0νββ) are estimated within the quasiparticle random phase approximation. It is shown that correlated nuclear matrix element uncertainties play an important role in the comparison of 0νββ decay rates for different nuclei, and that they are degenerate with the uncertainty in the reconstructed Majorana neutrino mass.
A new Hessian-based approach for segmentation of CT porous media images
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Kirill, Gerke
2017-04-01
Hessian matrix based methods are widely used in image analysis for feature detection, e.g., detection of blobs, corners and edges. The Hessian matrix of the image is the matrix of second-order derivatives around a selected voxel. The most significant features give the highest values of the Hessian transform, while the lowest values are located at smoother parts of the image. The majority of conventional segmentation techniques can segment out cracks, fractures and other inhomogeneities in soils and rocks only if the rest of the image is significantly "oversegmented". To avoid this disadvantage, we propose to enhance the greyscale values of voxels belonging to such specific inhomogeneities on X-ray microtomography scans. We have developed and implemented in code a two-step approach to attack the aforementioned problem. During the first step we apply a filter that enhances the image and makes outstanding features more sharply defined. During the second step we apply Hessian filter based segmentation. The values of voxels on the image to be segmented are calculated in conjunction with the values of other voxels within a prescribed region. The contribution from each voxel within such a region is computed by weighting according to the local Hessian matrix value. We call this approach Hessian windowed segmentation. Hessian windowed segmentation has been tested on different porous media X-ray microtomography images, including soil, sandstones, carbonates and shales. We also compared this new method against other widely used methods such as kriging, Markov random field, converging active contours and region growing. We show that our approach is more accurate in regions containing special features such as small cracks, fractures, elongated inhomogeneities and other features with low contrast relative to the background solid phase. Moreover, Hessian windowed segmentation outperforms some of these methods in computational efficiency.
We further test our segmentation technique by computing the permeability of segmented images and comparing it against laboratory-based measurements. This work was partially supported by RFBR grant 15-34-20989 (X-ray tomography and image fusion) and RSF grant 14-17-00658 (image segmentation and pore-scale modelling).
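The Hessian response underlying this kind of segmentation can be sketched on a synthetic slice (an illustration, not the authors' Hessian windowed code; the image, the planted crack and the finite-difference filter are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D "tomography slice": solid background plus a faint, elongated,
# low-contrast crack of the kind conventional thresholding misses.
img = np.full((64, 64), 0.8) + 0.02 * rng.standard_normal((64, 64))
img[30:33, 5:60] -= 0.15

# Hessian components via repeated finite differences.
gy, gx = np.gradient(img)
hyy, hyx = np.gradient(gy)
hxy, hxx = np.gradient(gx)

# Frobenius norm of the Hessian as a simple feature-strength response:
# crack voxels carry large second derivatives, flat regions do not.
response = np.sqrt(hxx**2 + hyy**2 + hxy**2 + hyx**2)

crack_mean = float(response[30:33, 5:60].mean())
background_mean = float(response[:20, :].mean())
```

Enhancing voxel greyscale values in proportion to such a response is one way low-contrast cracks can be separated from the solid phase without oversegmenting the rest of the image.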
Localized motion in random matrix decomposition of complex financial systems
NASA Astrophysics Data System (ADS)
Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian
2017-04-01
Using random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
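The separation into genuine and random modes can be sketched with synthetic returns; the Marchenko-Pastur edge used to split them is standard random matrix theory, while the factor strength and dimensions below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stocks, n_times = 100, 500

# Pure-noise returns plus a common "market mode" driving every stock.
market = rng.standard_normal(n_times)
returns = rng.standard_normal((n_stocks, n_times)) + 0.5 * market

# Normalize each series and build the equal-time correlation matrix.
r = (returns - returns.mean(1, keepdims=True)) / returns.std(1, keepdims=True)
C = r @ r.T / n_times
evals = np.linalg.eigvalsh(C)

# Marchenko-Pastur upper edge for q = n_stocks / n_times: eigenvalues above it
# are candidate market/sector modes, the bulk below it is the random mode.
q = n_stocks / n_times
lambda_max = (1 + np.sqrt(q)) ** 2
n_signal = int(np.sum(evals > lambda_max))
```

Here only the planted market mode should escape the bulk, with its eigenvalue far above the Marchenko-Pastur edge.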
Emergence of small-world structure in networks of spiking neurons through STDP plasticity.
Basalyga, Gleb; Gleiser, Pablo M; Wennekers, Thomas
2011-01-01
In this work, we use a complex network approach to investigate how a neural network structure changes under synaptic plasticity. In particular, we consider a network of conductance-based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons are connected randomly with uniformly distributed synaptic weights. The weights of excitatory connections can be strengthened or weakened during spiking activity by the mechanism known as spike-timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding the weights of the excitatory connections at every simulation step and calculate its major topological characteristics such as the network clustering coefficient, characteristic path length and small-world index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure can emerge from a random initial network subject to STDP learning.
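The topological characteristics used above are easy to compute from a binary connection matrix; a minimal sketch on a stand-in adjacency matrix (a ring lattice with random shortcuts rather than an actual simulated STDP network):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Stand-in for a thresholded weight matrix: a ring lattice (high clustering)
# plus a few random shortcuts, the structure reported to emerge under STDP.
A = np.zeros((n, n), dtype=int)
for k in range(1, 3):                        # each node linked to 4 neighbours
    A[np.arange(n), (np.arange(n) + k) % n] = 1
A = A | A.T
for _ in range(20):                          # random shortcut edges
    i, j = rng.integers(0, n, 2)
    if i != j:
        A[i, j] = A[j, i] = 1

# Clustering coefficient: closed triangles / connected triplets per node.
triangles = np.diag(A @ A @ A) / 2
deg = A.sum(1)
C = float(np.mean(triangles / (deg * (deg - 1) / 2)))

# Characteristic path length via breadth-first search from every node.
def bfs_dists(adj, src):
    d = np.full(adj.shape[0], -1)
    d[src] = 0
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in np.nonzero(adj[u])[0]:
                if d[v] < 0:
                    d[v] = d[u] + 1
                    nxt.append(v)
        frontier = nxt
    return d

L = float(np.mean([bfs_dists(A, s)[np.arange(n) != s].mean() for s in range(n)]))
```

High clustering together with a short path length, relative to a random graph of equal density, is exactly the small-world signature the paper tracks during learning.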
Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas
2008-04-01
A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. Based on a block inversion of the impedance matrix (which involves the two impedance matrices of the interfaces and two coupling matrices), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer, requiring only the inverses of the impedance matrices of each interface. Our purpose here is to apply this method to an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci. 33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.
Weak scattering of scalar and electromagnetic random fields
NASA Astrophysics Data System (ADS)
Tong, Zhisong
This dissertation encompasses several studies relating to the theory of weak potential scattering of scalar and electromagnetic random, wide-sense statistically stationary fields from various types of deterministic or random linear media. The proposed theory is largely based on the first Born approximation for potential scattering and on the angular spectrum representation of fields. The scalar part of the theory focuses on calculating the second-order statistics of scattered light fields in cases when the scattering medium consists of several types of discrete particles with deterministic or random potentials. It is shown that knowledge of the correlation properties for particles of the same and different types, described with the newly introduced pair-scattering matrix, is crucial for determining the spectral and coherence states of the scattered radiation. The approach based on the pair-scattering matrix is then used for solving an inverse problem: determining the location of an "alien" particle within a scattering collection of "normal" particles from several measurements of the spectral density of scattered light. Weak scalar scattering of light from a particulate medium in the presence of optical turbulence existing between the scattering centers is then approached using a combination of Born theory for treating the interaction of light with discrete particles and Rytov theory for light propagation in an extended turbulent medium. It is demonstrated how the statistics of scattered radiation depend on the scattering potentials of the particles and the power spectra of the refractive index fluctuations of the turbulence. This theory is of utmost importance for applications involving atmospheric and oceanic light transmission.
The second part of the dissertation includes the theoretical procedure developed for predicting the second-order statistics of the electromagnetic random fields, such as polarization and linear momentum, scattered from static media. The spatial distribution of these properties of scattered fields is shown to be substantially dependent on the correlation and polarization properties of incident fields and on the statistics of the refractive index distribution within the scatterers. Further, an example is considered which illustrates the usefulness of the electromagnetic scattering theory of random fields in the case when the scattering medium is a thin bio-tissue layer with the prescribed power spectrum of the refractive index fluctuations. The polarization state of the scattered light is shown to be influenced by correlation and polarization states of the illumination as well as by the particle size distribution of the tissue slice.
Hessian eigenvalue distribution in a random Gaussian landscape
NASA Astrophysics Data System (ADS)
Yamada, Masaki; Vilenkin, Alexander
2018-03-01
The energy landscape of multiverse cosmology is often modeled by a multi-dimensional random Gaussian potential. The physical predictions of such models crucially depend on the eigenvalue distribution of the Hessian matrix at potential minima. In particular, the stability of vacua and the dynamics of slow-roll inflation are sensitive to the magnitude of the smallest eigenvalues. The Hessian eigenvalue distribution has been studied earlier, using the saddle point approximation, to leading order in the 1/N expansion, where N is the dimensionality of the landscape. This approximation, however, is insufficient for the small-eigenvalue end of the spectrum, where sub-leading terms play a significant role. We extend the saddle point method to account for the sub-leading contributions. We also develop a new approach, where the eigenvalue distribution is found as an equilibrium distribution at the endpoint of a stochastic process (Dyson Brownian motion). The results of the two approaches are consistent in cases where both methods are applicable. We discuss the implications of our results for vacuum stability and slow-roll inflation in the landscape.
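The stochastic-process view can be illustrated with a minimal Euler discretization of Dyson Brownian motion for β = 1 (the step size, drift clip and run length below are ad hoc numerical choices, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(5)
N, steps, dt = 30, 20000, 1e-4

# Dyson Brownian motion: eigenvalues diffuse in a confining quadratic well
# while repelling each other pairwise; the equilibrium density with this
# scaling is the Wigner semicircle supported on [-sqrt(2), sqrt(2)].
lam = 0.1 * np.sort(rng.standard_normal(N))
for _ in range(steps):
    diff = lam[:, None] - lam[None, :]
    np.fill_diagonal(diff, np.inf)          # drop self-interaction terms
    drift = np.sum(1.0 / diff, axis=1) / N - lam
    drift = np.clip(drift, -50.0, 50.0)     # guard the Euler step near collisions
    lam = np.sort(lam + drift * dt + np.sqrt(2.0 * dt / N) * rng.standard_normal(N))

spread = float(lam.max() - lam.min())       # should relax toward ~2*sqrt(2)
```

Running the process to equilibrium samples the same eigenvalue measure that the saddle point computation targets analytically, so the two routes can be checked against each other.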
A null model for Pearson coexpression networks.
Gobbi, Andrea; Jurman, Giuseppe
2015-01-01
Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent simple but effective structures for discovering and interpreting linear gene relationships. In recent years, several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred by randomly generated data is expected to be empty. The threshold is theoretically derived by means of an analytic approach and, as a deterministic independent null model, it depends only on the dimensions of the starting data matrix, with assumptions on the skewness of the data distribution compatible with the structure of gene expression levels data. We show, on synthetic and array datasets, that the proposed threshold is effective in eliminating all false positive links, with an offsetting cost in terms of false negative detected edges.
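The null-model idea can be sketched by Monte Carlo in place of the paper's analytic derivation (the dimensions and repetition count below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n_genes, n_samples = 200, 10

# Null model: pure-noise expression matrices of the same shape as the data.
# A hard threshold chosen above the largest |r| the null typically produces
# leaves the random-data coexpression network empty.
def max_null_corr(g, s, reps=50):
    out = []
    for _ in range(reps):
        x = rng.standard_normal((g, s))
        c = np.corrcoef(x)
        np.fill_diagonal(c, 0.0)
        out.append(np.abs(c).max())
    return float(np.mean(out))

tau = max_null_corr(n_genes, n_samples)
```

With only 10 samples the estimated threshold is strikingly high, which is precisely the small-sample regime where even large correlation values can be pure chance.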
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes the generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system-specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system-specific property, i.e., as containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller-Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g., R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics'', preprint.
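The inverse participation ratio used above as a localization diagnostic is simple to sketch (the vectors below are stand-ins, not actual eigenvectors of a stock correlation matrix):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200

# Inverse participation ratio I = sum_k u_k^4 of a normalized vector:
# ~3/N for an extended (GOE-like) eigenvector, O(1) for a localized one.
def ipr(u):
    u = u / np.linalg.norm(u)
    return float(np.sum(u**4))

extended = rng.standard_normal(N)      # delocalized: weight spread over all N
localized = np.zeros(N)
localized[:3] = 1.0                    # weight concentrated on 3 components

ipr_ext = ipr(extended)
ipr_loc = ipr(localized)
```

Eigenvectors whose IPR sits far above the extended-vector baseline involve only a few stocks, which is what makes the edges of the spectrum reminiscent of localization.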
Katriel, G.; Yaari, R.; Huppert, A.; Roll, U.; Stone, L.
2011-01-01
This paper presents new computational and modelling tools for studying the dynamics of an epidemic in its initial stages that use both available incidence time series and data describing the population's infection network structure. The work is motivated by data collected at the beginning of the H1N1 pandemic outbreak in Israel in the summer of 2009. We formulated a new discrete-time stochastic epidemic SIR (susceptible-infected-recovered) model that explicitly takes into account the disease's specific generation-time distribution and the intrinsic demographic stochasticity inherent to the infection process. Moreover, in contrast with many other modelling approaches, the model allows direct analytical derivation of estimates for the effective reproductive number (Re) and of their credible intervals, by maximum likelihood and Bayesian methods. The basic model can be extended to include age-class structure, and a maximum likelihood methodology allows us to estimate the model's next-generation matrix by combining two types of data: (i) the incidence series of each age group, and (ii) infection network data that provide partial information of ‘who infected whom’. Unlike other approaches for estimating the next-generation matrix, the method developed here does not require making a priori assumptions about the structure of the next-generation matrix. We show, using a simulation study, that even a relatively small amount of information about the infection network greatly improves the accuracy of estimation of the next-generation matrix. The method is applied in practice to estimate the next-generation matrix from the Israeli H1N1 pandemic data. The tools developed here should be of practical importance for future investigations of epidemics during their initial stages. However, they require the availability of data which represent a random sample of the real epidemic process.
We discuss the conditions under which reporting rates may or may not influence our estimated quantities and the effects of bias. PMID:21247949
Stability and dynamical properties of material flow systems on random networks
NASA Astrophysics Data System (ADS)
Anand, K.; Galla, T.
2009-04-01
The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focusing on finitely connected Erdős-Rényi graphs, Small-World Networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
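The random matrix stability criterion invoked here is in the spirit of May's circular-law bound; a sketch with an invented linear model J = -dI + (σ/√N)G, where G has i.i.d. standard normal entries:

```python
import numpy as np

rng = np.random.default_rng(8)
N = 200

def spectral_abscissa(sigma, d=1.0):
    """Largest real part of the eigenvalues of J = -d*I + (sigma/sqrt(N))*G."""
    J = -d * np.eye(N) + (sigma / np.sqrt(N)) * rng.standard_normal((N, N))
    return float(np.linalg.eigvals(J).real.max())

# Circular law: the bulk spectrum of J fills a disc of radius sigma centred at
# -d, so the fixed point is stable for sigma < d and unstable for sigma > d.
# Larger element variability is therefore destabilizing, as the paper reports.
stable = spectral_abscissa(0.5)     # disc of radius 0.5 around -1: all Re < 0
unstable = spectral_abscissa(2.0)   # disc of radius 2.0 around -1: some Re > 0
```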
Tensor Minkowski Functionals for random fields on the sphere
NASA Astrophysics Data System (ADS)
Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom
2017-12-01
We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 Planck data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.
Inflation with a graceful exit in a random landscape
NASA Astrophysics Data System (ADS)
Pedro, F. G.; Westphal, A.
2017-03-01
We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≲ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.
Donoho, David L; Gavish, Matan; Montanari, Andrea
2013-05-21
Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, …, y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M by N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n,M,N) = n/(MN), rank fraction ρ = rank(X_0)/min{M,N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ;β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M,N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y − X||_F^2/2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ;β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ;β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks.
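The denoising scheme in the second problem has a closed-form solution via singular value soft-thresholding; a sketch with invented dimensions, a planted low-rank signal, and a heuristic λ (not the optimally tuned value analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(9)
M, N, r = 60, 80, 3

# Rank-3 signal with singular values 20, observed in i.i.d. Gaussian noise.
U0, _ = np.linalg.qr(rng.standard_normal((M, r)))
V0, _ = np.linalg.qr(rng.standard_normal((N, r)))
X0 = 20.0 * U0 @ V0.T
Y = X0 + 0.5 * rng.standard_normal((M, N))

# Closed-form solution of min_X ||Y - X||_F^2 / 2 + lam * ||X||_*:
# soft-threshold the singular values of Y at lam.
def svt(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

lam = 0.5 * (np.sqrt(M) + np.sqrt(N))   # heuristic: noise level * (sqrt(M)+sqrt(N))
X_hat = svt(Y, lam)

mse_denoised = float(np.mean((X_hat - X0) ** 2))
mse_raw = float(np.mean((Y - X0) ** 2))
```

Thresholding at roughly the top singular value of the pure-noise matrix kills the noise bulk while retaining the (shrunken) signal directions, which is why the estimate beats the raw observation.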
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure.
Dharan, Smitha; Nair, Achuthsankar S
2009-01-30
Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, and it has become one of the most popular measures used to search for biclusters. In this paper, we review the basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), its construction and local search phases, and propose a new method, a variant of GRASP called the Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP), to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms both the basic GRASP algorithm and the Cheng and Church approach. The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts.
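The Cheng-Church mean squared residue that such searches optimize is simple to state; a sketch with made-up submatrices (an ideal additive bicluster, a noisy one, and a random block):

```python
import numpy as np

rng = np.random.default_rng(10)

# Mean squared residue H of a submatrix: zero for a perfect additive bicluster
# (row effect + column effect), large for unstructured data.
def msr(sub):
    row = sub.mean(axis=1, keepdims=True)
    col = sub.mean(axis=0, keepdims=True)
    return float(np.mean((sub - row - col + sub.mean()) ** 2))

rows = rng.standard_normal((8, 1))
cols = rng.standard_normal((1, 6))
additive = rows + cols                          # ideal bicluster: H = 0
noisy = additive + 0.1 * rng.standard_normal((8, 6))
random_block = rng.standard_normal((8, 6))

h_perfect = msr(additive)
h_noisy = msr(noisy)
h_random = msr(random_block)
```

A biclustering search grows seeds while keeping H below a tolerance, so low-residue blocks like `noisy` are accepted and blocks like `random_block` are rejected.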
A micromechanical approach for homogenization of elastic metamaterials with dynamic microstructure.
Muhlestein, Michael B; Haberman, Michael R
2016-08-01
An approximate homogenization technique is presented for generally anisotropic elastic metamaterials consisting of an elastic host material containing randomly distributed heterogeneities displaying frequency-dependent material properties. The dynamic response may arise from relaxation processes such as viscoelasticity or from dynamic microstructure. A Green's function approach is used to model elastic inhomogeneities embedded within a uniform elastic matrix as force sources that are excited by a time-varying, spatially uniform displacement field. Assuming dynamic subwavelength inhomogeneities only interact through their volume-averaged fields implies the macroscopic stress and momentum density fields are functions of both the microscopic strain and velocity fields, and may be related to the macroscopic strain and velocity fields through localization tensors. The macroscopic and microscopic fields are combined to yield a homogenization scheme that predicts the local effective stiffness, density and coupling tensors for an effective Willis-type constitutive equation. It is shown that when internal degrees of freedom of the inhomogeneities are present, Willis-type coupling becomes necessary on the macroscale. To demonstrate the utility of the homogenization technique, the effective properties of an isotropic elastic matrix material containing isotropic and anisotropic spherical inhomogeneities, isotropic spheroidal inhomogeneities and isotropic dynamic spherical inhomogeneities are presented and discussed.
NASA Astrophysics Data System (ADS)
Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio
2013-05-01
Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
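Sampling the unbiased ensemble numerically gives a quick check on the typical spectrum (the dimensions below are arbitrary; the paper's Coulomb gas and saddle point analysis is analytic, not Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(11)
N, M = 8, 32   # subsystem and environment dimensions

# Random pure state in an N*M-dimensional Hilbert space (unbiased ensemble):
# complex Gaussian amplitudes, normalized.
psi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the N-dimensional subsystem and its spectrum.
rho_A = psi @ psi.conj().T
evals = np.sort(np.linalg.eigvalsh(rho_A).real)

# Purity tr(rho_A^2): bounded below by 1/N, and for N << M the typical
# spectrum concentrates near the maximally mixed value 1/N.
purity = float(np.sum(evals**2))
```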
NASA Astrophysics Data System (ADS)
Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René
2018-01-01
The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first one is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory could be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first-order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution could be a powerful tool to detect this situation; however, in some cases, the problem is only revealed with a sensitivity to measurement outlier test. None of the tested solutions provide GPS position bias observability. For the considered mission, the modelling of the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates.
But, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
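The rank test on the observability matrix can be sketched on a toy model; the three-state system below (position, velocity, and a random-walk measurement bias) is an invented stand-in, not the launcher navigation model itself:

```python
import numpy as np

# Linear observability test: the pair (A, C) is observable iff the
# observability matrix O = [C; CA; ...; CA^(n-1)] has full column rank.
def observability(A, C):
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    O = np.vstack(blocks)
    s = np.linalg.svd(O, compute_uv=False)
    rank = int(np.sum(s > 1e-10 * s[0]))
    cond = s[0] / s[-1] if s[-1] > 0 else np.inf
    return rank, cond

dt = 0.1
A = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])    # position, velocity, random-walk bias
C = np.array([[1.0, 0.0, 1.0]])   # measurement sees position + bias

rank, cond = observability(A, C)  # rank < 3: bias and position are entangled
```

Because the output only ever sees the sum of position and bias, the observability matrix has two identical columns and the rank drops to 2, which is exactly the kind of structural unobservability the covariance and outlier tests in the paper are designed to expose.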
Fiber Contraction Approaches for Improving CMC Proportional Limit
NASA Technical Reports Server (NTRS)
DiCarlo, James A.; Yun, Hee Mann
1997-01-01
The fact that the service life of ceramic matrix composites (CMC) decreases dramatically for stresses above the CMC proportional limit has triggered a variety of research activities to develop microstructural approaches that can significantly improve this limit. As discussed in a previous report, both local and global approaches exist for hindering the propagation of cracks through the CMC matrix, the physical source for the proportional limit. Local approaches include: (1) minimizing fiber diameter and matrix modulus; (2) maximizing fiber volume fraction, fiber modulus, and matrix toughness; and (3) optimizing fiber-matrix interfacial shear strength; all of which should reduce the stress concentration at the tip of cracks pre-existing or created in the matrix during CMC service. Global approaches, as with pre-stressed concrete, center on seeking mechanisms for utilizing the reinforcing fiber to subject the matrix to in-situ compressive stresses which will remain stable during CMC service. Demonstrated CMC examples for the viability of this residual stress approach are based on strain mismatches between the fiber and matrix in their free states, such as thermal expansion mismatch and creep mismatch. However, these particular mismatch approaches are application-limited in that the residual stresses from expansion mismatch are optimum only at low CMC service temperatures and the residual stresses from creep mismatch are typically unidirectional and difficult to implement in complex-shaped CMC.
NASA Astrophysics Data System (ADS)
Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo
2005-08-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.
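For cost functions that decompose along one direction, as in the directed-paths-in-random-media problem mentioned above, the dynamic-programming (transfer-matrix) step is a row-by-row sweep. The following is a minimal sketch; the lattice geometry, step rule, and function name are illustrative, not taken from the paper:

```python
import random

def min_path_energy(grid):
    """Minimum-energy directed path from the top row to the bottom row.
    The path moves down one row per step, to the same or an adjacent
    column; the row-by-row sweep is the dynamic-programming step."""
    rows, cols = len(grid), len(grid[0])
    best = list(grid[0])          # best[c] = cheapest path ending at column c
    for r in range(1, rows):
        new = []
        for c in range(cols):
            neighbours = [best[c]]
            if c > 0:
                neighbours.append(best[c - 1])
            if c < cols - 1:
                neighbours.append(best[c + 1])
            new.append(grid[r][c] + min(neighbours))
        best = new
    return min(best)

random.seed(0)
grid = [[random.random() for _ in range(8)] for _ in range(8)]
print(min_path_energy(grid))
```

Keeping the k best partial sums per column instead of only the minimum would give the high-ranking solution lists the abstract refers to.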
A Deep Stochastic Model for Detecting Community in Complex Networks
NASA Astrophysics Data System (ADS)
Fu, Jingcheng; Wu, Jianliang
2017-01-01
Discovering community structures is an important step toward understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, whose row entries give the probabilities that the corresponding node belongs to each of the given number of communities in our model; the other is the community-community connection matrix, of which the element in the i-th row and j-th column represents the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule, and its convergence can be guaranteed. The community-community connection matrix in our model is more precise than that of traditional non-negative matrix factorization methods. Furthermore, the method called symmetric non-negative matrix factorization is a special case of our model. Finally, based on experiments on both synthetic and real-world network data, we demonstrate that our algorithm is highly effective in detecting communities.
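The two parameter sets correspond to a tri-factorization A ≈ U B Uᵀ of the adjacency matrix, with non-negative membership matrix U and community-community matrix B. A sketch using generic multiplicative updates for symmetric tri-factorization follows; the update rule, epsilon, and toy network are illustrative and are not necessarily the paper's exact updating rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy network with two 5-node communities
A = np.zeros((10, 10))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)

k, eps = 2, 1e-9
U = rng.random((10, k))          # membership matrix
B = rng.random((k, k))           # community-community connection matrix
B = (B + B.T) / 2                # keep B symmetric

for _ in range(200):
    # multiplicative updates for min ||A - U B U^T||_F^2 with U, B >= 0;
    # multiplicative form preserves non-negativity by construction
    U *= (A @ U @ B) / (U @ B @ U.T @ U @ B + eps)
    B *= (U.T @ A @ U) / (U.T @ U @ B @ U.T @ U + eps)

err = np.linalg.norm(A - U @ B @ U.T)
labels = U.argmax(axis=1)        # hard community assignment from memberships
```

Normalizing the rows of U would turn the memberships into the probabilities described in the abstract.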
RANDOM MATRIX DIAGONALIZATION--A COMPUTER PROGRAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchel, K.; Greibach, R.J.; Porter, C.E.
A computer program is described which generates random matrices, diagonalizes them and sorts appropriately the resulting eigenvalues and eigenvector components. FAP and FORTRAN listings for the IBM 7090 computer are included. (auth)
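A modern equivalent of such a program is a few lines of NumPy; a GOE-style symmetric ensemble is assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_symmetric(n):
    """Symmetrized Gaussian random matrix (GOE-like, up to normalization)."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2.0

m = random_symmetric(6)
# eigh handles the symmetric case and returns eigenvalues already
# sorted in ascending order, with matching eigenvector columns
vals, vecs = np.linalg.eigh(m)
print(vals)
```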
Complex Langevin simulation of a random matrix model at nonzero chemical potential
Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...
2018-03-06
In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate effects and random effects. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
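The zero-inflated Poisson density underlying such models mixes a point mass at zero with a Poisson component: P(y=0) = pi + (1-pi)e^(-lam) and P(y=k) = (1-pi) lam^k e^(-lam)/k! for k > 0. A minimal likelihood sketch follows; it omits the random effects, and in the full model pi, lam, and the random-effect variance would depend on covariates:

```python
import math

def zip_loglik(y, lam, pi):
    """Log-likelihood of counts y under a zero-inflated Poisson:
    with probability pi the count is a structural zero, otherwise Poisson(lam)."""
    ll = 0.0
    for k in y:
        if k == 0:
            # zero can come from the structural-zero state or from Poisson
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) + k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

print(zip_loglik([0, 2, 0, 1], lam=1.5, pi=0.3))
```

Maximizing this over (lam, pi), or over regression coefficients feeding them, is the estimation step; heterogeneous random effects would add a subject-specific term whose log-variance is itself a linear function of covariates.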
A computational model of in vitro angiogenesis based on extracellular matrix fibre orientation.
Edgar, Lowell T; Sibole, Scott C; Underwood, Clayton J; Guilkey, James E; Weiss, Jeffrey A
2013-01-01
Recent interest in the process of vascularisation within the biomedical community has motivated numerous new research efforts focusing on the process of angiogenesis. Although the role of chemical factors during angiogenesis has been well documented, the role of mechanical factors, such as the interaction between angiogenic vessels and the extracellular matrix, remains poorly understood. In vitro methods for studying angiogenesis exist; however, measurements available using such techniques often suffer from limited spatial and temporal resolutions. For this reason, computational models have been extensively employed to investigate various aspects of angiogenesis. This paper outlines the formulation and validation of a simple and robust computational model developed to accurately simulate angiogenesis based on length, branching and orientation morphometrics collected from vascularised tissue constructs. Microvessels were represented as a series of connected line segments. The morphology of the vessels was determined by a linear combination of the collagen fibre orientation, the vessel density gradient and a random walk component. Excellent agreement was observed between computational and experimental morphometric data over time. Computational predictions of microvessel orientation within an anisotropic matrix correlated well with experimental data. The accuracy of this modelling approach makes it a valuable platform for investigating the role of mechanical interactions during angiogenesis.
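The segment-direction rule described above, a linear combination of collagen fibre orientation, density gradient, and a random walk component, can be sketched in 2D as follows. The weights and function name are illustrative, not the values fitted in the paper:

```python
import math
import random

def next_direction(fiber, gradient, w_f=0.5, w_g=0.3, w_r=0.2, rng=None):
    """Unit direction of the next vessel segment: a weighted combination of
    the local fibre orientation, the vessel density gradient, and a random
    unit vector (the random-walk term), renormalized to unit length."""
    rng = rng or random.Random(0)
    theta = rng.uniform(0, 2 * math.pi)
    rand = (math.cos(theta), math.sin(theta))
    x = w_f * fiber[0] + w_g * gradient[0] + w_r * rand[0]
    y = w_f * fiber[1] + w_g * gradient[1] + w_r * rand[1]
    norm = math.hypot(x, y)
    return (x / norm, y / norm)

# one growth step along a fibre pointing in +x with a gradient in +y
print(next_direction((1.0, 0.0), (0.0, 1.0)))
```

Growing a microvessel is then just repeatedly appending a fixed-length segment along `next_direction`, which reproduces the connected-line-segment representation used in the model.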
Entropic trapping of macromolecules by mesoscopic periodic voids in a polymer hydrogel
NASA Astrophysics Data System (ADS)
Liu, Lei; Li, Pusheng; Asher, Sanford A.
1999-01-01
The separation of macromolecules such as polymers and DNA by means of electrophoresis, gel permeation chromatography or filtration exploits size-dependent differences in the time it takes for the molecules to migrate through a random porous network. Transport through the gel matrices, which usually consist of fully swollen crosslinked polymers, depends on the relative size of the macromolecule compared with the pore radius. Sufficiently small molecules are thought to adopt an approximately spherical conformation when diffusing through the gel matrix, whereas larger ones are forced to migrate in a snake-like fashion. Molecules of intermediate size, however, can get temporarily trapped in the largest pores of the matrix, where the molecule can extend and thus maximize its conformational entropy. This `entropic trapping' is thought to increase the dependence of diffusion rate on molecular size. Here we report the direct experimental verification of this phenomenon. Bragg diffraction from a hydrogel containing a periodic array of monodisperse water voids confirms that polymers of different weights partition between the hydrogel matrix and the water voids according to the predictions of the entropic trapping theory. Our approach might also lead to the design of improved separation media based on entropic trapping.
Modifying Matrix Materials to Increase Wetting and Adhesion
NASA Technical Reports Server (NTRS)
Zhong, Katie
2011-01-01
In an alternative approach to increasing the degrees of wetting and adhesion between the fiber and matrix components of organic-fiber/polymer matrix composite materials, the matrix resins are modified. Heretofore, it has been common practice to modify the fibers rather than the matrices: The fibers are modified by chemical and/or physical surface treatments prior to combining the fibers with matrix resins - an approach that entails considerable expense and usually results in degradation (typically, weakening) of fibers. The alternative approach of modifying the matrix resins does not entail degradation of fibers, and affords opportunities for improving the mechanical properties of the fiber composites. The alternative approach is more cost-effective, not only because it eliminates expensive fiber-surface treatments but also because it does not entail changes in procedures for manufacturing conventional composite-material structures. The alternative approach is best described by citing an example of its application to a composite of ultra-high-molecular- weight polyethylene (UHMWPE) fibers in an epoxy matrix. The epoxy matrix was modified to a chemically reactive, polarized epoxy nano-matrix to increase the degrees of wetting and adhesion between the fibers and the matrix. The modification was effected by incorporating a small proportion (0.3 weight percent) of reactive graphitic nanofibers produced from functionalized nanofibers into the epoxy matrix resin prior to combining the resin with the UHMWPE fibers. The resulting increase in fiber/matrix adhesion manifested itself in several test results, notably including an increase of 25 percent in the maximum fiber pullout force and an increase of 60-65 percent in fiber pullout energy. In addition, it was conjectured that the functionalized nanofibers became involved in the cross linking reaction of the epoxy resin, with resultant enhancement of the mechanical properties and lower viscosity of the matrix.
Entanglement spectrum of random-singlet quantum critical points
NASA Astrophysics Data System (ADS)
Fagotti, Maurizio; Calabrese, Pasquale; Moore, Joel E.
2011-01-01
The entanglement spectrum (i.e., the full distribution of Schmidt eigenvalues of the reduced density matrix) contains more information than the conventional entanglement entropy and has been studied recently in several many-particle systems. We compute the disorder-averaged entanglement spectrum in the form of the disorder-averaged moments Tr ρ_A^α of the reduced density matrix ρ_A for a contiguous block of many spins at the random-singlet quantum critical point in one dimension. The result compares well in the scaling limit with numerical studies on the random XX model and is also expected to describe the (interacting) random Heisenberg model. Our numerical studies on the XX case reveal that the dependence of the entanglement entropy and spectrum on the geometry of the Hilbert space partition is quite different than for conformally invariant critical points.
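The moments Tr ρ_A^α can be illustrated on a single random pure state. Here a Gaussian random bipartite state stands in for the disordered spin chain; the paper's disorder averaging over random-singlet states is of course a different ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 4, 4    # dimensions of subsystem A and its complement B

# random pure state |psi> on A x B, stored as a dA x dB matrix of amplitudes
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)

# reduced density matrix rho_A = Tr_B |psi><psi|; its eigenvalues are the
# squared Schmidt coefficients, i.e. the entanglement spectrum
rho_A = psi @ psi.conj().T
p = np.linalg.eigvalsh(rho_A)

moments = {alpha: float(np.sum(p ** alpha)) for alpha in (1, 2, 3)}
print(moments)
```

Averaging these moments over many disorder realizations is what the abstract's disorder-averaged Tr ρ_A^α denotes.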
Random matrix theory and fund of funds portfolio optimisation
NASA Astrophysics Data System (ADS)
Conlon, T.; Ruskin, H. J.; Crane, M.
2007-08-01
The proprietary nature of Hedge Fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix which often has to be estimated using a relatively small sample of monthly returns data, which induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C, constructed using hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies that are applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part from the non-noisy part of C. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.
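The cleaning step separates eigenvalues inside the Marchenko-Pastur bulk, whose upper edge for T observations of N unit-variance series is (1 + sqrt(N/T))^2, from the deviating ones. A common sketch of this procedure follows; the flatten-and-preserve-trace rule is one standard cleaning recipe, not necessarily the paper's exact one:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 20, 200                       # assets, observations
returns = rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)

q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2     # Marchenko-Pastur upper edge

vals, vecs = np.linalg.eigh(C)
noisy = vals < lam_plus
# flatten the noisy part of the spectrum while preserving the trace,
# then rebuild the cleaned correlation matrix
vals_clean = vals.copy()
vals_clean[noisy] = vals[noisy].mean()
C_clean = vecs @ np.diag(vals_clean) @ vecs.T
print(lam_plus, int(noisy.sum()), "eigenvalues inside the bulk")
```

Eigenvalues above `lam_plus` are the "deviating" ones the abstract associates with distinct strategy groups; in this pure-noise example nearly all eigenvalues fall inside the bulk.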
Dynamic laser speckle analyzed considering inhomogeneities in the biological sample
NASA Astrophysics Data System (ADS)
Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico
2017-04-01
Dynamic laser speckle phenomenon allows a contactless and nondestructive way to monitor biological changes that are quantified by second-order statistics applied in the images in time using a secondary matrix known as time history of the speckle pattern (THSP). To avoid excessive computation time, the traditional way to build the THSP restricts the data to a single line or column. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collecting the points that form a THSP. Cells in a culture medium and in drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the outcomes using the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
Parsimonious extreme learning machine using recursive orthogonal least squares.
Wang, Ning; Er, Meng Joo; Han, Min
2014-10-01
Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) Initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are instead derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
Communication Optimal Parallel Multiplication of Sparse Random Matrices
2013-02-21
Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix ... where A and B are n × n ER(d) matrices. Definition 2.1: An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That
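An ER(d) matrix as defined in the excerpt, and the nonzero pattern of a product of two such matrices, can be sketched with a simple row-set sparse representation. This computes the Boolean (pattern) product only; the representation and function names are illustrative:

```python
import random

def er_matrix(n, d, seed):
    """Adjacency matrix of an Erdos-Renyi graph G(n, d/n), stored
    sparsely as a dict mapping each row to its set of nonzero columns."""
    rng = random.Random(seed)
    p = d / n
    return {i: {j for j in range(n) if rng.random() < p} for i in range(n)}

def spmatmul(a, b, n):
    """Nonzero pattern of the product: c[i] is the union of b[j] over j in a[i]."""
    c = {}
    for i in range(n):
        row = set()
        for j in a[i]:
            row |= b[j]
        c[i] = row
    return c

n, d = 100, 4
A = er_matrix(n, d, 0)
B = er_matrix(n, d, 1)
C = spmatmul(A, B, n)
print(sum(len(r) for r in C.values()))   # nonzeros, roughly of order n*d*d
```

For sparse inputs (d constant, n growing) the product has about d^2 nonzeros per row, which is why communication rather than arithmetic dominates the parallel cost analyzed in the paper.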
Nonlinear wave chaos: statistics of second harmonic fields.
Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M
2017-10-01
Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.
NASA Astrophysics Data System (ADS)
Cai, Jizhe; Naraghi, Mohammad
2016-08-01
In this work, a comprehensive multi-resolution two-dimensional (2D) resistor network model is proposed to analyze the electrical conductivity of hybrid nanomaterials made of an insulating matrix with conductive particles, such as CNT reinforced nanocomposites and thick film resistors. Unlike existing approaches, our model takes into account the impenetrability of the particles and their random placement within the matrix. Moreover, our model presents a detailed description of intra-particle conductivity via finite element analysis, which to the authors’ best knowledge has not been addressed before. The inter-particle conductivity is assumed to be primarily due to electron tunneling. The model is then used to predict the electrical conductivity of electrospun carbon nanofibers as a function of microstructural parameters such as turbostratic domain alignment and aspect ratio. To simulate the microstructure of a single CNF, randomly positioned nucleation sites were seeded and grown as turbostratic particles with anisotropic growth rates. Particle growth was in steps, and growth of each particle in each direction was stopped upon contact with other particles. The study points to the significant contribution of both intra-particle and inter-particle conductivity to the overall conductivity of hybrid composites. The influence of particle alignment and anisotropic growth rate ratio on electrical conductivity is also discussed. The results show that partial alignment, in contrast to complete alignment, can result in maximum electrical conductivity of the whole CNF. High degrees of alignment can adversely affect conductivity by lowering the probability of the formation of a conductive path. The results demonstrate approaches to enhance electrical conductivity of hybrid materials through controlling their microstructure, which is applicable not only to carbon nanofibers but also to many other types of hybrid composites such as thick film resistors.
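The core numerical step in any resistor network model is nodal analysis: assemble the conductance Laplacian and solve for node potentials. A minimal sketch follows; in the paper's model the off-diagonal conductances would come from intra-particle FEA and inter-particle tunneling, whereas here a toy network is used:

```python
import numpy as np

def effective_resistance(conductance, a, b):
    """Effective resistance between nodes a and b of a resistor network,
    given its symmetric matrix of pairwise conductances g_ij."""
    n = conductance.shape[0]
    # graph Laplacian of the conductance network
    L = np.diag(conductance.sum(axis=1)) - conductance
    # ground node b, inject 1 A at node a, solve L v = I for potentials
    keep = [i for i in range(n) if i != b]
    rhs = np.zeros(n - 1)
    rhs[keep.index(a)] = 1.0
    v = np.linalg.solve(L[np.ix_(keep, keep)], rhs)
    return v[keep.index(a)]        # V/I with I = 1 A gives R_eff

# chain of two unit resistors: node 0 -(1 ohm)- node 1 -(1 ohm)- node 2
g = np.zeros((3, 3))
g[0, 1] = g[1, 0] = 1.0
g[1, 2] = g[2, 1] = 1.0
print(effective_resistance(g, 0, 2))   # expect about 2 ohms in series
```

Sweeping particle placements and recomputing the network conductance is how percolation-type conductivity statistics are obtained from such a model.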
Random matrix approach to the dynamics of stock inventory variations
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János
2012-09-01
It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors, which contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficients C_ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ1 and λ2) of the correlation matrix cannot be explained by random matrix theory, and the projections of investors' inventory variations on the first eigenvector u(λ1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients C_VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small fraction of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets.
Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Peng, Chen
2017-09-01
The augmented multi-indexed matrix approach acts as a powerful tool in reducing the conservatism of control synthesis of discrete-time Takagi-Sugeno fuzzy systems. However, its computational burden is sometimes too heavy as a tradeoff. Nowadays, reducing the conservatism whilst alleviating the computational burden has become an ideal but very challenging problem. This paper seeks an efficient way to achieve a satisfactory answer. Different from the augmented multi-indexed matrix approach in the literature, we aim to design a more efficient slack variable approach under a general framework of homogeneous matrix polynomials. Thanks to the introduction of a new extended representation for homogeneous matrix polynomials, related matrices with the same coefficient are collected together into one sole set, and thus those redundant terms of the augmented multi-indexed matrix approach can be removed, i.e., the computational burden can be alleviated in this paper. More importantly, due to the fact that more useful information is involved in the control design, the conservatism of the proposed approach is also less than that of the augmented multi-indexed matrix approach. Finally, numerical experiments are given to show the effectiveness of the proposed approach.
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and ... The reduction: First, let W ∈ R^{d×k} be a linear transformation such that M2(W, W) = W^T M2 W = I, where I is the k × k identity matrix (i.e., W whitens ... approximate the whitening matrix W ∈ R^{d×k} from the second-moment matrix M2 ∈ R^{d×d}. To do this, one first multiplies M2 by a random matrix R ∈ R^{d×k′} for some k′ ≥ k
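A whitening matrix W with Wᵀ M2 W = I can be obtained directly from the leading eigenpairs of M2 (the excerpt's randomized variant via R is an approximation to the same thing). A sketch, with a synthetic rank-k second-moment matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 3

# synthetic rank-k symmetric PSD "second moment" M2 = U U^T
u = rng.standard_normal((d, k))
M2 = u @ u.T

# keep the k leading eigenpairs of M2
vals, vecs = np.linalg.eigh(M2)
vals, vecs = vals[-k:], vecs[:, -k:]

# scale eigenvector columns by 1/sqrt(eigenvalue): W = V Lambda^{-1/2},
# so that W^T M2 W = Lambda^{-1/2} V^T (V Lambda V^T) V Lambda^{-1/2} = I
W = vecs / np.sqrt(vals)
print(W.shape)
```

In the tensor decomposition method, this W is then applied to the third-moment tensor to reduce it to an orthogonally decomposable k × k × k tensor.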
Structure of local interactions in complex financial dynamics
Jiang, X. F.; Chen, T. T.; Zheng, B.
2014-01-01
With the network methods and random matrix theory, we investigate the interaction structure of communities in financial markets. In particular, based on the random matrix decomposition, we clarify that the local interactions between the business sectors (subsectors) are mainly contained in the sector mode. In the sector mode, the average correlation inside the sectors is positive, while that between the sectors is negative. Further, we explore the time evolution of the interaction structure of the business sectors, and observe that the local interaction structure changes dramatically during a financial bubble or crisis. PMID:24936906
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to ≤100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
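The matrix-selection method described above can be sketched in a few lines: draw one observed projection matrix at each time step, project the population vector, and average the log growth per step to estimate log(lambda_s). The two example matrices below are hypothetical, not from the study:

```python
import math
import random

def stochastic_growth_rate(matrices, t_max, seed=0):
    """Estimate log(lambda_s) by the matrix-selection method: at each time
    step one observed projection matrix is drawn at random, the population
    vector is projected, and the log of the total growth is accumulated."""
    rng = random.Random(seed)
    k = len(matrices[0])
    n = [1.0 / k] * k                 # normalized population vector
    log_growth = 0.0
    for _ in range(t_max):
        a = rng.choice(matrices)
        n = [sum(a[i][j] * n[j] for j in range(k)) for i in range(k)]
        s = sum(n)
        log_growth += math.log(s)     # per-step growth factor
        n = [x / s for x in n]        # renormalize to avoid over/underflow
    return log_growth / t_max

# two hypothetical 2x2 stage-structured matrices ("good year" / "bad year")
good = [[0.5, 2.0], [0.4, 0.8]]
bad = [[0.2, 0.8], [0.1, 0.4]]
print(stochastic_growth_rate([good, bad], t_max=5000))
```

The element-selection methods compared in the study differ only in how `a` is built at each step: instead of choosing a whole observed matrix, each transition probability is drawn from a fitted distribution (beta, truncated-gamma, etc.) with column sums constrained.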
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed. On this basis, simulations are done to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in reconstruction, with a higher denoising capability than that of the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
Copolymers For Capillary Gel Electrophoresis
Liu, Changsheng; Li, Qingbo
2005-08-09
This invention relates to an electrophoresis separation medium having a gel matrix of at least one random, linear copolymer comprising a primary comonomer and at least one secondary comonomer, wherein the comonomers are randomly distributed along the copolymer chain. The primary comonomer is an acrylamide or an acrylamide derivative that provides the primary physical, chemical, and sieving properties of the gel matrix. The at least one secondary comonomer imparts an inherent physical, chemical, or sieving property to the copolymer chain. The primary and secondary comonomers are present in a ratio sufficient to induce desired properties that optimize electrophoresis performance. The invention also relates to a method of separating a mixture of biological molecules using this gel matrix, a method of preparing the novel electrophoresis separation medium, and a capillary tube filled with the electrophoresis separation medium.
Removal of Stationary Sinusoidal Noise from Random Vibration Signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian; Cap, Jerome S.
In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02 only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
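A matrix-inversion style removal amounts to: locate the tone with an FFT peak search, least-squares fit sine and cosine components at that frequency (a 2-column normal-equations solve), and subtract the fit. A sketch under simple assumptions (a bin-centered tone and white noise; not the report's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1024.0, 4096
t = np.arange(n) / fs
tone = 2.0 * np.sin(2 * np.pi * 50.0 * t)     # stationary 50 Hz sine
signal = rng.standard_normal(n) + tone        # tone buried in random noise

# identify the tonal frequency from the FFT magnitude peak
freqs = np.fft.rfftfreq(n, 1 / fs)
f0 = freqs[np.argmax(np.abs(np.fft.rfft(signal)))]

# "matrix inversion" removal: least-squares fit of sin/cos at f0, subtract
H = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(H, signal, rcond=None)
cleaned = signal - H @ coef
```

Band-pass filtering around f0 before the fit, as the report recommends, reduces the noise leaking into `coef`; off-bin tones additionally require refining f0 beyond the FFT grid.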
Simple Emergent Power Spectra from Complex Inflationary Physics
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2016-09-01
We construct ensembles of random scalar potentials for Nf interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf = O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf ≫ 1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large-Nf universality of random matrix theory.
Direct Simulation of Multiple Scattering by Discrete Random Media Illuminated by Gaussian Beams
NASA Technical Reports Server (NTRS)
Mackowski, Daniel W.; Mishchenko, Michael I.
2011-01-01
The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.
N'Gom, Moussa; Lien, Miao-Bin; Estakhri, Nooshin M; Norris, Theodore B; Michielssen, Eric; Nadakuditi, Raj Rao
2017-05-31
Complex Semi-Definite Programming (SDP) is introduced as a novel approach to phase-retrieval-enabled control of monochromatic light transmission through highly scattering media. In a simple optical setup, a spatial light modulator is used to generate a random sequence of phase-modulated wavefronts, and the resulting intensity speckle patterns in the transmitted light are acquired on a camera. The SDP algorithm allows computation of the complex transmission matrix of the system from this sequence of intensity-only measurements, without need for a reference beam. Once the transmission matrix is determined, optimal wavefronts are computed that focus the incident beam to any position or sequence of positions on the far side of the scattering medium, without the need for any subsequent measurements or wavefront shaping iterations. The number of measurements required and the degree of enhancement of the intensity at focus are determined by the number of pixels controlled by the spatial light modulator.
Fractional quantum mechanics on networks: Long-range dynamics and quantum transport
NASA Astrophysics Data System (ADS)
Riascos, A. P.; Mateos, José L.
2015-11-01
In this paper we study the quantum transport on networks with a temporal evolution governed by the fractional Schrödinger equation. We generalize the dynamics based on continuous-time quantum walks, with transitions to nearest neighbors on the network, to the fractional case that allows long-range displacements. By using the fractional Laplacian matrix of a network, we establish a formalism that combines a long-range dynamics with the quantum superposition of states; this general approach applies to any type of connected undirected networks, including regular, random, and complex networks, and can be implemented from the spectral properties of the Laplacian matrix. We study the fractional dynamics and its capacity to explore the network by means of the transition probability, the average probability of return, and global quantities that characterize the efficiency of this quantum process. As a particular case, we explore analytically these quantities for circulant networks such as rings, interacting cycles, and complete graphs.
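The spectral construction described above can be sketched directly: diagonalize the graph Laplacian and raise its eigenvalues to a fractional power. This is a minimal illustration of the formalism, not the authors' code; the function name and the ring example are ours.

```python
import numpy as np

def fractional_laplacian(adjacency, gamma):
    """L^gamma from the spectral decomposition of the graph Laplacian
    L = D - A of an undirected network (illustrative sketch)."""
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    # L is symmetric for undirected networks, so eigh applies
    w, V = np.linalg.eigh(L)
    w = np.clip(w, 0.0, None)            # guard tiny negative round-off
    return V @ np.diag(w ** gamma) @ V.T

# Ring (cycle) of 6 nodes
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
Lg = fractional_laplacian(A, 0.5)        # fractional case, long-range couplings
L1 = fractional_laplacian(A, 1.0)        # gamma = 1 recovers the ordinary Laplacian
```

Like L itself, L^gamma annihilates the constant vector, so its rows still sum to zero; for gamma < 1 the off-diagonal entries no longer vanish beyond nearest neighbors, which is the long-range displacement mechanism.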
ePix: a class of architectures for second generation LCLS cameras
Dragone, A.; Caragiulo, P.; Markovic, B.; ...
2014-03-31
ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration and readout of the matrix. Based on this platform a class of front-end ASICs and several camera modules, meeting different requirements, can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integration of detector modules with different size, shape or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100 dedicated to ultra low noise applications and ePix10k for high dynamic range applications.
Density-matrix based determination of low-energy model Hamiltonians from ab initio wavefunctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Changlani, Hitesh J.; Zheng, Huihuo; Wagner, Lucas K.
2015-09-14
We propose a way of obtaining effective low energy Hubbard-like model Hamiltonians from ab initio quantum Monte Carlo calculations for molecular and extended systems. The Hamiltonian parameters are fit to best match the ab initio two-body density matrices and energies of the ground and excited states, and thus we refer to the method as ab initio density matrix based downfolding. For benzene (a finite system), we find good agreement with experimentally available energy gaps without using any experimental inputs. For graphene, a two dimensional solid (extended system) with periodic boundary conditions, we find the effective on-site Hubbard U*/t to be 1.3 ± 0.2, comparable to a recent estimate based on the constrained random phase approximation. For molecules, such parameterizations enable calculation of excited states that are usually not accessible within ground state approaches. For solids, the effective Hamiltonian enables large-scale calculations using techniques designed for lattice models.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
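The quantities P_b and P_s can be estimated with a toy Monte Carlo over a small random systematic code on a binary symmetric channel. All parameters here (code size, crossover probability, trial count) are illustrative, not from the paper; note that since any bit error implies a block error, P_b ≤ P_s always holds.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, p, trials = 4, 8, 0.1, 300

# Systematic generator matrix G = [I | P] over GF(2), with random parity part
P = rng.integers(0, 2, size=(k, n - k))
G = np.hstack([np.eye(k, dtype=int), P])
# Full codebook: all 2^k messages encoded
msgs = np.array([[(m >> i) & 1 for i in range(k)] for m in range(2 ** k)])
codebook = (msgs @ G) % 2

bit_errs = block_errs = 0
for _ in range(trials):
    msg = rng.integers(0, 2, size=k)
    tx = (msg @ G) % 2
    rx = tx ^ (rng.random(n) < p)          # binary symmetric channel flips
    # ML decoding on the BSC = nearest codeword in Hamming distance
    best = codebook[np.argmin(((codebook + rx) % 2).sum(axis=1))]
    decoded = best[:k]                      # systematic: info bits up front
    bit_errs += int((decoded != msg).sum())
    block_errs += int(not np.array_equal(decoded, msg))

Pb = bit_errs / (trials * k)                # bit error probability estimate
Ps = block_errs / trials                    # block error probability estimate
```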
Kim, Maru; Song, In-Guk; Kim, Hyung Jin
2015-06-01
The aim of this study was to compare the results of electrocauterization and curettage, which can be done with basic instruments. Patients with ingrown nail were randomized to 2 groups: in the first group, the nail matrix was removed by curettage, and in the second group, the nail matrix was removed by electrocautery. A total of 61 patients were enrolled; 32 patients were operated by curettage, and 29 patients were operated by electrocautery. Wound infections, as an early complication, were found in 15.6% (5/32) of the curettage group and 10.3% (3/29) of the electrocautery group (P = .710). Nonrecurrence was observed in 93.8% (30/32) and 86.2% (25/29) of the curettage and electrocautery groups, respectively (lower limit of 1-sided 90% confidence interval = -2.3% > -15% [noninferiority margin]). For removal of the nail matrix, curettage is as effective as electrocauterization. Further study is required to determine the differences between the procedures. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
Existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt; the pixels of two adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with the logistic map. The random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
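The measurement-matrix construction can be sketched as follows: iterate the logistic map to get a row vector, build a circulant matrix from it, and keep the first m rows to compress. The parameter choices (x0, mu, the mapping to [-1, 1]) are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def logistic_sequence(x0, mu, n):
    """Logistic map iterates x_{k+1} = mu * x_k * (1 - x_k)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1 - x)
        xs[i] = x
    return xs

def circulant_measurement_matrix(n, m, x0=0.3, mu=3.99):
    """m x n measurement matrix: a circulant matrix whose first row is
    driven by a logistic map (illustrative sketch; the seed x0 acts as a
    compact key, which is the point of the construction described)."""
    row = 2.0 * logistic_sequence(x0, mu, n) - 1.0   # map (0,1) -> (-1,1)
    C = np.empty((n, n))
    for i in range(n):
        C[i] = np.roll(row, i)                       # circulant shifts
    return C[:m]                                     # keep m rows to compress

Phi = circulant_measurement_matrix(16, 8)
```

Because the whole matrix is regenerated from (x0, mu), only those scalars need to be shared, instead of the full matrix.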
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2016-11-01
Inspired by the commonly observed fact that, after reaching an interim equilibrium, people tend to behave somewhat randomly to break a stalemate while seeking a higher output, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with added random noise instead of the original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness due to either the action error or the payoff noise could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure
Dharan, Smitha; Nair, Achuthsankar S
2009-01-01
Background Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, which has become one of the most popular measures to search for biclusters. In this paper, we review basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP)-construction and local search phases-and propose a new method which is a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP) to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. Results We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms the basic GRASP algorithm and the Cheng and Church approach. Conclusion The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts. PMID:19208127
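The Cheng-Church quality measure referenced above has a compact closed form: the mean squared residue of a submatrix is the mean of (x_ij - row mean_i - column mean_j + overall mean)^2, and it is exactly zero for a perfectly additive bicluster. A minimal sketch:

```python
import numpy as np

def mean_squared_residue(X):
    """Cheng-Church mean squared residue of a bicluster submatrix X:
    mean over (i, j) of (x_ij - row_mean_i - col_mean_j + overall_mean)^2."""
    row_mean = X.mean(axis=1, keepdims=True)
    col_mean = X.mean(axis=0, keepdims=True)
    residue = X - row_mean - col_mean + X.mean()
    return float((residue ** 2).mean())

# A perfectly additive bicluster (x_ij = a_i + b_j) has residue exactly 0;
# adding noise makes the residue strictly positive.
perfect = np.add.outer([0.0, 1.0, 2.0], [10.0, 20.0, 30.0])
noisy = perfect + np.random.default_rng(2).normal(0.0, 0.5, perfect.shape)
```

Search procedures such as GRASP grow seeds while keeping this score below a threshold.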
A Multivariate Randomization Test of Association Applied to Cognitive Test Results
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Beard, Bettina
2009-01-01
Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
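The procedure described above can be sketched in a few lines: compute the largest eigenvalue of the correlation matrix, build the null distribution by randomly re-ordering k-1 of the variables, and report a permutation p-value. The function name, permutation count, and the synthetic data are our assumptions for illustration.

```python
import numpy as np

def randomization_test(X, n_perm=200, seed=0):
    """Permutation test of association among the k columns of X.
    Criterion: largest eigenvalue of the correlation matrix; null built by
    randomly re-ordering k-1 of the variables (sketch of the method)."""
    rng = np.random.default_rng(seed)

    def largest_eig(M):
        return np.linalg.eigvalsh(np.corrcoef(M, rowvar=False))[-1]

    observed = largest_eig(X)
    count = 0
    for _ in range(n_perm):
        Xp = X.copy()
        for j in range(1, X.shape[1]):          # leave variable 0 fixed
            Xp[:, j] = rng.permutation(Xp[:, j])
        count += largest_eig(Xp) >= observed
    return observed, (count + 1) / (n_perm + 1)

# Strongly associated variables should give a small p-value
rng = np.random.default_rng(3)
z = rng.normal(size=200)
X = np.column_stack([z + rng.normal(0.0, 0.3, 200) for _ in range(4)])
obs, pval = randomization_test(X)
```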
Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to...of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random...sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for
Path statistics, memory, and coarse-graining of continuous-time random walks on networks
Kion-Crosby, Willow; Morozov, Alexandre V.
2015-01-01
Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
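For the simplest memoryless special case that the formalism above generalizes (exponential waiting times with rate equal to the node degree), mean first-passage times follow from solving the backward equations as a linear system. This is a sketch of that baseline case, not the PathMAN implementation.

```python
import numpy as np

def ctrw_mfpt(adjacency, target):
    """Mean first-passage time to `target` for a CTRW whose waiting time
    at node i is exponential with rate deg(i). Solves L' t = 1 on the
    non-target nodes, where L' is the Laplacian with the target removed."""
    N = adjacency.shape[0]
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    keep = [i for i in range(N) if i != target]
    t = np.linalg.solve(L[np.ix_(keep, keep)], np.ones(N - 1))
    out = np.zeros(N)
    out[keep] = t
    return out

# Ring of 5 nodes with unit-rate edges, target node 0: the discrete-time
# walk needs d(N-d) steps from distance d, each taking mean time 1/2 here.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
t = ctrw_mfpt(A, 0)
```

Non-exponential waiting times and memory, the focus of the paper, break this simple linear-system picture, which is why the path-ensemble recursions are needed.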
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Han, Rui-Qi; Xie, Wen-Jie; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing
The correlation structure of a stock market contains important financial content, which may change remarkably due to the occurrence of a financial crisis. We perform a comparative analysis of the Chinese stock market around the occurrence of the 2008 crisis based on the random matrix analysis of high-frequency stock returns of 1228 Chinese stocks. Both the raw correlation matrix and the partial correlation matrix with respect to the market index in two time periods of one year are investigated. We find that the Chinese stocks have stronger average correlation and partial correlation in 2008 than in 2007 and the average partial correlation is significantly weaker than the average correlation in each period. Accordingly, the largest eigenvalue of the correlation matrix is remarkably greater than that of the partial correlation matrix in each period. Moreover, each largest eigenvalue and its eigenvector reflect an evident market effect, while other deviating eigenvalues do not. We find no evidence that deviating eigenvalues contain industrial sectorial information. Surprisingly, the eigenvectors of the second largest eigenvalues in 2007 and of the third largest eigenvalues in 2008 are able to distinguish the stocks from the two exchanges. We also find that the component magnitudes of some of the largest eigenvectors are proportional to the stocks’ capitalizations.
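The correlation-versus-partial-correlation comparison can be illustrated on a toy one-factor market: regressing the market factor out of each stock removes the dominant "market mode", so the largest eigenvalue of the partial correlation matrix drops sharply. The data and factor loading below are synthetic assumptions, not the authors' dataset.

```python
import numpy as np

def correlation_and_partial(returns, market):
    """Correlation matrix of the returns and the partial correlation with
    respect to a market index (correlations of the residuals after the
    index is regressed out); a sketch of the comparison performed."""
    C = np.corrcoef(returns, rowvar=False)
    # OLS beta of each stock on the market factor, then residualize
    beta = returns.T @ market / (market @ market)
    resid = returns - np.outer(market, beta)
    Cp = np.corrcoef(resid, rowvar=False)
    return C, Cp

rng = np.random.default_rng(4)
market = rng.normal(size=500)
idio = rng.normal(0.0, 1.0, size=(500, 20))
returns = 0.8 * market[:, None] + idio      # toy one-factor market
C, Cp = correlation_and_partial(returns, market)
lam = np.linalg.eigvalsh(C)[-1]             # inflated by the market mode
lam_p = np.linalg.eigvalsh(Cp)[-1]          # near the random-matrix edge
```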
Engineering controllable architecture in matrigel for 3D cell alignment.
Jang, Jae Myung; Tran, Si-Hoai-Trung; Na, Sang Cheol; Jeon, Noo Li
2015-02-04
We report a microfluidic approach to impart alignment to ECM components in 3D hydrogels by continuously applying fluid flow across the bulk gel during the gelation process. The microfluidic device, in which each channel can be independently filled, was tilted at 90° to generate continuous flow across the Matrigel as it gelled. The flow caused more than 70% of the ECM components to orient along the direction of flow, compared with randomly cross-linked Matrigel. Following the oriented ECM components, primary rat cortical neurons and mouse neural stem cells showed oriented outgrowth of neuronal processes within the 3D Matrigel matrix.
Schur polynomials and biorthogonal random matrix ensembles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tierz, Miguel
The study of the average of Schur polynomials over a Stieltjes-Wigert ensemble has been carried out by Dolivet and Tierz [J. Math. Phys. 48, 023507 (2007); e-print arXiv:hep-th/0609167], where it was shown that it is equal to quantum dimensions. Using the same approach, we extend the result to the biorthogonal case. We also study, using the Littlewood-Richardson rule, some particular cases of the quantum dimension result. Finally, we show that the notion of Giambelli compatibility of Schur averages, introduced by Borodin et al. [Adv. Appl. Math. 37, 209 (2006); e-print arXiv:math-ph/0505021], also holds in the biorthogonal setting.
Finite-size effects on the static properties of a single-chain magnet
NASA Astrophysics Data System (ADS)
Bogani, L.; Sessoli, R.; Pini, M. G.; Rettori, A.; Novak, M. A.; Rosa, P.; Massi, M.; Fedi, M. E.; Giuntini, L.; Caneschi, A.; Gatteschi, D.
2005-08-01
We study the role of defects in the “single-chain magnet” CoPhOMe by inserting a controlled number of diamagnetic impurities. The samples are analyzed with unprecedented accuracy with the particle induced x-ray emission technique, and with ac and dc magnetic measurements. In an external applied field the system shows an unexpected behavior, giving rise to a double peak in the susceptibility. The static thermodynamic properties of the randomly diluted Ising chain with alternating g values are then exactly obtained via a transfer matrix approach. These results are compared to the experimental behavior of CoPhOMe, showing qualitative agreement.
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by a 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the Kronecker product of the 1-D decompositions, which closely approximates the original matrix. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
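The separable structure that the alternating-direction idea exploits can be sketched with a toy example: when the 3-D correlation factorizes by coordinate direction, the full matrix is the Kronecker product of three small 1-D matrices, so only the small factors ever need to be decomposed. The Gaussian correlation shape and the tiny grid sizes are illustrative assumptions.

```python
import numpy as np

def gaussian_corr_1d(n, length_scale):
    """1-D Gaussian correlation matrix on a unit-spaced grid."""
    idx = np.arange(n)
    return np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2)

# Separable 3-D correlation: C = Cx kron Cy kron Cz. Decomposing the three
# small factors is far cheaper than decomposing the (nx*ny*nz)^2 matrix.
Cx, Cy, Cz = (gaussian_corr_1d(n, 2.0) for n in (4, 3, 2))
C = np.kron(np.kron(Cx, Cy), Cz)
```

Entry-wise, C[(ix*3+iy)*2+iz, (jx*3+jy)*2+jz] = Cx[ix,jx] * Cy[iy,jy] * Cz[iz,jz], which is the separability assumption made explicit.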
Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption of the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption of unlabeled data, i.e., neighboring instances should also share a similar set of labels. Thus they may underexploit the intrinsic structures of data. In addition, solving the matrix completion problem can be computationally expensive. To this end, we propose to efficiently solve the multi-label learning problem as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure the label smoothness over it. To speed up the convergence of our model, we develop an efficient iterative algorithm, which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
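The workhorse inside nuclear-norm ADMM solvers like the one described is the proximal operator of the nuclear norm: singular-value soft-thresholding. A minimal sketch (plain SVT on a toy denoising problem, not the paper's manifold-regularized model; all sizes and thresholds are illustrative):

```python
import numpy as np

def svt_operator(Y, tau):
    """Singular-value soft-thresholding prox_{tau * ||.||_*}(Y):
    shrink every singular value by tau and drop those that hit zero."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Denoising a noisy low-rank matrix: thresholding at a level above the
# noise singular values recovers the underlying rank.
rng = np.random.default_rng(7)
low_rank = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))   # rank 2
noisy = low_rank + rng.normal(0.0, 0.02, low_rank.shape)
denoised = svt_operator(noisy, tau=0.5)
```

In an ADMM loop this operator handles the nuclear-norm term, while the data-fit and graph-Laplacian terms are handled in the other subproblems.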
Detection Performance of Horizontal Linear Hydrophone Arrays in Shallow Water.
1980-12-15
Notation recovered from the report's symbol glossary: G = gain; P = covariance matrix; h = processor vector; H = matrix matched filter (generalized beamformer); I = unity matrix. The gain of the optimum linear processor (OLP) with an omnidirectional sensor is G = (h* P_s h)/(h* Q h) [Eq. 47]. The following two sections evaluate a few examples of application of the OLP. At broadside the signal covariance matrix reduces to a dyadic, P_s = s s*; therefore, the gain (e.g. Eq. 37) simplifies accordingly.
SUNPLIN: simulation with uncertainty for phylogenetic investigations.
Martins, Wellington S; Carmo, Welton C; Longo, Humberto J; Rosa, Thierson C; Rangel, Thiago F
2013-11-15
Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets.
The wasteland of random supergravities
NASA Astrophysics Data System (ADS)
Marsh, David; McAllister, Liam; Wrase, Timm
2012-03-01
We show that in a general N = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(−c N^p), with c, p being constants. For generic critical points we find p ≈ 1.5, while for approximately-supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
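The Wigner-plus-Wishart structure of the Hessian is easy to sample numerically. The sketch below draws one realization and measures the fraction of negative eigenvalues; the relative scales and the constant shift are illustrative stand-ins, not the supergravity normalization.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200

def wigner(n):
    """Real symmetric Wigner (GOE-like) matrix, semicircle spectrum."""
    M = rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n)
    return (M + M.T) / np.sqrt(2.0)

def wishart(n):
    """Wishart matrix A A^T / n with square Gaussian A (Marchenko-Pastur)."""
    A = rng.normal(0.0, 1.0, (n, n))
    return A @ A.T / n

# Toy Hessian model: Wigner plus two Wisharts minus a constant shift
# (the shift mimics the negative definite contribution; scale is ours).
H = wigner(N) + wishart(N) + wishart(N) - 1.0 * np.eye(N)
frac_negative = float((np.linalg.eigvalsh(H) < 0).mean())
```

The positivity probability studied in the paper is the (exponentially rare) event that this fraction is exactly zero.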
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple-input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis, using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency-domain description of the random process. The general case of matching a target probability density function using a zero-memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
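A ZMNL transform of the kind mentioned above can be sketched as a composition of the Gaussian CDF with a target inverse CDF. The unit-mean exponential target below is an illustrative choice, not the paper's example, and the input here is white rather than spectrally shaped.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n = 4096
g = rng.standard_normal(n)  # stationary Gaussian input (white here)

def zmnl(x):
    """Map a standard-normal sample onto a unit-mean exponential
    marginal (illustrative target pdf)."""
    u = 0.5 * (1.0 + erf(x / sqrt(2.0)))  # Gaussian CDF -> uniform
    u = min(max(u, 1e-12), 1.0 - 1e-12)   # guard the extreme tails
    return -np.log(1.0 - u)               # exponential inverse CDF

y = np.array([zmnl(v) for v in g])
skew = np.mean((y - y.mean()) ** 3) / y.std() ** 3  # ~2 for exponential
```

Because the mapping is monotone and memoryless, the output marginal matches the target exactly while stationarity of the input is preserved; matching a prescribed spectrum as well requires the iterative corrections the paper discusses.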
Fluctuations in an established transmission in the presence of a complex environment
NASA Astrophysics Data System (ADS)
Savin, Dmitry V.; Richter, Martin; Kuhl, Ulrich; Legrand, Olivier; Mortessagne, Fabrice
2017-09-01
In various situations where wave transport is prominent, as in wireless communication, a strong established transmission coexists with a complex scattering environment. We develop a nonperturbative approach to describe the emerging fluctuations, which combines a transmitting channel and a chaotic background in a unified effective Hamiltonian. Modeling such a background by random matrix theory, we derive exact results for both transmission and reflection distributions at arbitrary absorption, which is typically present in real systems. Remarkably, in such a complex scattering situation, the transport is governed by only two parameters: an absorption rate and the ratio of the so-called spreading width to the natural width of the transmission line. In particular, we find that the established transmission disappears sharply when this ratio exceeds unity. The approach exemplifies the role of the chaotic background in dephasing the deterministic scattering.
NASA Astrophysics Data System (ADS)
Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang
2014-07-01
This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing-measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate falling in each subsegment is governed by its corresponding random variable. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with reduced conservatism, while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.
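The segmented missing-rate model can be sketched as a two-stage draw: first pick a subsegment of the missing-rate interval with its own probability, then draw the arrival indicator. The segment boundaries and probabilities below are illustrative assumptions, and this is only one plausible reading of the model, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Missing-rate interval [0, 1] split into segments, each selected with
# its own probability (all values illustrative).
segments = [(0.0, 0.2), (0.2, 0.6), (0.6, 1.0)]
seg_prob = [0.5, 0.3, 0.2]

def sample_arrival():
    """Return 1 if the measurement arrives, 0 if it is missing."""
    i = rng.choice(len(segments), p=seg_prob)
    lo, hi = segments[i]
    missing_rate = rng.uniform(lo, hi)
    return 0 if rng.random() < missing_rate else 1

arrivals = [sample_arrival() for _ in range(10000)]
rate = 1 - np.mean(arrivals)  # empirical overall missing rate
```

The expected overall missing rate is the probability-weighted mean of the segment midpoints (0.33 for these numbers), which the empirical estimate approaches.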
Characterizations of matrix and operator-valued Φ-entropies, and operator Efron-Stein inequalities.
Cheng, Hao-Chung; Hsieh, Min-Hsiu
2016-03-01
We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19 , 1-30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron-Stein inequality.
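For reference, the matrix Φ-entropy of Chen & Tropp referenced above, and the Efron-Stein-type subadditivity discussed in the abstract, take the following shape (hypotheses abbreviated; see the cited paper for the exact class of admissible Φ):

```latex
% For suitable convex \Phi and a random positive semidefinite matrix Z,
H_{\Phi}(Z) \;=\; \mathbb{E}\,\operatorname{tr}\Phi(Z)\;-\;\operatorname{tr}\Phi\!\left(\mathbb{E} Z\right),
% and, for Z = f(X_1,\dots,X_n) with independent inputs X_i, subadditivity reads
H_{\Phi}(Z) \;\le\; \sum_{i=1}^{n} \mathbb{E}\!\left[H_{\Phi}^{(i)}(Z)\right],
% where H_{\Phi}^{(i)} denotes the entropy computed over X_i alone,
% conditionally on the remaining inputs.
```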
PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.
1998-01-01
PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform both deterministic and probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create the input file and to update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. The various options available in the code to simulate probabilistic material properties and to quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM 4766, June 1997.
NASA Astrophysics Data System (ADS)
Lv, Zhong; Chen, Huisu
2014-10-01
Autonomous healing of cracks using pre-embedded capsules containing a healing agent is becoming a promising approach to restore the strength of damaged structures. In addition to the material properties, the size and volume fraction of the capsules influence crack healing in the matrix. Understanding the crack-capsule interaction is therefore critical in the development and design of structures made of self-healing materials. Assuming that the pre-embedded capsules are randomly dispersed, we theoretically model the interaction of a flat ellipsoidal crack with capsules and determine the probability of a crack intersecting the pre-embedded capsules, i.e., the self-healing probability. We also develop a probabilistic model of a crack simultaneously meeting capsules and catalyst carriers in the matrix of a two-component self-healing system. Using a risk-based healing approach, we determine the volume fraction and size of the pre-embedded capsules required to achieve a given self-healing probability. To understand the effect of capsule shape on self-healing, we also model crack interaction with spherical and cylindrical capsules. We compare the results of our theoretical model with Monte Carlo simulations of crack-capsule interaction. The formulae presented in this paper provide guidelines for engineers working with self-healing structures in material selection and maintenance.
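A Monte Carlo estimate of this kind of intersection probability can be sketched with a simplified geometry: a penny-shaped crack of radius R in the plane z = 0 meets a spherical capsule of radius r when the capsule centre lies in the slab |z| ≤ r over the crack footprint (edge effects ignored). All dimensions, counts, and the uniform-dispersion assumption are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Crack: disk of radius R at z = 0, centred in the unit cube.
# Capsules: n_caps spheres of radius r, centres uniform in the cube.
R, r, n_caps, trials = 0.25, 0.05, 50, 2000

hits = 0
for _ in range(trials):
    c = rng.uniform(-0.5, 0.5, size=(n_caps, 3))
    meet = (np.abs(c[:, 2]) <= r) & (np.hypot(c[:, 0], c[:, 1]) <= R)
    hits += meet.any()
p_heal = hits / trials

# Analytic slab-volume estimate for comparison: one capsule hits with
# p = pi * R^2 * 2r, so at least one of n_caps hits with
p_one = np.pi * R**2 * 2 * r
p_theory = 1 - (1 - p_one) ** n_caps
```

The agreement between the simulated and analytic values mirrors the paper's comparison of theoretical formulae against Monte Carlo simulation, and inverting `p_theory` for `n_caps` gives the risk-based capsule dosage idea.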
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non-Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z; we then define I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square-error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
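The construction ∇J(W) = E[Z] with I = E[ZZ^T] can be illustrated on a toy quadratic criterion rather than an NLDA network: the per-sample gradient plays the role of Z, and the natural step preconditions the mean gradient with I⁻¹. Problem sizes, the ridge term, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy criterion J(w) = E[(x.w - y)^2] / 2 with per-sample gradient
# z = (x.w - y) x, so grad J = E[z] and the information-like matrix of
# the text is I = E[z z^T].
X = rng.standard_normal((500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
resid = X @ w - y
Z = resid[:, None] * X                 # per-sample gradients z_i
grad = Z.mean(axis=0)                  # grad J = E[z]
I = Z.T @ Z / len(X) + 1e-8 * np.eye(3)  # I = E[z z^T], ridge-stabilized

w_nat = w - 0.1 * np.linalg.solve(I, grad)  # natural-like gradient step

loss = lambda v: 0.5 * np.mean((X @ v - y) ** 2)
```

Since I is positive semidefinite, I⁻¹∇J is always a descent direction; the paper's point is that for NLDA this preconditioner is genuinely different from the Hessian, so the speed-up is not a disguised Gauss-Newton effect.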
NASA Astrophysics Data System (ADS)
Henri, Christopher; Fernàndez-Garcia, Daniel
2015-04-01
Modeling multi-species reactive transport in natural systems with strong heterogeneities and complex biochemical reactions is a major challenge for assessing groundwater polluted sites with organic and inorganic contaminants. A large variety of these contaminants react according to serial-parallel reaction networks commonly simplified by a combination of first-order kinetic reactions. In this context, a random-walk particle tracking method is presented. This method is capable of efficiently simulating the motion of particles affected by first-order network reactions in three-dimensional systems, which are represented by spatially variable physical and biochemical coefficients described at high resolution. The approach is based on the development of transition probabilities that describe the likelihood that particles belonging to a given species and location at a given time will be transformed into and moved to another species and location afterwards. These probabilities are derived from the solution matrix of the spatial moments governing equations. The method is fully coupled with reactions, free of numerical dispersion and overcomes the inherent numerical problems stemming from the incorporation of heterogeneities to reactive transport codes. In doing this, we demonstrate that the motion of particles follows a standard random walk with time-dependent effective retardation and dispersion parameters that depend on the initial and final chemical state of the particle. The behavior of effective parameters develops as a result of differential retardation effects among species. Moreover, explicit analytic solutions of the transition probability matrix and related particle motions are provided for serial reactions. 
An example of the effect of heterogeneity on the dechlorination of organic solvents in a three-dimensional random porous medium shows that the power-law behavior typically observed in conservative-tracer breakthrough curves can be largely compromised by the effect of biochemical reactions.
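The species-switching random walk described above can be sketched in one dimension: each particle advects and disperses while its species label evolves through a serial first-order network A → B → C via a per-step transition matrix. The rates, time step, and homogeneous coefficients below are illustrative; the paper's method handles spatially variable coefficients and exact transition probabilities.

```python
import numpy as np

rng = np.random.default_rng(5)

# Serial network A -> B -> C with first-order rates (illustrative).
k1, k2 = 0.5, 0.2
v, D, dt, nstep = 1.0, 0.1, 0.05, 100
npart = 5000

# Per-step species transition matrix, first-order accurate in dt.
P = np.array([[1 - k1*dt, k1*dt,     0.0  ],
              [0.0,       1 - k2*dt, k2*dt],
              [0.0,       0.0,       1.0  ]])

x = np.zeros(npart)                 # particle positions
s = np.zeros(npart, dtype=int)      # species labels, all start as A (=0)
for _ in range(nstep):
    # random-walk step: advection plus dispersion
    x += v*dt + np.sqrt(2*D*dt) * rng.standard_normal(npart)
    # species transition: sample from the row of P for each particle
    u = rng.random(npart)
    cum = P[s].cumsum(axis=1)
    s = (u[:, None] > cum).sum(axis=1)
frac = np.bincount(s, minlength=3) / npart
```

After T = 5 time units the surviving A fraction approaches exp(−k1 T) and most mass has moved into B and C, while the particle cloud centres on vT, as expected for the decoupled transport step.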
NASA Astrophysics Data System (ADS)
Henri, Christopher V.; Fernàndez-Garcia, Daniel
2014-09-01
Modeling multispecies reactive transport in natural systems with strong heterogeneities and complex biochemical reactions is a major challenge for assessing groundwater polluted sites with organic and inorganic contaminants. A large variety of these contaminants react according to serial-parallel reaction networks commonly simplified by a combination of first-order kinetic reactions. In this context, a random-walk particle tracking method is presented. This method is capable of efficiently simulating the motion of particles affected by first-order network reactions in three-dimensional systems, which are represented by spatially variable physical and biochemical coefficients described at high resolution. The approach is based on the development of transition probabilities that describe the likelihood that particles belonging to a given species and location at a given time will be transformed into and moved to another species and location afterward. These probabilities are derived from the solution matrix of the spatial moments governing equations. The method is fully coupled with reactions, free of numerical dispersion and overcomes the inherent numerical problems stemming from the incorporation of heterogeneities to reactive transport codes. In doing this, we demonstrate that the motion of particles follows a standard random walk with time-dependent effective retardation and dispersion parameters that depend on the initial and final chemical state of the particle. The behavior of effective parameters develops as a result of differential retardation effects among species. Moreover, explicit analytic solutions of the transition probability matrix and related particle motions are provided for serial reactions. 
An example of the effect of heterogeneity on the dechlorination of organic solvents in a three-dimensional random porous medium shows that the power-law behavior typically observed in conservative-tracer breakthrough curves can be largely compromised by the effect of biochemical reactions.
ERIC Educational Resources Information Center
Montague, Margariete A.
This study investigated the feasibility of concurrently and randomly sampling examinees and items in order to estimate group achievement. Seven 32-item tests reflecting a 640-item universe of simple open sentences were used such that item selection (random, systematic) and assignment (random, systematic) of items (four, eight, sixteen) to forms…
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
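The overfitting problem can be illustrated with a simple shrinkage estimator: blend the noisy sample covariance with a structured diagonal target. This is a crude stand-in for the hierarchical prior that couples covariance parameters; the dimensions and the fixed shrinkage weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# More variables than samples: the regime where the sample covariance
# badly overfits (the true covariance here is the identity).
p, n = 50, 30
X = rng.standard_normal((n, p))

S = np.cov(X, rowvar=False)            # noisy sample covariance
target = np.diag(np.diag(S))           # structured (diagonal) target
lam = 0.5                              # illustrative shrinkage weight
S_shrunk = lam * target + (1 - lam) * S

err_raw = np.linalg.norm(S - np.eye(p))
err_shrunk = np.linalg.norm(S_shrunk - np.eye(p))
```

Every off-diagonal entry of the shrunk estimate is pulled toward zero, so its Frobenius error is strictly smaller here; a hierarchical Bayes treatment effectively learns the blending weight from the data instead of fixing it.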
Akemann, G; Bloch, J; Shifrin, L; Wettig, T
2008-01-25
We analyze how individual eigenvalues of the QCD Dirac operator at nonzero quark chemical potential are distributed in the complex plane. Exact and approximate analytical results for both quenched and unquenched distributions are derived from non-Hermitian random matrix theory. When comparing these to quenched lattice QCD spectra close to the origin, excellent agreement is found for zero and nonzero topology at several values of the quark chemical potential. Our analytical results are also applicable to other physical systems in the same symmetry class.
Fidelity under isospectral perturbations: a random matrix study
NASA Astrophysics Data System (ADS)
Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.
2013-07-01
The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Because the unitary group is generated by Hermitian matrices, unitaries generated by Gaussian unitary ensemble matrices with a small parameter can be taken as small perturbations. Similarly, the orthogonal transformations generated by antisymmetric matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.
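The isospectral construction for the symmetric (time-reversal invariant) case can be sketched numerically: a small random antisymmetric generator K yields an orthogonal U, and H' = U H Uᵀ has exactly the spectrum of H. The Cayley transform below is used as a convenient stand-in for exp(εK); the matrix size and perturbation strength are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

n, eps = 8, 0.05
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                      # GOE-like symmetric "Hamiltonian"

B = rng.standard_normal((n, n))
K = eps * (B - B.T) / 2                # small antisymmetric generator
# Cayley transform of an antisymmetric matrix is orthogonal:
U = np.linalg.solve(np.eye(n) + K, np.eye(n) - K)

H_pert = U @ H @ U.T                   # isospectral perturbation of H
```

The eigenvalues of `H_pert` coincide with those of `H` to machine precision while the matrices themselves differ, which is precisely the setting in which fidelity decay under isospectral perturbations is studied.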
Multichannel Compressive Sensing MRI Using Noiselet Encoding
Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin
2015-01-01
The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
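The incoherence notion underlying this work can be made concrete with the mutual coherence μ(Φ, Ψ) = √n · max |⟨φᵢ, ψⱼ⟩| between two orthonormal bases; CS needs fewer measurements as μ approaches its minimum value of 1. The Fourier/spike pair below reaches that minimum and serves as a generic illustration; the noiselet/wavelet construction of the paper is not reproduced here.

```python
import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT basis (columns)
I = np.eye(n)                            # canonical "spike" basis

# mu = sqrt(n) * largest |inner product| between basis vectors
mu_fourier_spike = np.sqrt(n) * np.abs(F.conj().T @ I).max()
mu_spike_spike = np.sqrt(n) * np.abs(I.T @ I).max()
```

The Fourier/spike pair attains μ = 1 (maximal incoherence), while a basis measured against itself attains the worst value μ = √n; the paper's point is that noiselets sit near the favorable end of this scale when paired with wavelets.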
Nick, H M; Paluszny, A; Blunt, M J; Matthai, S K
2011-11-01
A second-order-in-space accurate implicit scheme for time-dependent advection-dispersion equations and a discrete fracture propagation model are employed to model solute transport in porous media. We study the impact of the fractures on mass transport and dispersion. To model flow and transport, pressure and transport equations are integrated using a finite-element, node-centered finite-volume approach. Fracture geometries are incrementally developed from random distributions of material flaws using an adaptive geomechanical finite-element model that also produces fracture aperture distributions. This quasistatic propagation assumes a linear elastic rock matrix, and crack propagation is governed by a subcritical crack growth failure criterion. Fracture propagation, intersection, and closure are handled geometrically. The flow and transport simulations are conducted separately for a range of fracture densities generated by the geomechanical finite-element model. These computations show that the most influential parameters for solute transport in fractured porous media are the fracture density and the fracture-matrix flux ratio, which is influenced by matrix permeability. Using an equivalent fracture aperture size, computed on the basis of the equivalent permeability of the system, we also obtain an acceptable prediction of the macrodispersion of poorly interconnected fracture networks. The results hold for fractures at relatively low density.
Moura, Fernando Silva; Aya, Julio Cesar Ceballos; Fleury, Agenor Toledo; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez
2010-02-01
One objective of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on the boundary. One of the methods used in dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, poor tracking ability of the extended Kalman filter (EKF) is achieved. An analytically developed evolution model is not feasible at this moment. This paper investigates identifying the evolution model in parallel with the EKF and updating the evolution model with a certain periodicity. The evolution model transition matrix is identified using the history of the estimated resistivity distribution obtained by a sensitivity-matrix-based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed by numerical simulations of a domain with time-varying resistivity and by experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected values for the tissues of a human chest. The EKF results suggest that the tracking ability is significantly improved with this approach.
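The random-walk evolution model mentioned above can be sketched with a scalar Kalman filter tracking a slowly drifting "resistivity": the predict step leaves the state unchanged and only inflates its covariance. The noise levels and the sinusoidal truth (mimicking breathing) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

Q, R = 1e-3, 0.05          # process / measurement noise variances (toy)
x_est, P = 0.0, 1.0        # initial state estimate and covariance
T = 200
truth = 1.0 + 0.3 * np.sin(np.linspace(0, 2*np.pi, T))

errs = []
for k in range(T):
    z = truth[k] + rng.normal(0, np.sqrt(R))   # noisy boundary-derived datum
    P = P + Q                      # predict (random walk: state unchanged)
    Kg = P / (P + R)               # Kalman gain
    x_est = x_est + Kg * (z - x_est)
    P = (1 - Kg) * P
    errs.append(x_est - truth[k])
rmse = np.sqrt(np.mean(np.square(errs[50:])))  # skip the transient
```

With a random-walk model the filter lags any sustained drift, which is the tracking weakness the paper addresses by identifying a transition matrix from the estimate history instead.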
Burgess, Stephen; Zuber, Verena; Valdes-Marquez, Elsa; Sun, Benjamin B; Hopewell, Jemma C
2017-12-01
Mendelian randomization uses genetic variants to make causal inferences about the effect of a risk factor on an outcome. With fine-mapped genetic data, there may be hundreds of genetic variants in a single gene region, any of which could be used to assess this causal relationship. However, using too many genetic variants in the analysis can lead to spurious estimates and inflated Type 1 error rates, whereas if only a few genetic variants are used, then the majority of the data is ignored and estimates are highly sensitive to the particular choice of variants. We propose an approach based on summarized data only (genetic association and correlation estimates) that uses principal components analysis to form instruments. This approach has desirable theoretical properties: it takes the totality of data into account and does not suffer from numerical instabilities. It also has good properties in simulation studies: it is not particularly sensitive to varying the genetic variants included in the analysis or the genetic correlation matrix, and it does not have greatly inflated Type 1 error rates. Overall, the method gives estimates that are less precise than those from variable selection approaches (such as using a conditional analysis or pruning approach to select variants), but are more robust to seemingly arbitrary choices in the variable selection step. The methods are illustrated by an example using genetic associations with testosterone for 320 genetic variants to assess the effect of sex-hormone-related pathways on coronary artery disease risk, in which variable selection approaches give inconsistent inferences. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
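The dimension-reduction step can be sketched as a principal components analysis of the variant correlation matrix, keeping enough leading components to explain most of the variance. The synthetic low-rank LD structure, the 95% threshold, and the sizes below are illustrative assumptions, not the paper's weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic region: 100 variants with a strong 5-factor LD structure.
m = 100
L = 3.0 * rng.standard_normal((m, 5))
raw = L @ L.T + np.eye(m)
d = np.sqrt(np.diag(raw))
corr = raw / np.outer(d, d)                # unit-diagonal correlation

evals, evecs = np.linalg.eigh(corr)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

explained = np.cumsum(evals) / evals.sum()
k = int(np.searchsorted(explained, 0.95) + 1)  # components for 95%
pcs = evecs[:, :k]                          # instruments: top components
```

A handful of components summarizes hundreds of correlated variants, which is why the resulting instruments are insensitive to exactly which variants enter the analysis.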
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix relative to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum in the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response vector that has a sparse support on the spatio-temporal plane. We use convex-relaxation-based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
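The low-rank-plus-diagonal split can be illustrated with simple eigenvalue thresholding of a sample covariance, a lightweight surrogate for the trace-minimization/shrinkage solver described above. The channel count, clutter rank, power levels, and the median-based noise-floor heuristic are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

# Ground truth: rank-r clutter plus white noise over n channels (toy).
n, r, cnr = 40, 4, 100.0
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
R_true = cnr * V @ V.T + np.eye(n)

# Sample covariance from 10n snapshots.
X = np.linalg.cholesky(R_true) @ rng.standard_normal((n, 10 * n))
S = X @ X.T / (10 * n)

w, U = np.linalg.eigh(S)
noise = np.median(w)                 # robust noise-floor estimate
keep = w > 5 * noise                 # clutter-subspace selector
R_clutter = (U[:, keep] * (w[keep] - noise)) @ U[:, keep].T
rank_est = int(keep.sum())
```

With a clutter-to-noise ratio this large, the clutter eigenvalues stand far above the Marchenko-Pastur noise bulk, so the estimated rank matches the true clutter rank, mirroring the rank-dependent sample-support result quoted in the abstract.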
Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L
2012-10-01
Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
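The standard-addition methodology introduced above can be sketched in a few lines: spike known increments of analyte into the biological matrix, regress response on the spiked amount, and read the endogenous level off the x-intercept. The concentrations and response factor below are illustrative and noise-free for clarity.

```python
import numpy as np

added = np.array([0.0, 5.0, 10.0, 20.0])   # spiked concentrations
endogenous_true = 7.5                       # unknown level in the matrix
slope_true = 2.0                            # detector response factor
response = slope_true * (added + endogenous_true)

# Linear fit: response = slope * added + intercept, so the endogenous
# concentration is intercept / slope (the negative x-intercept).
slope, intercept = np.polyfit(added, response, 1)
endogenous_est = intercept / slope
```

Because the calibration occurs in the actual biological matrix, a linear fit here is also a direct check of parallelism: curvature or a slope mismatch against a neat-solution calibration flags a matrix effect.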
A minimum drives automatic target definition procedure for multi-axis random control testing
NASA Astrophysics Data System (ADS)
Musella, Umberto; D'Elia, Giacomo; Carrella, Alex; Peeters, Bart; Mucchi, Emiliano; Marulo, Francesco; Guillaume, Patrick
2018-07-01
Multiple-Input Multiple-Output (MIMO) vibration control tests are able to closely replicate, via shaker excitation, the vibration environment that a structure needs to withstand during its operational life. This feature is fundamental to accurately verify the experienced stress state, and ultimately the fatigue life, of the tested structure. In the case of MIMO random tests, the control target is a full reference Spectral Density Matrix in the frequency band of interest. The diagonal terms are the Power Spectral Densities (PSDs), representative of the operational acceleration levels, and the off-diagonal terms are the Cross Spectral Densities (CSDs). The specifications of random vibration tests are, however, often given in terms of PSDs only, coming from a legacy of single-axis testing; information about the CSDs is often missing. An accurate definition of the CSD profiles can further enhance MIMO random testing practice, as these terms influence both the responses and the shaker voltages (the so-called drives). The challenges are linked to the algebraic constraint that the full reference matrix must be positive semi-definite in the entire bandwidth, with no flexibility in modifying the given PSDs. This paper proposes a newly developed method that automatically provides the full reference matrix without modifying the PSDs, which are considered as test specifications. The innovative feature is the capability of minimizing the drives required to match the reference PSDs and, at the same time, of directly guaranteeing that the obtained full matrix is positive semi-definite. The drives minimization aims, on the one hand, to reach the fixed test specifications without stressing the delicate excitation system; on the other hand, it potentially allows the test levels to be increased further. The detailed analytic derivation and implementation steps of the proposed method are followed by real-life testing considering different scenarios.
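The positive semi-definiteness constraint can be illustrated with a simple CSD-filling rule: if the unit-diagonal coherence matrix C is itself PSD, then the full reference matrix D C D (with D the diagonal of square-root PSDs) stays PSD at every frequency line. The constant-coherence choice below is only an illustrative feasible fill, not the minimum-drives solution of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

n_freq, n_ch = 50, 3
psd = np.abs(rng.standard_normal((n_freq, n_ch))) + 0.1  # target PSDs

gamma = 0.4  # common real coherence (illustrative)
# C = (1-gamma) I + gamma * ones: eigenvalues 1-gamma and 1+(n-1)gamma,
# all nonnegative for 0 <= gamma <= 1, so C is PSD with unit diagonal.
C = (1 - gamma) * np.eye(n_ch) + gamma * np.ones((n_ch, n_ch))

min_eig = np.inf
for f in range(n_freq):
    d = np.sqrt(psd[f])
    S_ref = np.outer(d, d) * C       # full reference matrix at line f
    min_eig = min(min_eig, np.linalg.eigvalsh(S_ref).min())
```

Any congruence D C D of a PSD matrix remains PSD, so the PSD specifications are met exactly while the CSDs are guaranteed admissible; the paper's contribution is choosing the CSDs within this feasible set so as to minimize the drives.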
Bottom-Up and Top-Down Solid-State NMR Approaches for Bacterial Biofilm Matrix Composition
Cegelski, Lynette
2015-01-01
The genomics and proteomics revolutions have been enormously successful in providing crucial “parts lists” for biological systems. Yet, formidable challenges exist in generating complete descriptions of how the parts function and assemble into macromolecular complexes and whole-cell assemblies. Bacterial biofilms are complex multicellular bacterial communities protected by a slime-like extracellular matrix that confers protection to environmental stress and enhances resistance to antibiotics and host defenses. As a non-crystalline, insoluble, heterogeneous assembly, the biofilm extracellular matrix poses a challenge to compositional analysis by conventional methods. In this Perspective, bottom-up and top-down solid-state NMR approaches are described for defining chemical composition in complex macrosystems. The “sum-of-the-parts” bottom-up approach was introduced to examine the amyloid-integrated biofilms formed by E. coli and permitted the first determination of the composition of the intact extracellular matrix from a bacterial biofilm. An alternative top-down approach was developed to define composition in V. cholerae biofilms and relied on an extensive panel of NMR measurements to tease out specific carbon pools from a single sample of the intact extracellular matrix. These two approaches are widely applicable to other heterogeneous assemblies. For bacterial biofilms, quantitative parameters of matrix composition are needed to understand how biofilms are assembled, to improve the development of biofilm inhibitors, and to dissect inhibitor modes of action. Solid-state NMR approaches will also be invaluable in obtaining parameters of matrix architecture. PMID:25797008
Bottom-up and top-down solid-state NMR approaches for bacterial biofilm matrix composition.
Cegelski, Lynette
2015-04-01
The genomics and proteomics revolutions have been enormously successful in providing crucial "parts lists" for biological systems. Yet, formidable challenges exist in generating complete descriptions of how the parts function and assemble into macromolecular complexes and whole-cell assemblies. Bacterial biofilms are complex multicellular bacterial communities protected by a slime-like extracellular matrix that confers protection to environmental stress and enhances resistance to antibiotics and host defenses. As a non-crystalline, insoluble, heterogeneous assembly, the biofilm extracellular matrix poses a challenge to compositional analysis by conventional methods. In this perspective, bottom-up and top-down solid-state NMR approaches are described for defining chemical composition in complex macrosystems. The "sum-of-the-parts" bottom-up approach was introduced to examine the amyloid-integrated biofilms formed by Escherichia coli and permitted the first determination of the composition of the intact extracellular matrix from a bacterial biofilm. An alternative top-down approach was developed to define composition in Vibrio cholerae biofilms and relied on an extensive panel of NMR measurements to tease out specific carbon pools from a single sample of the intact extracellular matrix. These two approaches are widely applicable to other heterogeneous assemblies. For bacterial biofilms, quantitative parameters of matrix composition are needed to understand how biofilms are assembled, to improve the development of biofilm inhibitors, and to dissect inhibitor modes of action. Solid-state NMR approaches will also be invaluable in obtaining parameters of matrix architecture. Copyright © 2015 Elsevier Inc. All rights reserved.
Characterizations of matrix and operator-valued Φ-entropies, and operator Efron–Stein inequalities
Cheng, Hao-Chung; Hsieh, Min-Hsiu
2016-01-01
We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19, 1–30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron–Stein inequality. PMID:27118909
QCD Dirac operator at nonzero chemical potential: lattice data and matrix model.
Akemann, Gernot; Wettig, Tilo
2004-03-12
Recently, a non-Hermitian chiral random matrix model was proposed to describe the eigenvalues of the QCD Dirac operator at nonzero chemical potential. This matrix model can be constructed from QCD by mapping it to an equivalent matrix model which has the same symmetries as QCD with chemical potential. Its microscopic spectral correlations are conjectured to be identical to those of the QCD Dirac operator. We investigate this conjecture by comparing large ensembles of Dirac eigenvalues in quenched SU(3) lattice QCD at a nonzero chemical potential to the analytical predictions of the matrix model. Excellent agreement is found in the two regimes of weak and strong non-Hermiticity, for several different lattice volumes.
Random acoustic metamaterial with a subwavelength dipolar resonance.
Duranteau, Mickaël; Valier-Brasier, Tony; Conoir, Jean-Marc; Wunenburger, Régis
2016-06-01
The effective velocity and attenuation of longitudinal waves through random dispersions of rigid tungsten-carbide beads in an elastic epoxy-resin matrix, for bead volume fractions in the range 2%-10%, are determined experimentally. The multiple scattering model proposed by Luppé, Conoir, and Norris [J. Acoust. Soc. Am. 131(2), 1113-1120 (2012)], which fully takes into account the elastic nature of the matrix and the associated mode conversions, accurately describes the measurements. Theoretical calculations show that the rigid particles display a local, dipolar resonance which shares several features with the Minnaert resonance of bubbly liquids and with the dipolar resonance of core-shell particles. Moreover, for the samples under study, the main cause of smoothing of the dipolar resonance of the scatterers and the associated variations of the effective mass density of the dispersions is elastic relaxation, i.e., the finite time required for the shear stresses associated with the translational motion of the scatterers to propagate through the matrix. It is shown that its influence is governed solely by the value of the particle-to-matrix mass density contrast.
Random Matrix Theory in molecular dynamics analysis.
Palese, Luigi Leonardo
2015-01-01
It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low index projections. Because this is reminiscent of the results obtained by performing PCA on a multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of correlation matrices allows one to easily differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting the intrinsic system properties. Our results clearly show that protein dynamics is not really Brownian, even in the presence of the cosine-shaped low index projections on principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
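The random-versus-signal discrimination described in this abstract can be illustrated with a minimal numpy sketch. This is a generic, assumed recipe using the Marchenko-Pastur bound for correlation-matrix eigenvalues, not the paper's exact procedure:

```python
# Sketch: separating "random" from "signal" eigenvalues of a correlation
# matrix with the Marchenko-Pastur (MP) bound, a standard RMT recipe.
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 50                      # observations x variables
data = rng.standard_normal((T, N))   # pure noise: no genuine correlations
C = np.corrcoef(data, rowvar=False)  # N x N correlation matrix
eigvals = np.linalg.eigvalsh(C)

# MP support for the eigenvalues of an iid-noise correlation matrix
q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2
lam_minus = (1 - np.sqrt(q)) ** 2

# Eigenvalues outside [lam_minus, lam_plus] indicate genuine structure;
# for pure noise essentially all of them fall inside the MP bulk.
signal = eigvals[(eigvals > lam_plus) | (eigvals < lam_minus)]
print(f"{len(signal)} of {N} eigenvalues fall outside the MP bulk")
```

Applied to correlation matrices from a molecular dynamics trajectory, eigenvalues escaping the MP bulk would flag non-random collective motions.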
Sousa, João Carlos; Costa, Manuel João; Palha, Joana Almeida
2010-03-01
The biochemistry and molecular biology of the extracellular matrix (ECM) is difficult to convey to students in a classroom setting in ways that capture their interest. The understanding of the matrix's roles in physiological and pathological conditions will presumably be hampered by insufficient knowledge of its molecular structure. Internet-available resources can bridge the divide between the molecular details and the ECM's biological properties and associated processes. This article presents an approach to teach the ECM developed for first year medical undergraduates who, working in teams: (i) Explore a specific molecular component of the matrix, (ii) identify a disease in which the component is implicated, (iii) investigate how the component's structure/function contributes to the ECM's supramolecular organization in physiological and in pathological conditions, and (iv) share their findings with colleagues. The approach, designated i-cell-MATRIX, is focused on the contribution of individual components to the overall organization and biological functions of the ECM. i-cell-MATRIX is student centered and uses 5 hours of class time. Summary of results and take home message: A "1-minute paper" has been used to gather student feedback on the impact of i-cell-MATRIX. Qualitative analysis of student feedback gathered in three consecutive years revealed that students appreciate the approach's reliance on self-directed learning, the interactivity embedded, and the demand for deeper insights on the ECM. Learning how to use internet biomedical resources is another positive outcome. Ninety percent of students recommend the activity for subsequent years. i-cell-MATRIX is adaptable by other medical schools which may be looking for an approach that achieves higher student engagement with the ECM. Copyright © 2010 International Union of Biochemistry and Molecular Biology, Inc.
Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone
2018-01-01
We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
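As a hedged illustration of the primitive underlying this abstract (the randomized factorization itself, not its integration into TEBD/DMRG truncations), a randomized low-rank SVD in the spirit of the standard random-range-finding scheme can be sketched as:

```python
# Sketch of randomized low-rank factorization: project onto a random
# subspace, orthonormalize, then take an exact SVD of the small matrix.
import numpy as np

def randomized_svd(A, rank, n_oversample=10, rng=None):
    """Approximate rank-`rank` SVD of A via random range finding."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                          # small (rank+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Exact-rank-5 test matrix: the randomized factorization recovers it
# essentially to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

In tensor-network use, such a routine replaces the deterministic truncated SVD in the bond-truncation step, which is where the reported speedups come from.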
Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2011-01-01
The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.
Statistical properties of the stock and credit market: RMT and network topology
NASA Astrophysics Data System (ADS)
Lim, Kyuseong; Kim, Min Jae; Kim, Sehyun; Kim, Soo Yong
We analyzed the dependence structure of the credit and stock markets using random matrix theory and network topology. The dynamics of both markets have been spotlighted throughout the subprime crisis. In this study, we compared these two markets in view of the market-wide effect, using random matrix theory and eigenvalue analysis. We found that the largest eigenvalue of the credit market as a whole preceded that of the stock market at the beginning of the financial crisis, and that those of the two markets tended to be synchronized after the crisis. The correlation between the companies of both markets became considerably stronger after the crisis as well.
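The "largest eigenvalue as market-wide mode" idea used in this analysis can be sketched with synthetic returns. The one-factor model below is a hypothetical illustration (not the study's data): the stronger the common factor, the larger the share of total variance captured by the top eigenvalue of the correlation matrix.

```python
# Sketch: the largest eigenvalue of a return-correlation matrix tracks
# the strength of the market-wide common factor.
import numpy as np

rng = np.random.default_rng(2)
T, N = 1000, 40
common = rng.standard_normal((T, 1))     # hypothetical market-wide factor

def largest_eig_share(beta):
    """Fraction of total variance in the top eigenvalue for factor weight beta."""
    returns = beta * common + rng.standard_normal((T, N))
    C = np.corrcoef(returns, rowvar=False)
    return np.linalg.eigvalsh(C)[-1] / N  # eigenvalues sum to N

weak, strong = largest_eig_share(0.2), largest_eig_share(1.0)
print(f"top-eigenvalue share: weak factor {weak:.2f}, strong factor {strong:.2f}")
```

Tracking this quantity in rolling windows for two markets is one way to see the lead-lag and synchronization effects the abstract describes.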
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented method can effectively correct a modal aberration by applying just one perturbation to the deformable mirror (one correction per perturbation); the correction is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which requires N perturbations of the deformable mirror for each aberration correction (one correction per N perturbations).
Quantum ergodicity in the SYK model
NASA Astrophysics Data System (ADS)
Altland, Alexander; Bagrets, Dmitry
2018-05-01
We present a replica path integral approach describing the quantum chaotic dynamics of the SYK model at large time scales. The theory leads to the identification of non-ergodic collective modes which relax and eventually give way to an ergodic long time regime (describable by random matrix theory). These modes, which play a role conceptually similar to the diffusion modes of dirty metals, carry quantum numbers which we identify as the generators of the Clifford algebra: each of the 2^N different products that can be formed from N Majorana operators defines one effective mode. The competition between a decay rate that grows quickly with the order of the product and a density of modes that grows exponentially in the same parameter explains the characteristics of the system's approach to the ergodic long time regime. We probe this dynamics through various spectral correlation functions and obtain favorable agreement with existing numerical data.
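The mode count quoted in this abstract follows from elementary combinatorics: products of distinct Majorana operators correspond to subsets of {1, ..., N}, so there are 2^N of them, graded by the order of the product. A small sketch makes the counting explicit:

```python
# Counting products of distinct Majorana operators: one product per
# subset of {1,...,N}, graded by the order k of the product.
from itertools import combinations

N = 10
products_by_order = [len(list(combinations(range(N), k))) for k in range(N + 1)]
total = sum(products_by_order)   # binomial theorem: sum_k C(N,k) = 2^N
print(total)  # 1024
```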
Population newborn screening for inherited metabolic disease: current UK perspectives.
Green, A; Pollitt, R J
1999-06-01
Some of the generally accepted criteria for screening programmes are inappropriate for newborn metabolic screening as they ignore the family dimension and the importance of timely genetic information. Uncritical application of such criteria creates special difficulties for screening by tandem mass spectrometry, which can detect a range of diseases with widely different natural histories and responsiveness to treatment. Further difficulties arise from increasing demands for direct proof of the effects of screening on long-term morbidity and mortality. The randomized controlled trial is held to be the gold standard, but for ethical and practical reasons it will be impossible to achieve for such relatively rare diseases. This approach also oversimplifies the complex matrix of costs and benefits of newborn metabolic screening. A more workable approach could involve Bayesian synthesis, combining quantitative performance data from carefully designed prospective pilot studies of screening with existing experience of the natural history, diagnosis, and management of the individual disorders concerned.
Kang, James; An, Howard; Hilibrand, Alan; Yoon, S Tim; Kavanagh, Eoin; Boden, Scott
2012-05-20
Prospective multicenter randomized clinical trial. The goal of our 2-year prospective study was to perform a randomized clinical trial comparing the outcomes of Grafton demineralized bone matrix (DBM) Matrix with local bone against those of iliac crest bone graft (ICBG) in a single-level instrumented posterior lumbar fusion. There has been extensive research and development in identifying a suitable substitute to replace autologous ICBG, which is associated with known morbidities. DBMs are a class of commercially available grafting agents that are prepared from allograft bone. Many such products have been commercially available for clinical use; however, their efficacy for spine fusion has been mostly based on anecdotal evidence rather than randomized controlled clinical trials. Forty-six patients were randomly assigned (2:1) to receive Grafton DBM Matrix with local bone (30 patients) or autologous ICBG (16 patients). The mean age was 64 (females [F] = 21, males [M] = 9) in the DBM group and 65 (F = 9, M = 5) in the ICBG group. An independent radiologist evaluated plain radiographs and computed tomographic scans at 6-month, 1-year, and 2-year time points. Clinical outcomes were measured using the Oswestry Disability Index (ODI) and the Medical Outcomes Study 36-Item Short Form Health Survey. Forty-one patients (DBM = 28 and ICBG = 13) completed the 2-year follow-up. Final fusion rates were 86% (Grafton Matrix) versus 92% (ICBG) (P = 1.0, not significant). The Grafton group showed slightly better improvement in ODI score than the ICBG group at the final 2-year follow-up (Grafton [16.2] and ICBG [22.7]); however, the difference was not statistically significant (P = 0.2346 at 24 mo). Grafton showed consistently higher physical function scores at 24 months; however, differences were not statistically significant (P = 0.0823). Similar improvements in the physical component summary scores were seen in both the Grafton and ICBG groups.
There was a statistically significant greater mean intraoperative blood loss in the ICBG group than in the Grafton group (P < 0.0031). At 2-year follow-up, subjects who were randomized to Grafton Matrix and local bone achieved an 86% overall fusion rate and improvements in clinical outcomes that were comparable with those in the ICBG group.
Studies on Relaxation Behavior of Corona Poled Aromatic Dipolar Molecules in a Polymer Matrix
1990-08-03
concentration up to 30 weight percent. As expected, the optically responsive molecules are randomly oriented in the polymer matrix, although a small amount... The retention of SH intensity of a small molecule such as MNA was found to be very poor in the PMMA matrix, while the larger rodlike...
Comprehensive T-matrix Reference Database: A 2009-2011 Update
NASA Technical Reports Server (NTRS)
Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.
2012-01-01
The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.
Finite GUE Distribution with Cut-Off at a Shock
NASA Astrophysics Data System (ADS)
Ferrari, P. L.
2018-03-01
We consider the totally asymmetric simple exclusion process with initial conditions generating a shock. The fluctuations of particle positions are asymptotically governed by the randomness around the two characteristic lines joining at the shock. Unlike in previous papers, we describe the correlation in space-time without employing the mapping to last passage percolation, which fails to exist already for the partially asymmetric model. We then consider a special case, where the asymptotic distribution is a cut-off of the distribution of the largest eigenvalue of a finite GUE matrix. Finally, we discuss the strength of the probabilistic and physically motivated approach and compare it with the mathematical difficulties of a direct computation.
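The finite-GUE largest-eigenvalue distribution appearing in this abstract is easy to explore numerically. The sketch below samples it directly (a toy illustration with an assumed normalization in which the semicircle edge sits near 2√n; the TASEP analysis itself is not reproduced):

```python
# Sketch: sampling the largest-eigenvalue distribution of a finite GUE
# matrix, H = (X + X^dagger)/2 with X complex standard Gaussian.
import numpy as np

rng = np.random.default_rng(3)

def gue_largest(n, samples=500):
    out = np.empty(samples)
    for i in range(samples):
        X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        H = (X + X.conj().T) / 2          # Hermitian GUE sample
        out[i] = np.linalg.eigvalsh(H)[-1]
    return out

top = gue_largest(20)
# The mean approaches 2*sqrt(n) from below as n grows (Tracy-Widom
# finite-size corrections pull it down at n = 20).
print(f"mean largest eigenvalue: {top.mean():.2f}")
```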
2010-08-01
...high-level performance indicators, suggesting that the fleet reacts inelastically to spending shocks. These results reveal... correlations between the data. DRDC CORA TM 2010-168 ...those observed in the data have little effect on performance. We can conclude that maintenance is very robust; shocks of...
Random pure states: Quantifying bipartite entanglement beyond the linear statistics.
Vivo, Pierpaolo; Pato, Mauricio P; Oshanin, Gleb
2016-05-01
We analyze the properties of entangled random pure states of a quantum system partitioned into two smaller subsystems of dimensions N and M. Framing the problem in terms of random matrices with a fixed-trace constraint, we establish, for arbitrary N ≤ M, a general relation between the n-point densities and the cross moments of the eigenvalues of the reduced density matrix, i.e., the so-called Schmidt eigenvalues, and the analogous functionals of the eigenvalues of the Wishart-Laguerre ensemble of random matrix theory. This allows us to derive explicit expressions for two-level densities, and also an exact expression for the variance of the von Neumann entropy at finite N, M. Then, we focus on the moments E[K^a] of the Schmidt number K, the reciprocal of the purity. This is a random variable supported on [1, N], which quantifies the number of degrees of freedom effectively contributing to the entanglement. We derive a wealth of analytical results for E[K^a] for N = 2 and 3 and arbitrary M, and also for square N = M systems by spotting for the latter a connection with the probability P(x_min^GUE ≥ √(2N) ξ) that the smallest eigenvalue x_min^GUE of an N×N matrix belonging to the Gaussian unitary ensemble is larger than √(2N) ξ. As a by-product, we present an exact asymptotic expansion for P(x_min^GUE ≥ √(2N) ξ) for finite N as ξ → ∞. Our results are corroborated by numerical simulations whenever possible, with excellent agreement.
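The central objects of this abstract (Schmidt eigenvalues, purity, and the Schmidt number K = 1/purity) can be computed numerically for a single random pure state. This sketch illustrates the definitions only; the paper's results are exact ensemble averages:

```python
# Sketch: Schmidt eigenvalues and Schmidt number K = 1/purity for a
# random bipartite pure state on C^N (x) C^M, written as an N x M matrix.
import numpy as np

rng = np.random.default_rng(4)
N, M = 3, 8                                   # subsystem dimensions, N <= M
psi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
psi /= np.linalg.norm(psi)                    # normalized random pure state

rho = psi @ psi.conj().T                      # reduced density matrix (N x N)
schmidt = np.linalg.eigvalsh(rho)             # Schmidt eigenvalues, sum to 1
purity = np.sum(schmidt ** 2)
K = 1.0 / purity                              # effective mode count, in [1, N]
print(f"Schmidt eigenvalues: {np.round(schmidt, 3)}, K = {K:.2f}")
```

Averaging K (or its powers K^a) over many such states is the Monte Carlo counterpart of the moments E[K^a] studied analytically.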
Remote sensing of earth terrain
NASA Technical Reports Server (NTRS)
Kong, J. A.
1988-01-01
Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with parameter alpha and mean N̄. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite N̄ or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.
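The compound representation described in this record can be sketched in one dimension. The sketch below is an assumed scalar simplification (a sum of N complex Gaussian scatterer returns with N negative binomial), not the full multivariate covariance-matrix model:

```python
# Sketch: K-distributed amplitude as a compound process, scalar version.
# N ~ negative binomial(alpha, mean N_bar); the return is a sum of N
# independent complex Gaussian contributions.
import numpy as np

rng = np.random.default_rng(5)

def k_distributed_samples(alpha, mean_N, n_samples=2000):
    p = alpha / (alpha + mean_N)              # numpy NB(n=alpha, p) has mean alpha(1-p)/p = mean_N
    N = rng.negative_binomial(alpha, p, size=n_samples)
    out = np.empty(n_samples, dtype=complex)
    for i, n in enumerate(N):
        z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        out[i] = z.sum() / np.sqrt(mean_N)    # normalize so mean intensity stays O(1)
    return out

x = k_distributed_samples(alpha=2.0, mean_N=50)
intensity = np.abs(x) ** 2
ratio = np.mean(intensity ** 2) / intensity.mean() ** 2
# For Gaussian speckle ratio -> 2; the finite alpha makes the tail
# heavier, pushing the normalized second moment above 2.
print(f"mean intensity {intensity.mean():.2f}, normalized 2nd moment {ratio:.2f}")
```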
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackowski, Daniel W.; Mishchenko, Michael I.
The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.
Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.
2013-01-01
In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.
NASA Astrophysics Data System (ADS)
Esmaili, Esmat; Mardaani, Mohammad; Rabani, Hassan
2018-01-01
The electronic transport of a ladder-like graphene nanoribbon, in which the on-site or hopping energies of a small part can be random, is modeled using the Green's function technique within the nearest-neighbor tight-binding approach. We employ a unitary transformation to convert the Hamiltonian of the nanoribbon into the Hamiltonian of a tight-binding ladder-like network. In this case, the disturbed part of the system includes second-neighbor hopping interactions, while the converted Hamiltonian of each ideal part is equivalent to the Hamiltonian of two periodic on-site chains. Therefore, we can insert the self-energies of the alternative on-site tight-binding chains into the inverse of the Green's function matrix of the ladder-like part. In this viewpoint, the conductance is constructed from two contributions, trans and cis. The results show that increasing the disorder strength increases the conductance of the trans contribution and decreases that of the cis contribution.
A Note on Parameters of Random Substitutions by γ-Diagonal Matrices
NASA Astrophysics Data System (ADS)
Kang, Ju-Sung
Random substitutions are a very useful and practical method for privacy-preserving schemes. In this paper we obtain the exact relationship between the estimation errors and three parameters used in random substitutions, namely the privacy assurance metric γ, the total number n of data records, and the size N of the transition matrix. We also present simulations that illustrate the theoretical result.
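A γ-diagonal transition matrix can be illustrated with the standard randomized-response form: keep a value with probability γ, otherwise substitute uniformly among the remaining N-1 values. This is a generic sketch of that construction; the paper's exact matrix and estimators may differ:

```python
# Sketch: random substitution with a gamma-diagonal stochastic matrix.
# P[i][i] = gamma, P[i][j] = (1 - gamma)/(N - 1) for j != i.
import numpy as np

rng = np.random.default_rng(6)
N_vals = 4                      # size of the value domain (N in the paper)
gamma = 0.8                     # privacy assurance parameter

P = np.full((N_vals, N_vals), (1 - gamma) / (N_vals - 1))
np.fill_diagonal(P, gamma)      # every row sums to 1: a stochastic matrix

def substitute(record):
    return rng.choice(N_vals, p=P[record])

data = rng.integers(0, N_vals, size=10000)
noisy = np.array([substitute(r) for r in data])
kept = np.mean(noisy == data)   # should concentrate near gamma
print(f"fraction unchanged: {kept:.2f}")
```

Estimating the original value distribution from the noisy one amounts to inverting P, and the estimation error then depends on γ, n, and N, which is the relationship the paper quantifies exactly.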
Matrix approach to land carbon cycle modeling: A case study with the Community Land Model.
Huang, Yuanyuan; Lu, Xingjie; Shi, Zheng; Lawrence, David; Koven, Charles D; Xia, Jianyang; Du, Zhenggang; Kluzek, Erik; Luo, Yiqi
2018-03-01
The terrestrial carbon (C) cycle has been commonly represented by a series of C balance equations to track C influxes into and effluxes out of individual pools in earth system models (ESMs). This representation matches our understanding of C cycle processes well but makes it difficult to track model behaviors. It is also computationally expensive, limiting the ability to conduct comprehensive parametric sensitivity analyses. To overcome these challenges, we have developed a matrix approach, which reorganizes the C balance equations in the original ESM into one matrix equation without changing any modeled C cycle processes and mechanisms. We applied the matrix approach to the Community Land Model (CLM4.5) with vertically-resolved biogeochemistry. The matrix equation exactly reproduces litter and soil organic carbon (SOC) dynamics of the standard CLM4.5 across different spatial-temporal scales. The matrix approach enables effective diagnosis of system properties such as C residence time and attribution of global change impacts to relevant processes. We illustrated, for example, that the impacts of CO2 fertilization on litter and SOC dynamics can be easily decomposed into the relative contributions from C input, allocation of external C into different C pools, nitrogen regulation, altered soil environmental conditions, and vertical mixing along the soil profile. In addition, the matrix tool can accelerate model spin-up, permit thorough parametric sensitivity tests, enable pool-based data assimilation, and facilitate tracking and benchmarking of model behaviors. Overall, the matrix approach can make a broad range of future modeling activities more efficient and effective. © 2017 John Wiley & Sons Ltd.
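The reorganization described in this abstract can be sketched with a toy linear pool model of the generic form dX/dt = B·u − A·K·X. All pool names, rates, and transfer fractions below are made up for illustration; CLM4.5's actual matrices are far larger and time-varying:

```python
# Toy sketch of the matrix form of a linear carbon-pool model:
# dX/dt = B*u - A K X, so the steady state solves (A K) X = B*u.
import numpy as np

u = 10.0                              # total C input flux (illustrative units)
B = np.array([0.7, 0.3, 0.0])         # allocation of input to 3 pools
K = np.diag([0.5, 0.1, 0.01])         # turnover rates per pool (fast -> slow)
A = np.array([[ 1.0,  0.0, 0.0],      # transfer matrix: off-diagonal entries
              [-0.4,  1.0, 0.0],      # move C from faster to slower pools
              [-0.1, -0.2, 1.0]])

M = A @ K                             # combined transfer/turnover operator
X_ss = np.linalg.solve(M, B * u)      # steady-state pool sizes
residence = np.linalg.solve(M, B)     # residence time per unit input
print("steady-state pools:", np.round(X_ss, 1))
```

Diagnostics like residence time drop out of the matrix form by direct linear algebra, which is exactly why the reorganization makes spin-up and sensitivity analysis cheap.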
Matrix exponential-based closures for the turbulent subgrid-scale stress tensor.
Li, Yi; Chevillard, Laurent; Eyink, Gregory; Meneveau, Charles
2009-01-01
Two approaches for closing the turbulence subgrid-scale stress tensor in terms of matrix exponentials are introduced and compared. The first approach is based on a formal solution of the stress transport equation in which the production terms can be integrated exactly in terms of matrix exponentials. This formal solution of the subgrid-scale stress transport equation is shown to be useful to explore special cases, such as the response to constant velocity gradient, but neglecting pressure-strain correlations and diffusion effects. The second approach is based on an Eulerian-Lagrangian change of variables, combined with the assumption of isotropy for the conditionally averaged Lagrangian velocity gradient tensor and with the recent fluid deformation approximation. It is shown that both approaches lead to the same basic closure in which the stress tensor is expressed as the matrix exponential of the resolved velocity gradient tensor multiplied by its transpose. Short-time expansions of the matrix exponentials are shown to provide an eddy-viscosity term and particular quadratic terms, and thus allow a reinterpretation of traditional eddy-viscosity and nonlinear stress closures. The basic feasibility of the matrix-exponential closure is illustrated by implementing it successfully in large eddy simulation of forced isotropic turbulence. The matrix-exponential closure employs the drastic approximation of entirely omitting the pressure-strain correlation and other nonlinear scrambling terms. But unlike eddy-viscosity closures, the matrix exponential approach provides a simple and local closure that can be derived directly from the stress transport equation with the production term, and using physically motivated assumptions about Lagrangian decorrelation and upstream isotropy.
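The basic object of the closure, the matrix exponential of the resolved velocity gradient times its transpose, can be sketched numerically. This is a toy illustration with an assumed decorrelation time `tau` and a self-contained Taylor-series matrix exponential, not the paper's LES implementation:

```python
# Sketch: matrix-exponential closure tau_sgs ~ exp(t*A) exp(t*A^T) for a
# resolved velocity gradient A; its short-time expansion gives I + 2*t*S,
# an eddy-viscosity-like term built from the strain rate S.
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(7)
G = rng.standard_normal((3, 3))
G -= (np.trace(G) / 3) * np.eye(3)       # trace-free, as for incompressible flow

t = 0.1                                   # assumed decorrelation time scale
stress = expm(t * G) @ expm(t * G).T      # closure form: symmetric positive definite

S = (G + G.T) / 2                         # resolved strain-rate tensor
approx = np.eye(3) + 2 * t * S            # first-order (eddy-viscosity-like) term
print(np.linalg.norm(stress - approx))    # difference is O(t^2)
```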
Milenković, Jana; Dalmış, Mehmet Ufuk; Žgajnar, Janez; Platel, Bram
2017-09-01
New ultrafast view-sharing sequences have enabled breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to be performed at high spatial and temporal resolution. The aim of this study is to evaluate the diagnostic potential of textural features that quantify the spatiotemporal changes of the contrast-agent uptake in computer-aided diagnosis of malignant and benign breast lesions imaged with high spatial and temporal resolution DCE-MRI. The proposed approach is based on the textural analysis quantifying the spatial variation of six dynamic features of the early-phase contrast-agent uptake of a lesion's largest cross-sectional area. The textural analysis is performed by means of the second-order gray-level co-occurrence matrix, gray-level run-length matrix and gray-level difference matrix. This yields 35 textural features to quantify the spatial variation of each of the six dynamic features, providing a feature set of 210 features in total. The proposed feature set is evaluated based on receiver operating characteristic (ROC) curve analysis in a cross-validation scheme for random forests (RF) and two support vector machine classifiers, with linear and radial basis function (RBF) kernel. Evaluation is done on a dataset with 154 breast lesions (83 malignant and 71 benign) and compared to a previous approach based on 3D morphological features and the average and standard deviation of the same dynamic features over the entire lesion volume as well as their average for the smaller region of the strongest uptake rate. The area under the ROC curve (AUC) obtained by the proposed approach with the RF classifier was 0.8997, which was significantly higher (P = 0.0198) than the performance achieved by the previous approach (AUC = 0.8704) on the same dataset. 
Similarly, the proposed approach obtained a significantly higher result for both SVM classifiers with RBF (P = 0.0096) and linear kernel (P = 0.0417) obtaining AUC of 0.8876 and 0.8548, respectively, compared to AUC values of previous approach of 0.8562 and 0.8311, respectively. The proposed approach based on 2D textural features quantifying spatiotemporal changes of the contrast-agent uptake significantly outperforms the previous approach based on 3D morphology and dynamic analysis in differentiating the malignant and benign breast lesions, showing its potential to aid clinical decision making. © 2017 American Association of Physicists in Medicine.
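The second-order gray-level co-occurrence matrix underlying the textural analysis above can be sketched in a few lines. This is a generic illustration on a toy quantized map with one Haralick feature (contrast), not the study's 35-feature pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Second-order gray-level co-occurrence matrix for a single pixel offset."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()  # normalize to joint probabilities

def contrast(M):
    """Haralick contrast feature: sum_ij (i - j)^2 * P(i, j)."""
    i, j = np.indices(M.shape)
    return float(((i - j) ** 2 * M).sum())

quantized = np.array([[0, 0, 1],
                      [1, 2, 2],
                      [2, 2, 3]])
P = glcm(quantized, levels=4)
print(contrast(P))  # 0.5 for this 3x3 map
```

In the study's setting, the input map would be one of the six dynamic uptake features over the lesion's largest cross-section rather than raw intensities.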
On the statistical mechanics of the 2D stochastic Euler equation
NASA Astrophysics Data System (ADS)
Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg
2011-12-01
The dynamics of vortices and large-scale structures is qualitatively very different in two-dimensional flows compared to their three-dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow, such as the formation of large-scale coherent structures, the existence of meta-stable states and random abrupt changes in the topology of the flow. In this paper we study the stochastic dynamics of the finite-dimensional approximation of the 2D Euler flow based on the Lie algebra su(N), which preserves all integrals of motion. In particular, we exploit the rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures, explore their properties and study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite-dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large-scale structures, meta-stability and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as the atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of the different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such a methodology introduces a bias, since it implicitly assumes that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation in which source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
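The two-source special case of the linear mass-balance system, with random sampling over the source distributions, can be sketched as follows. The marker values and their scatter are hypothetical, chosen only to illustrate how source variability propagates into both the spread and the central estimate of the fractions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-source mass balance for a single marker (e.g. an isotopic
# signature): source A ~ N(-27, 1), source B ~ N(-12, 1), measured mix -22.
s_a = rng.normal(-27.0, 1.0, 100_000)
s_b = rng.normal(-12.0, 1.0, 100_000)
mixed = -22.0

# f*s_a + (1 - f)*s_b = mixed  =>  f = (mixed - s_b) / (s_a - s_b),
# evaluated for each random draw of the source end-members.
f_a = (mixed - s_b) / (s_a - s_b)
print(np.median(f_a), f_a.std())  # central estimate near 2/3, plus its spread
```

The distribution of `f_a` is what the RS strategy inspects; its median need not coincide exactly with the naive point estimate, which is the bias discussed in the abstract.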
Fractal planetary rings: Energy inequalities and random field model
NASA Astrophysics Data System (ADS)
Malyarenko, Anatoliy; Ostoja-Starzewski, Martin
2017-12-01
This study is motivated by a recent observation, based on photographs from the Cassini mission, that Saturn’s rings have a fractal structure in radial direction. Accordingly, two questions are considered: (1) What Newtonian mechanics argument in support of such a fractal structure of planetary rings is possible? (2) What kinematics model of such fractal rings can be formulated? Both challenges are based on taking planetary rings’ spatial structure as being statistically stationary in time and statistically isotropic in space, but statistically nonstationary in space. An answer to the first challenge is given through an energy analysis of circular rings having a self-generated, noninteger-dimensional mass distribution [V. E. Tarasov, Int. J. Mod Phys. B 19, 4103 (2005)]. The second issue is approached by taking the random field of angular velocity vector of a rotating particle of the ring as a random section of a special vector bundle. Using the theory of group representations, we prove that such a field is completely determined by a sequence of continuous positive-definite matrix-valued functions defined on the Cartesian square F2 of the radial cross-section F of the rings, where F is a fat fractal.
Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery
NASA Astrophysics Data System (ADS)
Mahdianpari, Masoud; Salehi, Bahram; Mohammadimanesh, Fariba; Motagh, Mahdi
2017-08-01
Wetlands are important ecosystems around the world, although they are degraded by both anthropogenic and natural processes. Newfoundland is among the richest Canadian provinces in terms of different wetland classes. Herbaceous wetlands cover extensive areas of the Avalon Peninsula and are the habitat of a number of animal and plant species. In this study, a novel hierarchical object-based Random Forest (RF) classification approach is proposed for discriminating between different wetland classes in a sub-region located in the northeastern portion of the Avalon Peninsula. In particular, multi-polarization and multi-frequency SAR data, including X-band TerraSAR-X single-polarized (HH), L-band ALOS-2 dual-polarized (HH/HV), and C-band RADARSAT-2 fully polarized images, were applied at different classification levels. First, a SAR backscatter analysis of different land cover types was performed using training data and applied in Level-I classification to separate water from non-water classes. This was followed by Level-II classification, wherein the water class was further divided into shallow- and deep-water classes, and the non-water class was partitioned into herbaceous and non-herbaceous classes. In Level-III classification, the herbaceous class was further divided into bog, fen, and marsh classes, while the non-herbaceous class was subsequently partitioned into urban, upland, and swamp classes. In the Level-II and -III classifications, different polarimetric decomposition approaches, including the Cloude-Pottier, Freeman-Durden, and Yamaguchi decompositions and the Kennaugh matrix elements, were extracted to aid the RF classifier. The overall accuracy and kappa coefficient were determined at each classification level for evaluating the classification results. The importance of input features was also determined using the variable importance obtained by RF. 
It was found that the Kennaugh matrix elements, Yamaguchi, and Freeman-Durden decompositions were the most important parameters for wetland classification in this study. Using this new hierarchical RF classification approach, an overall accuracy of up to 94% was obtained for classifying different land cover types in the study area.
Optimized Projection Matrix for Compressive Sensing
NASA Astrophysics Data System (ADS)
Xu, Jianping; Pi, Yiming; Cao, Zongjie
2010-12-01
Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. To date, work on CS has typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Since the problem cannot be solved exactly due to its complexity, an alternating-minimization method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
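Mutual coherence, the quantity being minimized, and the Welch bound (attained by an ETF) that it is compared against can be computed directly. This sketch evaluates an unoptimized random Gaussian projection, taking the sparsifying matrix as the identity for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_coherence(A):
    """Largest absolute inner product between distinct, normalized columns."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)  # ignore self-correlations
    return G.max()

m, n = 20, 80
phi = rng.standard_normal((m, n))          # random projection matrix (Psi = I here)
mu = mutual_coherence(phi)
welch = np.sqrt((n - m) / (m * (n - 1)))   # Welch lower bound, attained by an ETF
print(welch < mu < 1.0)                    # random matrices sit above the bound
```

The gap between `mu` and `welch` is exactly the headroom the paper's alternating-minimization design tries to close.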
On the Wigner law in dilute random matrices
NASA Astrophysics Data System (ADS)
Khorunzhy, A.; Rodgers, G. J.
1998-12-01
We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
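The stability of the semicircle law under dilution can be checked numerically. This sketch dilutes a ±1 symmetric matrix, rescales it, and compares the low spectral moments against the semicircle values (second moment 1, fourth moment 2, the Catalan numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 1000, 0.1   # each entry kept with probability p (dilution)

mask = rng.random((N, N)) < p
signs = rng.choice([-1.0, 1.0], size=(N, N))
A = np.triu(mask * signs, 1)
A = (A + A.T) / np.sqrt(N * p)   # scale so the limiting spectrum fills [-2, 2]
eig = np.linalg.eigvalsh(A)

# Semicircle moments: E[x^2] = 1, E[x^4] = 2
print(np.mean(eig**2), np.mean(eig**4))
```

With N*p of order 100, the empirical moments land close to the semicircle values, consistent with the paper's claim that dilution preserves the Wigner law.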
The Matrix Analogies Test: A Validity Study with the K-ABC.
ERIC Educational Resources Information Center
Smith, Douglas K.
The Matrix Analogies Test-Expanded Form (MAT-EF) and Kaufman Assessment Battery for Children (K-ABC) were administered in counterbalanced order to two randomly selected samples of students in grades 2 through 5. The MAT-EF was recently developed to measure non-verbal reasoning. The samples included 26 non-handicapped second graders in a rural…
NASA Astrophysics Data System (ADS)
Hamed, Haikel Ben; Bennacer, Rachid
2008-08-01
This work evaluates, algebraically and numerically, the influence of a perturbation on the spectral values of a diagonalizable matrix. Two approaches are possible: the first uses Lidskii's theorem on perturbations of a matrix depending on a parameter, which relies primarily on the Jordan structure of the unperturbed matrix; the second factorizes the matrix system and then computes numerically the roots of the characteristic polynomial of the perturbed matrix. This problem can serve as a standard model in the equations of continuum mechanics. In this work we chose the second approach and, to illustrate the application, consider the Rayleigh-Bénard problem in a Darcy medium perturbed by a filtering through-flow. The matrix form of the problem is obtained from a linear stability analysis by a finite element method. We show that it is possible to decompose the general phenomenon into elementary ones described respectively by a perturbed matrix and a perturbation. Good agreement between the two methods was observed. To cite this article: H.B. Hamed, R. Bennacer, C. R. Mecanique 336 (2008).
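The two routes, perturbation theory versus direct numerical computation of the perturbed eigenvalues, can be compared on a toy diagonalizable matrix. For a small perturbation the first-order correction (here simply the diagonal of the perturbation, since the unperturbed matrix is taken diagonal) agrees with the numerical roots to second order:

```python
import numpy as np

rng = np.random.default_rng(6)

# First-order eigenvalue perturbation vs. direct numerical eigenvalues for
# A + eps*E. Since A is diagonal with distinct eigenvalues, the first-order
# correction to lambda_i is simply eps * E_ii.
A = np.diag([1.0, 3.0, 6.0])
E = rng.standard_normal((3, 3))
eps = 1e-4

numerical = np.sort(np.linalg.eigvals(A + eps * E).real)
first_order = np.sort(np.diag(A) + eps * np.diag(E))
print(np.max(np.abs(numerical - first_order)))  # O(eps^2) discrepancy
```

The residual scales like eps² divided by the spectral gaps, which is the regime in which the Lidskii-type expansion and the numerical root-finding agree.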
Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai
2015-08-10
Based on the Legendre polynomials expressions and its properties, this article proposes a new approach to reconstruct the distorted wavefront under test of a laser beam over square area from the phase difference data obtained by a RSI system. And the result of simulation and experimental results verifies the reliability of the method proposed in this paper. The formula of the error propagation coefficients is deduced when the phase difference data of overlapping area contain noise randomly. The matrix T which can be used to evaluate the impact of high-orders Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing is proposed, and the magnitude of impact can be estimated by calculating the F norm of the T. In addition, the relationship between ratio shear, sampling points, terms of polynomials and noise propagation coefficients, and the relationship between ratio shear, sampling points and norms of the T matrix are both analyzed, respectively. Those research results can provide an optimization design way for radial shearing interferometry system with the theoretical reference and instruction.
Localized entrapment of green fluorescent protein within nanostructured polymer films
NASA Astrophysics Data System (ADS)
Ankner, John; Kozlovskaya, Veronika; O'Neill, Hugh; Zhang, Qiu; Kharlampieva, Eugenia
2012-02-01
Protein entrapment within ultrathin polymer films is of interest for applications in biosensing, drug delivery, and bioconversion, but controlling protein distribution within the films is difficult. We report on nanostructured protein/polyelectrolyte (PE) materials obtained through incorporation of green fluorescent protein (GFP) within poly(styrene sulfonate)/poly(allylamine hydrochloride) multilayer films assembled via the spin-assisted layer-by-layer method. By using deuterated GFP as a marker for neutron scattering contrast we have inferred the architecture of the films in both normal and lateral directions. We find that films assembled with a single GFP layer exhibit a strong localization of the GFP without intermixing into the PE matrix. The GFP volume fraction approaches the monolayer density of close-packed randomly oriented GFP molecules. However, intermixing of the GFP with the PE matrix occurs in multiple-GFP layer films. Our results yield new insight into the organization of immobilized proteins within polyelectrolyte matrices and open opportunities for fabrication of protein-containing films with well-organized structure and controllable function, a crucial requirement for advanced sensing applications.
A Transfer Hamiltonian Model for Devices Based on Quantum Dot Arrays
Illera, S.; Prades, J. D.; Cirera, A.; Cornet, A.
2015-01-01
We present a model of electron transport through a random distribution of interacting quantum dots embedded in a dielectric matrix to simulate realistic devices. The method underlying the model depends only on fundamental parameters of the system and it is based on the Transfer Hamiltonian approach. A set of noncoherent rate equations can be written and the interaction between the quantum dots and between the quantum dots and the electrodes is introduced by transition rates and capacitive couplings. A realistic modelization of the capacitive couplings, the transmission coefficients, the electron/hole tunneling currents, and the density of states of each quantum dot have been taken into account. The effects of the local potential are computed within the self-consistent field regime. While the description of the theoretical framework is kept as general as possible, two specific prototypical devices, an arbitrary array of quantum dots embedded in a matrix insulator and a transistor device based on quantum dots, are used to illustrate the kind of unique insight that numerical simulations based on the theory are able to provide. PMID:25879055
CMV matrices in random matrix theory and integrable systems: a survey
NASA Astrophysics Data System (ADS)
Nenciu, Irina
2006-07-01
We present a survey of recent results concerning a remarkable class of unitary matrices, the CMV matrices. We are particularly interested in the role they play in the theory of random matrices and integrable systems. Throughout the paper we also emphasize the analogies and connections to Jacobi matrices.
Lee, Jeffrey S; Cleaver, Gerald B
2017-10-01
In this note, the Cosmic Microwave Background (CMB) Radiation is shown to be capable of functioning as a Random Bit Generator, and constitutes an effectively infinite supply of truly random one-time pad values of arbitrary length. It is further argued that the CMB power spectrum potentially conforms to the FIPS 140-2 standard. Additionally, its applicability to the generation of an (n × n) random key matrix for a Vernam cipher is established.
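The Vernam (one-time pad) construction itself is a one-line XOR. In this sketch the operating-system entropy source stands in for bits harvested from the CMB:

```python
import secrets

def vernam(data: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR each byte of the message with a truly random pad byte."""
    assert len(pad) == len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"random matrix theory"
pad = secrets.token_bytes(len(msg))  # stand-in for bits harvested from the CMB
ct = vernam(msg, pad)
print(vernam(ct, pad) == msg)  # decrypting with the same pad recovers the message
```

The cipher's information-theoretic security holds only if the pad is truly random, used once, and as long as the message, which is why an effectively infinite physical randomness source is the interesting ingredient.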
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrester, Peter J., E-mail: p.forrester@ms.unimelb.edu.au; Thompson, Colin J.
The Golden-Thompson inequality, Tr(e^{A+B}) ≤ Tr(e^A e^B) for A, B Hermitian matrices, appeared in independent works by Golden and Thompson published in 1965. Both of these were motivated by considerations in statistical mechanics. In recent years the Golden-Thompson inequality has found applications to random matrix theory. In this article, we detail some historical aspects relating to Thompson's work, giving in particular a hitherto unpublished proof due to Dyson, and correspondence with Pólya. We show too how the 2 × 2 case relates to hyperbolic geometry, and how the original inequality holds true with the trace operation replaced by any unitarily invariant norm. In relation to the random matrix applications, we review its use in the derivation of concentration-type lemmas for sums of random matrices due to Ahlswede-Winter, and Oliveira, generalizing various classical results.
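The inequality is easy to verify numerically for random Hermitian matrices; a minimal check using `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def random_hermitian(n):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

A, B = random_hermitian(4), random_hermitian(4)
lhs = np.trace(expm(A + B)).real          # Tr e^{A+B}
rhs = np.trace(expm(A) @ expm(B)).real    # Tr e^A e^B
print(lhs <= rhs)                          # Golden-Thompson holds
```

Equality occurs when A and B commute; for generically drawn matrices the inequality is strict.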
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
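Sampling the ensemble is straightforward: each state is the partial trace of a random bipartite pure state, equivalently a normalized complex Wishart matrix. This sketch draws two such states and checks that their difference is traceless with trace distance at most one:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_mixed_state(d, n):
    """Partial trace over an n-dim environment of a random bipartite pure state."""
    G = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

d = 4
rho, sigma = random_mixed_state(d, d), random_mixed_state(d, d)
ev = np.linalg.eigvalsh(rho - sigma)     # spectrum of the difference matrix
print(ev.sum(), 0.5 * np.abs(ev).sum())  # traceless; trace distance in [0, 1]
```

Histogramming `ev` over many draws gives the empirical version of the asymptotic eigenvalue density derived in the paper via free probability.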
Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, David O.
2007-01-01
A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transient, and deterministic waveforms.
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
Klein, Julie; Eales, James; Zürbig, Petra; Vlahou, Antonia; Mischak, Harald; Stevens, Robert
2013-04-01
In this study, we have developed Proteasix, an open-source peptide-centric tool that can be used to predict in silico the proteases involved in naturally occurring peptide generation. We developed a curated cleavage site (CS) database containing 3500 entries on human protease/CS combinations. On top of this database, we built a tool, Proteasix, which allows CS retrieval and protease association from a list of peptides. To establish the proof of concept of the approach, we used a list of 1388 peptides identified from human urine samples, and compared the prediction to the analysis of 1003 randomly generated amino acid sequences. Metalloprotease activity was predominantly involved in urinary peptide generation, particularly for peptides associated with extracellular matrix remodelling, compared with proteins from other origins. In comparison, random sequences returned almost no results, highlighting the specificity of the prediction. This study provides a tool that can facilitate the linking of identified protein fragments to predicted protease activity, and therefore to presumed mechanisms of disease. Experiments are needed to confirm the in silico hypotheses; nevertheless, this approach may be of great help in better understanding molecular mechanisms of disease and in defining new biomarkers and therapeutic targets. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
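The core matching step, checking a peptide's flanking residues in its parent protein against a protease's cleavage specificity, can be sketched generically. This toy example uses the textbook trypsin rule (cleavage after K or R, not before P) and an arbitrary made-up sequence; it is not Proteasix's actual database lookup:

```python
# Toy protease-prediction sketch (not Proteasix itself): does a peptide's
# position in its parent protein match trypsin specificity at both termini?
def trypsin_compatible(protein: str, peptide: str) -> bool:
    i = protein.find(peptide)
    if i < 0:
        return False
    # N-terminus: protein start, or preceded by K/R with no proline following
    nterm_ok = i == 0 or (protein[i - 1] in "KR" and peptide[0] != "P")
    # C-terminus: protein end, or peptide ends in K/R with no proline following
    j = i + len(peptide)
    cterm_ok = j == len(protein) or (peptide[-1] in "KR" and protein[j] != "P")
    return nterm_ok and cterm_ok

protein = "MKWVTFISLLFLFSSAYSRGVFRRDAHK"  # hypothetical sequence
print(trypsin_compatible(protein, "GVFR"))  # both cut sites follow K/R
```

Proteasix generalizes this idea to 3500 curated protease/cleavage-site combinations rather than a single hard-coded rule.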
Acellular dermal matrix allograft. The results of controlled randomized clinical studies.
Novaes, Arthur Belém; de Barros, Raquel Rezende Martins
2008-10-01
The aim of this presentation was to discuss the effectiveness of the acellular dermal matrix in root coverage therapy and in alveolar ridge augmentation, based on three controlled randomized clinical trials conducted by our research team (Novaes Jr et al., 2001; Barros et al., 2005; Luczyszyn et al., 2005). The first and second studies highlight the allograft's performance in the treatment of gingival recession. In both studies, clinical parameters were assessed prior to surgery and 6 or 12 months post-surgery. The first one compared the use of the acellular dermal matrix with the subepithelial connective tissue graft and showed 1.83 and 2.10 mm of recession reduction, respectively. Because no statistically significant differences between the groups were observed, it was concluded that the allograft can be used as a substitute for the autograft. In the second study, a new surgical approach was compared to a conventional surgical procedure described by Langer and Langer in 1985. A statistically significant greater recession reduction favoring the test procedure was achieved. The percentage of root coverage was 82.5% and 62.3% for test and control groups. Thus the new technique was considered more suitable for the treatment of gingival recessions with the allograft. Finally, the third study evaluated the allograft as a membrane, associated or not with a resorbable hydroxyapatite in bone regeneration to prevent ridge deformities. In one group the extraction sockets were covered only by the allograft and in the other, the alveoli were also filled with the resorbable hydroxyapatite. After six months, both treatments were able to preserve ridge thickness, considering the pre-operative values. 
In conclusion, no adverse healing events were noted with the use of allograft in site preservation procedures, and sites treated with the combination of allograft plus resorbable hydroxyapatite showed significantly greater ridge thickness preservation at six months when compared to sites treated with allograft alone (6.8 +/- 1.26 and 5.53 +/- 1.06 respectively).
Random density matrices versus random evolution of open system
NASA Astrophysics Data System (ADS)
Pineda, Carlos; Seligman, Thomas H.
2015-10-01
We present and compare two families of ensembles of random density matrices. The first, static ensemble is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble is inspired by random matrix models for decoherence, in which one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions, good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion begins by recalling similar considerations for scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
NASA Astrophysics Data System (ADS)
Tatlier, Mehmet Seha
Random fibrous networks can be found among both natural and synthetic materials. Some of these random fibrous networks possess a negative Poisson's ratio and are commonly called auxetic materials. The governing mechanisms behind this counterintuitive property in random networks are yet to be understood, and this kind of auxetic material remains widely under-explored. Most synthetic auxetic materials, however, suffer from low strength. This shortcoming can be rectified by developing high-strength auxetic composites. Embedding auxetic random fibrous networks in a polymer matrix is an attractive alternative route to the manufacture of auxetic composites; however, before such an approach can be developed, a methodology for designing fibrous networks with the desired negative Poisson's ratios must first be established. This requires an understanding of the factors which bring about negative Poisson's ratios in these materials. In this study, a numerical model is presented in order to investigate the auxetic behavior of compressed random fiber networks. Finite element analyses of three-dimensional stochastic fiber networks were performed to gain insight into the effects of parameters such as network anisotropy, network density, and degree of network compression on the out-of-plane Poisson's ratio and Young's modulus. The simulation results suggest that compression is the critical parameter that gives rise to a negative Poisson's ratio, while anisotropy significantly promotes the auxetic behavior. This model can be utilized to design fibrous auxetic materials and to evaluate the feasibility of developing auxetic composites by using auxetic fibrous networks as the reinforcing layer.
Free Vibration of Uncertain Unsymmetrically Laminated Beams
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Goyal, Vijay K.
2001-01-01
Monte Carlo simulation and stochastic FEA are used to predict randomness in the free-vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations, which may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set containing possible random ply orientations is generated; this data set is assumed to be symmetric. The stochastic finite element model for free vibration predicts the relation between the randomness in the fundamental natural frequencies and the randomness in ply orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, allowing one to determine how sensitive they are to variations in ply angles. The predicted mean-valued squared fundamental natural frequency and the variance of the present model are in good agreement with Monte Carlo simulation. Results also show that variations of plus or minus 5 degrees in ply angles can affect the free-vibration response of both unsymmetrically and symmetrically laminated beams.
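The Monte Carlo propagation of parameter uncertainty into natural frequencies can be illustrated at toy scale. This sketch uses a 2-dof spring-mass system with one random stiffness (standing in for the uncertain ply-orientation-dependent stiffness), not the paper's 16-dof laminate element:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stochastic free-vibration model: a 2-dof spring-mass chain whose
# coupling stiffness k2 is random, mimicking uncertainty entering the
# stiffness matrix through manufacturing scatter.
def stiffness(k2):
    return np.array([[2.0 + k2, -k2],
                     [-k2,       k2]])

freqs = []
for _ in range(2000):
    k2 = rng.normal(1.0, 0.05)               # 5% scatter in the uncertain stiffness
    lam = np.linalg.eigvalsh(stiffness(k2))  # unit masses, so K v = lam v directly
    freqs.append(np.sqrt(lam.min()))         # fundamental natural frequency
samples = np.array(freqs)
print(samples.mean(), samples.std())
```

The sample mean and standard deviation play the role of the mean-valued squared frequency and variance that the stochastic FE model predicts analytically.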
Yi, Sun; Nelson, Patrick W; Ulsoy, A Galip
2007-04-01
In a turning process modeled using delay differential equations (DDEs), we investigate the stability of the regenerative machine tool chatter problem. An approach using the matrix Lambert W function for the analytical solution to systems of delay differential equations is applied to this problem and compared with the result obtained using a bifurcation analysis. The Lambert W function, known to be useful for solving scalar first-order DDEs, has recently been extended to a matrix Lambert W function approach to solve systems of DDEs. The essential advantages of the matrix Lambert W approach are not only the similarity to the concept of the state transition matrix in linear ordinary differential equations, enabling its use for general classes of linear delay differential equations, but also the observation that we need only the principal branch among an infinite number of roots to determine the stability of a system of DDEs. The bifurcation method combined with Sturm sequences provides an algorithm for determining the stability of DDEs without restrictive geometric analysis. With this approach, one can obtain the critical values of delay, which determine the stability of a system and hence the preferred operating spindle speed without chatter. We apply both the matrix Lambert W function and the bifurcation analysis approach to the problem of chatter stability in turning, and compare the results obtained to existing methods. The two new approaches show excellent accuracy and certain other advantages, when compared to traditional graphical, computational and approximate methods.
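For the scalar first-order DDE, the Lambert W solution is a one-liner with `scipy.special.lambertw`. This sketch checks the classic example x'(t) = -x(t - 1), whose rightmost characteristic root is given by the principal branch:

```python
import numpy as np
from scipy.special import lambertw

# Scalar DDE x'(t) = a x(t) + b x(t - h); the characteristic roots are
# s_k = (1/h) W_k(b h e^{-a h}) + a, one per branch k of the Lambert W function.
a, b, h = 0.0, -1.0, 1.0                           # the classic x'(t) = -x(t - 1)
s0 = lambertw(b * h * np.exp(-a * h), k=0) / h + a
print(s0.real)  # about -0.318: rightmost root in the left half-plane => stable
```

The matrix Lambert W approach of the paper generalizes this branch structure to systems of DDEs, where the same "principal branch determines stability" observation is exploited.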
Dry Arthroscopy With a Retraction System for Matrix-Aided Cartilage Repair of Patellar Lesions
Sadlik, Boguslaw; Wiewiorski, Martin
2014-01-01
Several commercially available cartilage repair techniques use a natural or synthetic matrix to aid cartilage regeneration (e.g., autologous matrix–induced chondrogenesis or matrix-induced cartilage implantation). However, the use of matrix-aided techniques during conventional knee joint arthroscopy under continuous irrigation is challenging. Insertion and fixation of the matrix can be complicated by the presence of fluid and the confined patellofemoral joint space with limited access to the lesion. To overcome these issues, we developed a novel arthroscopic approach for matrix-aided cartilage repair of patellar lesions. This technical note describes the use of dry arthroscopy assisted by a minimally invasive retraction system. An autologous matrix–induced chondrogenesis procedure is used to illustrate this novel approach. PMID:24749035
Frequency domain system identification methods - Matrix fraction description approach
NASA Technical Reports Server (NTRS)
Horta, Luca G.; Juang, Jer-Nan
1993-01-01
This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and realization theory is subsequently used to recover a minimum-order state-space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
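The first step above, fitting a polynomial fraction to frequency-response samples by linear least squares, can be sketched for the SISO case. This is a generic Levy-style linearized fit under assumed degrees (na, nb), not the paper's full matrix-fraction/realization pipeline:

```python
import numpy as np

def fit_rational(omega, H, na, nb):
    """Linearized least-squares fit of H(iw) ~ b(s)/a(s), with a(s) monic of
    degree na and b(s) of degree nb: minimize sum |a(iw_k) H_k - b(iw_k)|^2.
    Returns (a_coeffs, b_coeffs), each ordered low -> high degree."""
    s = 1j * omega
    # Unknowns: [a_0 .. a_{na-1}, b_0 .. b_{nb}]; monic leading term moves to the RHS.
    cols = [H * s**p for p in range(na)] + [-(s**p) for p in range(nb + 1)]
    A = np.column_stack(cols)
    rhs = -H * s**na
    x, *_ = np.linalg.lstsq(
        np.vstack([A.real, A.imag]), np.concatenate([rhs.real, rhs.imag]), rcond=None
    )
    return x[:na], x[na:]

# Noiseless check: recover H(s) = 1 / (s^2 + 0.4 s + 1) from 200 samples.
w = np.linspace(0.1, 10.0, 200)
H = 1.0 / ((1j * w) ** 2 + 0.4 * (1j * w) + 1.0)
a_hat, b_hat = fit_rational(w, H, na=2, nb=0)
```

From the fitted fraction, Markov parameters follow by long division of b(s)/a(s), after which a realization algorithm (e.g., ERA) would recover the state-space model the abstract describes.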
Embedded random matrix ensembles from nuclear structure and their recent applications
NASA Astrophysics Data System (ADS)
Kota, V. K. B.; Chavda, N. D.
Embedded random matrix ensembles generated by random interactions (of low body rank, usually two-body) in the presence of a one-body mean field, introduced in nuclear structure physics, are now established to be indispensable in describing statistical properties of a large number of isolated finite quantum many-particle systems. Lie algebra symmetries of the interactions, as identified from the nuclear shell model and the interacting boson model, led to the introduction of a variety of embedded ensembles (EEs). These ensembles, with a mean field and a chaos-generating two-body interaction, generate delocalization of wave functions in the Fock space of the mean-field basis states in three successive stages. The last stage corresponds to what one may call thermalization, and complex nuclei, as seen from many shell model calculations, lie in this region. After briefly describing these ensembles, we present their recent applications to nuclear structure: (i) nuclear level densities with interactions; (ii) orbit occupancies; (iii) neutrinoless double beta decay nuclear transition matrix elements as transition strengths. We also briefly present applications that go beyond nuclear structure: (i) fidelity, decoherence, entanglement, and thermalization in isolated finite quantum systems with interactions; (ii) quantum transport in disordered networks connected by many-body interactions with centrosymmetry; (iii) the semicircle-to-Gaussian transition in eigenvalue densities with k-body random interactions and its relation to the Sachdev-Ye-Kitaev (SYK) model for Majorana fermions.
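Constructing an embedded ensemble is beyond a short sketch, but the canonical Gaussian orthogonal ensemble (GOE) baseline against which the semicircle-to-Gaussian transition above is measured is easy to reproduce numerically. The sketch below just verifies Wigner's semicircle law for a plain GOE matrix; it is not an embedded (k-body) ensemble:

```python
import numpy as np

# GOE baseline: symmetric matrix with Gaussian entries; its eigenvalue
# density follows Wigner's semicircle on [-sqrt(2N), sqrt(2N)].
rng = np.random.default_rng(0)
N = 400
A = rng.standard_normal((N, N))
H = (A + A.T) / 2.0                     # GOE: off-diagonal variance 1/2

eigs = np.linalg.eigvalsh(H)
radius = np.sqrt(2.0 * N)               # semicircle support edge
frac_inner = np.mean(np.abs(eigs) < radius / 2)  # semicircle predicts ~0.61
```

Embedded ensembles differ precisely in that, as the body rank k of the interaction decreases relative to particle number, this semicircle deforms toward a Gaussian eigenvalue density.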
A Problem-Centered Approach to Canonical Matrix Forms
ERIC Educational Resources Information Center
Sylvestre, Jeremy
2014-01-01
This article outlines a problem-centered approach to the topic of canonical matrix forms in a second linear algebra course. In this approach, abstract theory, including such topics as eigenvalues, generalized eigenspaces, invariant subspaces, independent subspaces, nilpotency, and cyclic spaces, is developed in response to the patterns discovered…
Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices
NASA Astrophysics Data System (ADS)
Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.
2017-08-01
We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.
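The idea of estimating a permanent as the sample mean of a random variable can be illustrated classically. The sketch below uses a simpler Gaussian (Wick-identity) estimator, perm(M) = E[prod_i |y_i|^2] for y ~ CN(0, M), not the authors' coherent-state photon-counting scheme, and makes no claim about their precision guarantees:

```python
import numpy as np

def permanent_mc(M, n_samples=200_000, seed=0):
    """Monte Carlo estimate of perm(M) for a Hermitian PSD matrix M.

    Uses the Wick identity perm(M) = E[prod_i |y_i|^2] for a circularly
    symmetric complex Gaussian vector y ~ CN(0, M)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(M)           # M = L @ L^H
    n = M.shape[0]
    z = (rng.standard_normal((n_samples, n))
         + 1j * rng.standard_normal((n_samples, n))) / np.sqrt(2.0)
    y = z @ L.T                          # each row ~ CN(0, M)
    return float(np.mean(np.prod(np.abs(y) ** 2, axis=1)))
```

For a 2x2 matrix [[1, 0.5], [0.5, 1]] the exact permanent is 1*1 + 0.5*0.5 = 1.25, which the sample mean approaches at the usual Monte Carlo rate; the estimator's variance, not its bias, is what limits precision, which is the aspect the paper's algorithm improves.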
Li, Jia; Wu, Pinghui; Chang, Liping
2015-08-24
Within the accuracy of the first-order Born approximation, sufficient conditions are derived for the invariance of the spectrum of an electromagnetic wave generated by the scattering of an electromagnetic plane wave from an anisotropic random medium. We show that the following restrictions on the properties of the incident field and the anisotropic medium must be simultaneously satisfied: 1) the elements of the dielectric susceptibility matrix of the medium must obey the scaling law; 2) the spectral components of the incident field must be proportional to each other; 3) the second moments of the elements of the dielectric susceptibility matrix of the medium must be inversely proportional to the frequency.
Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.
ERIC Educational Resources Information Center
Steinberg, Esther R.; And Others
This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…
NASA Astrophysics Data System (ADS)
Latré, S.; Desplentere, F.; De Pooter, S.; Seveno, D.
2017-10-01
Nanoscale materials showing superior thermal properties have raised the interest of the building industry. By adding these materials to conventional construction materials, it is possible to decrease the total thermal conductivity by almost one order of magnitude. This conductivity is mainly influenced by the dispersion quality within the matrix material. At the industrial scale, the main challenge is to control this dispersion to reduce, or even eliminate, thermal bridges, making it possible to reach an industrially relevant process that balances the high material cost against the superior thermal insulation properties. A methodology is therefore required to measure and describe these nanoscale distributions within the inorganic matrix material. These distributions are either random or normally distributed through the thickness of the matrix material. We show that the influence of these distributions is meaningful and modifies the thermal conductivity of the building material. This strategy yields a thermal model that predicts the thermal behavior of the nanoscale particles and their distributions. The thermal model is validated by the hot-wire technique. At present, a good correlation is found between the numerical results and experimental data for nanoparticles randomly distributed in all directions.
NASA Astrophysics Data System (ADS)
Zhernov, Evgeny; Nehoda, Evgenia
2017-11-01
The state, regional, and industry approaches to the problem of personnel training for building an innovative knowledge economy at all levels, ensuring sustainable development of the region, are analyzed in the article using the cases of the Kemerovo region and the coal industry. A new regional-matrix approach to the training of highly qualified personnel is proposed, which makes it possible to link the training systems with the regional economic matrix "natural resources - cognitive resources" developed by the author. A special feature of the new approach is its consideration of the objective conditions and contradictions of regional personnel training systems, which have formed as part of the economic systems of the regions differentiated in the matrix. The methodology of the research is based on the statement about the interconnectivity of general and local knowledge, from which the understanding of the need to combine regional, industry, and state approaches to personnel training is derived. A new form of representing such a combination is the proposed approach, which is based on matrix analysis. The results of the research can be implemented in the practice of modernizing the professional education of workers in the coal industry of a natural-resource-extracting region.
Comparing the structure of an emerging market with a mature one under global perturbation
NASA Astrophysics Data System (ADS)
Namaki, A.; Jafari, G. R.; Raei, R.
2011-09-01
In this paper we investigate the Tehran stock exchange (TSE) and the Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. There are two methods of perturbing a stock market, namely local and global perturbation. In the local method, we replace a single correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation serves only as a technical check. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. Using RMT, we find that the largest eigenvalue represents an influence common to all stocks and that this eigenvalue peaks during financial shocks. We find that a few correlated stocks account for the essential robustness of the stock market, but that by replacing their return time series with Gaussian-distributed time series, the mean correlation coefficient, the largest eigenvalue, and the fraction of eigenvalues deviating from the RMT prediction all fall sharply in both markets. Comparing the two markets, we see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
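The RMT comparison described above can be reproduced on synthetic data: the largest eigenvalue of a pure-noise correlation matrix stays near the Marchenko-Pastur upper edge, while adding a common "market mode" pushes it far outside the random bulk. Sizes and factor strength below are illustrative, not taken from the TSE/DJIA data:

```python
import numpy as np

rng = np.random.default_rng(42)
T, N = 1000, 50                          # observations x stocks (illustrative)
noise = rng.standard_normal((T, N))
market = rng.standard_normal((T, 1))     # a common factor shared by all stocks

def largest_corr_eigenvalue(returns):
    R = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    C = (R.T @ R) / len(R)               # sample correlation matrix
    return np.linalg.eigvalsh(C)[-1]

q = N / T
lam_plus = (1.0 + np.sqrt(q)) ** 2       # Marchenko-Pastur upper edge for pure noise

lam_noise = largest_corr_eigenvalue(noise)            # stays near lam_plus
lam_market = largest_corr_eigenvalue(noise + market)  # escapes the random bulk
```

Eigenvalues above `lam_plus` are the ones carrying genuine cross-correlation information; removing the corresponding stocks or randomizing their returns collapses the spectrum back toward the RMT prediction, as the abstract reports.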
Applicability of the Effective-Medium Approximation to Heterogeneous Aerosol Particles.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Liu, Li
2016-01-01
The effective-medium approximation (EMA) is based on the assumption that a heterogeneous particle can have a homogeneous counterpart possessing similar scattering and absorption properties. We analyze the numerical accuracy of the EMA by comparing superposition T-matrix computations for spherical aerosol particles filled with numerous randomly distributed small inclusions and Lorenz-Mie computations based on the Maxwell-Garnett mixing rule. We verify numerically that the EMA can indeed be realized for inclusion size parameters smaller than a threshold value. The threshold size parameter depends on the refractive-index contrast between the host and inclusion materials and quite often does not exceed several tenths, especially in calculations of the scattering matrix and the absorption cross section. As the inclusion size parameter approaches the threshold value, the scattering-matrix errors of the EMA start to grow with increasing host size parameter and/or number of inclusions. We confirm, in particular, the existence of the effective-medium regime in the important case of dust aerosols with hematite or air-bubble inclusions, although there the large refractive-index contrast necessitates inclusion size parameters of the order of a few tenths. Irrespective of the highly restricted conditions of applicability of the EMA, our results provide further evidence that the effective-medium regime must be a direct corollary of the macroscopic Maxwell equations under specific assumptions.
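The Maxwell-Garnett mixing rule used for the homogeneous counterpart is a one-line formula in the permittivities. The sketch below implements the standard rule; the refractive-index values are illustrative placeholders, not the dust/hematite values used in the paper:

```python
import numpy as np

def maxwell_garnett(m_host, m_incl, f):
    """Effective refractive index from the Maxwell-Garnett mixing rule for a
    volume fraction f of small inclusions embedded in a host medium."""
    eh, ei = m_host ** 2, m_incl ** 2    # complex permittivities
    eeff = eh * (ei + 2 * eh + 2 * f * (ei - eh)) / (ei + 2 * eh - f * (ei - eh))
    return np.sqrt(eeff)

# Illustrative indices: weakly absorbing host with strongly absorbing inclusions.
m_eff = maxwell_garnett(1.5 + 0.001j, 2.9 + 0.3j, 0.1)
```

Feeding `m_eff` into a Lorenz-Mie code gives the homogeneous-sphere reference against which the superposition T-matrix results for the true heterogeneous particle are compared; the rule correctly reduces to the host index at f = 0 and the inclusion index at f = 1.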
NASA Astrophysics Data System (ADS)
Lee, Gibbeum; Cho, Yeunwoo
2017-11-01
We present a new, almost analytical approach to solving the matrix eigenvalue problem, or the integral equation, in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of solving this matrix eigenvalue problem purely numerically, which may suffer from computational inaccuracy for large data sets, we first consider a pair of integral and differential equations related to the so-called prolate spheroidal wave functions (PSWFs). For the PSWF differential equation, the eigenvector-eigenvalue pairs (the PSWFs) can be obtained from a relatively small number of analytical Legendre functions. The eigenvalues in the PSWF integral equation are then expressed in terms of functional values of the PSWFs and the eigenvalues of the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data: ordinary irregular waves and rogue waves. We find that the present almost analytical method outperforms both the conventional data-independent Fourier representation and the conventional direct numerical K-L representation in terms of accuracy and computational cost. This work was supported by the National Research Foundation of Korea (NRF). (NRF-2017R1D1A1B03028299).
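For context, the "conventional direct numerical K-L representation" that the PSWF method improves on is just an eigendecomposition of a discretized covariance kernel. The sketch below shows that baseline for an assumed exponential covariance (the kernel and length scale are illustrative, not the paper's ocean-wave spectra):

```python
import numpy as np

# Direct numerical Karhunen-Loeve baseline: Nystrom discretization of the
# K-L integral equation  int C(t,s) phi(s) ds = lam * phi(t)  on [0, 1].
n, ell = 200, 0.3
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)   # exponential covariance
h = t[1] - t[0]

lam, phi = np.linalg.eigh(C * h)
lam, phi = lam[::-1], phi[:, ::-1]                   # sort modes by decreasing energy

# Synthesize one sample path from the leading m modes.
rng = np.random.default_rng(1)
m = 20
xi = rng.standard_normal(m)
path = (phi[:, :m] / np.sqrt(h)) @ (np.sqrt(np.maximum(lam[:m], 0.0)) * xi)
```

The rapid eigenvalue decay is what makes a truncated K-L expansion efficient; the paper's contribution is to obtain the eigenpairs almost analytically via PSWFs rather than from this dense eigensolve, whose cost and rounding error grow with the data size.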
NASA Astrophysics Data System (ADS)
Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin
2017-04-01
In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping, and there is a need to investigate various approaches to model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth-constraint NMF (CNMF) algorithms for feature component extraction and identification in terahertz time-domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of the NMF and CNMF methods, based on sparseness, independence, and smoothness constraints on the resulting nonnegative matrix factors, are discussed. For general NMF, the cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, a smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated on several THz-TDS data decomposition experiments, including a binary system and a ternary system simulating applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust to random initialization than NMF. The investigated method is promising for THz data resolution, contributing to unknown-mixture identification.
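The plain NMF baseline discussed above can be sketched with the standard Lee-Seung multiplicative updates. This is the unconstrained baseline only; the paper's CNMF adds a smoothness constraint on top of it, which is not implemented here:

```python
import numpy as np

def nmf(V, r, n_iter=1000, seed=0):
    """Plain Lee-Seung multiplicative-update NMF: V ~= W @ H with W, H >= 0,
    minimizing the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-12                           # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly rank-3 nonnegative data, standing in for mixed absorption spectra.
rng = np.random.default_rng(3)
V = rng.random((30, 3)) @ rng.random((3, 40))
W, H = nmf(V, 3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the multiplicative updates only rescale entries, nonnegativity of W and H is preserved automatically; the sensitivity to the random initialization that this baseline exhibits is exactly what motivates the smoothness-constrained CNMF variant.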
Connective tissue graft vs. emdogain: A new approach to compare the outcomes.
Sayar, Ferena; Akhundi, Nasrin; Gholami, Sanaz
2013-01-01
The aim of this clinical trial was to clinically evaluate the use of an enamel matrix protein derivative combined with a coronally positioned flap to treat gingival recession, compared with the subepithelial connective tissue graft, using a new method of measuring the denuded root surface area. Thirteen patients, each with two or more similar bilateral Miller class I or II gingival recessions (40 recessions), were randomly assigned to the test (enamel matrix protein derivative + coronally positioned flap) or control group (subepithelial connective tissue graft). Recession depth, width, probing depth, keratinized gingiva, and plaque index were recorded at baseline and at one, three, and six months after treatment. A stent was used to measure the denuded root surface area at each examination session. Results were analyzed using the Kolmogorov-Smirnov, Wilcoxon, Friedman, and paired-sample t tests. The average percentages of root coverage for the control and test groups were 63.3% and 55%, respectively. Both groups showed a significant increase in keratinized gingiva (P < 0.05). Recession depth decreased significantly in both groups. Root surface area improved significantly from baseline, with no significant difference between the two study groups (P > 0.05). The results of the Friedman test were significant for the clinical indices (P < 0.05), except for probing depth in the control group (P = 0.166). The enamel matrix protein derivative showed the same results as the subepithelial connective tissue graft, with a relatively easy procedure to perform and low patient morbidity.
Dynamic Textures Modeling via Joint Video Dictionary Learning.
Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng
2017-04-06
Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes that can be modeled in a dynamic textures (DT) framework. First, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. To ensure the stability of JVDL, we impose several constraints on the transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploiting both the sparse properties and the temporal correlations of consecutive video frames. Moreover, the learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. In particular, it performs significantly better in DT synthesis and recognition on heavily corrupted data.
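The core building block, a linear transition matrix mapping the state at frame t to frame t+1, can be estimated by plain least squares. In the sketch below the states are synthetic vectors; in JVDL they would be sparse codes over a learned dictionary, and the stability constraints of the paper are not imposed:

```python
import numpy as np

# Least-squares fit of a linear transition matrix A with X[:, t+1] = A @ X[:, t].
rng = np.random.default_rng(0)
d, T = 8, 50
A_true = 0.95 * np.linalg.qr(rng.standard_normal((d, d)))[0]  # stable dynamics
X = np.empty((d, T))
X[:, 0] = rng.standard_normal(d)
for t in range(T - 1):
    X[:, t + 1] = A_true @ X[:, t]

# A_hat minimizes ||X[:, 1:] - A @ X[:, :-1]||_F over all d x d matrices A.
A_hat = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
```

Scaling the orthogonal factor by 0.95 keeps the spectral radius below one, which is the kind of stability property JVDL enforces explicitly so that synthesized texture sequences do not diverge.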
Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F
2011-05-20
Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.
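The split-plot structure described above, in which the hard-to-change whole-plot factor is randomized first and the remaining factors are randomized within it, can be sketched as a layout generator. The factor labels below are illustrative placeholders; the study does not name the three perfusion solutions:

```python
import itertools
import random

# Split-plot randomization sketch: temperature is the whole-plot factor,
# solution and artery type are randomized within each temperature run.
temperatures = ["T_low", "T_high"]                 # 2 levels (whole plot)
solutions = ["sol_A", "sol_B", "sol_C"]            # 3 types (subplot), illustrative
arteries = ["carotid", "renal"]                    # 2 types (subplot)

rng = random.Random(0)
whole_plots = temperatures[:]
rng.shuffle(whole_plots)                           # randomize whole-plot order

layout = []
for temp in whole_plots:
    subplots = list(itertools.product(solutions, arteries))
    rng.shuffle(subplots)                          # randomize within the whole plot
    layout.extend((temp, sol, art) for sol, art in subplots)
```

All 2 x 3 x 2 = 12 treatment combinations appear exactly once, but all six runs at a given temperature are grouped together, which is precisely the randomization restriction that the mixed-model analysis (PROC MIXED) must account for.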
Comprehensive T-Matrix Reference Database: A 2007-2009 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2010-01-01
The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.
Least-squares analysis of the Mueller matrix.
Reimer, Michael; Yevick, David
2006-08-15
In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.
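The least-squares estimation step above has a compact linear-algebra form: stacking the input and output Stokes vectors as columns, the matrix minimizing the Frobenius misfit is obtained with a pseudoinverse. The sketch below uses an unconstrained random 4x4 matrix as a stand-in; a physical Mueller matrix satisfies additional constraints not enforced here:

```python
import numpy as np

# Estimate a 4x4 matrix M from repeated input/output Stokes-vector pairs.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((4, 4))          # stand-in "true" Mueller matrix
S_in = rng.standard_normal((4, 25))           # 25 random input polarization states
S_out = M_true @ S_in + 0.01 * rng.standard_normal((4, 25))  # noisy measurements

M_hat = S_out @ np.linalg.pinv(S_in)          # minimizes ||M @ S_in - S_out||_F
rel_err = np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true)
```

With more input states than the four needed for identifiability, the least-squares solution averages down the measurement noise, which is the practical advantage of repeated measurements that the abstract points to.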
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detwiler, Russell
Matrix diffusion and adsorption within a rock matrix are widely regarded as important mechanisms for retarding the transport of radionuclides and other solutes in fractured rock (e.g., Neretnieks, 1980; Tang et al., 1981; Maloszewski and Zuber, 1985; Novakowski and Lapcevic, 1994; Jardine et al., 1999; Zhou and Xie, 2003; Reimus et al., 2003a,b). When remediation options are being evaluated for old sources of contamination, where a large fraction of contaminants reside within the rock matrix, slow diffusion out of the matrix greatly increases the difficulty and timeframe of remediation. Estimating the rates of solute exchange between fractures and the adjacent rock matrix is a critical factor in quantifying immobilization and/or remobilization of DOE-relevant contaminants within the subsurface. In principle, the most rigorous approach to modeling solute transport with fracture-matrix interaction would be based on local-scale coupled advection-diffusion/dispersion equations for the rock matrix and in the discrete fractures that comprise the fracture network (Discrete Fracture Network and Matrix approach, hereinafter referred to as the DFNM approach), fully resolving aperture variability in fractures and matrix property heterogeneity. However, such approaches are computationally demanding, and thus many predictive models rely upon simplified models. These models typically idealize fractured rock masses as a single fracture or a system of parallel fractures interacting with slabs of porous matrix, or as a mobile-immobile or multi-rate mass transfer system. These idealizations provide tractable approaches for interpreting tracer tests and predicting contaminant mobility, but rely upon a fitted effective matrix diffusivity or mass-transfer coefficients. However, because these fitted parameters are based upon simplified conceptual models, their effectiveness at predicting long-term transport processes remains uncertain.
Evidence of scale dependence of effective matrix diffusion coefficients obtained from tracer tests highlights this point and suggests that the underlying mechanisms and relationship between rock and fracture properties are not fully understood in large complex fracture networks. In this project, we developed a high-resolution DFN model of solute transport in fracture networks to explore and quantify the mechanisms that control transport in complex fracture networks and how these may give rise to observed scale-dependent matrix diffusion coefficients. Results demonstrate that small-scale heterogeneity in the flow field caused by local aperture variability within individual fractures can lead to long-tailed breakthrough curves indicative of matrix diffusion, even in the absence of interactions with the fracture matrix. Furthermore, the temporal and spatial scale dependence of these processes highlights the inability of short-term tracer tests to estimate transport parameters that will control long-term fate and transport of contaminants in fractured aquifers.
Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks
Li, Jiayin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal
2017-01-01
Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signal can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environment under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering application in WSNs. PMID:29117152
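The compressive-sensing recovery step at the heart of the scheme above can be illustrated with a generic sparse-recovery sketch. This uses a plain Gaussian sensing matrix and orthogonal matching pursuit, not the paper's Kronecker (KCS) construction or its mobile measurement collection:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(7)
n, m, k = 64, 32, 3                                  # signal dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)
x_hat = omp(A, A @ x_true, k)
```

The point the paper builds on is that far fewer measurements than signal dimensions suffice when the signal is sparse in some basis; its theoretical contribution is showing that the mobile-collection sensing matrix retains the recovery guarantees of such random constructions.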
The matrix approach to mental health care: Experiences in Florianopolis, Brazil.
Soares, Susana; de Oliveira, Walter Ferreira
2016-03-01
This article reports on the experience of a matrix approach to mental health in primary health care. Professionals who work in the Family Health Support Nuclei (Núcleos de Apoio à Saúde da Família) pointed to challenges of this approach, especially the difficulties of introducing pedagogic actions in the health field and problems related to work relationships. As the matrix approach and its practice are new aspects of the Brazilian Unified Health System, academic knowledge must go hand in hand with everyday professional practice to help improve the quality of the services offered in this context. © The Author(s) 2016.
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
Continuum modeling of large lattice structures: Status and projections
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Mikulas, Martin M., Jr.
1988-01-01
The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram (δ = n/N the undersampling ratio, ρ = k/n the sparsity ratio), convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a fixed set X for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
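The "convex optimization finds the sparsest solution" step is basis pursuit, min ||x||_1 subject to Ax = y, which can be cast as a linear program. A minimal sketch for a Gaussian matrix well inside the success region of the phase diagram (dimensions and seed are illustrative, not the paper's experimental setup):

```python
# Sketch: l1-recovery of a k-sparse vector from Gaussian undersampled
# measurements y = A @ x0, with (delta, rho) = (0.5, 0.1) inside the
# region where recovery typically succeeds.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 100, 50, 5                 # delta = n/N = 0.5, rho = k/n = 0.1
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
support = rng.choice(N, k, replace=False)
x0[support] = rng.standard_normal(k)
y = A @ x0

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x_hat - x0))
```

Sliding k upward until recovery starts failing traces out the phase transition empirically.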
Fatigue loading history reconstruction based on the rain-flow technique
NASA Technical Reports Server (NTRS)
Khosrovaneh, A. K.; Dowling, N. E.
1989-01-01
Methods are considered for reducing a non-random fatigue loading history to a concise description and then for reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two methods are based on a two-dimensional rain-flow matrix, and the third on a three-dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history; as a result, the reconstructed time history is expected to produce a fatigue life similar to that of the original. The procedures described allow lengthy loading histories to be stored in compact form.
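The counting that produces such a matrix can be sketched with the standard stack-based three-point scheme. This is a simplified illustration (the function name is ours, and the unclosed residue, which a full ASTM E1049 implementation also records as half-cycles, is simply returned), not the report's reconstruction procedure:

```python
# Simplified three-point rain-flow count over a sequence of reversals
# (alternating peaks and valleys). Closed cycles become entries of the
# rain-flow matrix; the residue is left in the stack.
def rainflow_cycles(reversals):
    """Return (closed cycles as endpoint pairs, unclosed residue)."""
    stack, cycles = [], []
    for r in reversals:
        stack.append(r)
        # A cycle closes when the newest range X engulfs the previous range Y.
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            cycles.append((stack[-3], stack[-2]))  # endpoints of the closed cycle
            del stack[-3:-1]                       # drop them, keep the outer points
    return cycles, stack

cycles, residue = rainflow_cycles([0, 5, 2, 4, 0])
print(cycles, residue)  # inner cycle (2, 4) closes first, then (0, 5)
```

Binning the resulting (valley, peak) pairs into a 2D histogram yields the two-dimensional rain-flow matrix the abstract refers to.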
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
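The "empirical scaling and centering" step for a pure-noise sample covariance matrix can be sketched with Johnstone's constants for Wishart matrices; this is the standard white-noise construction, not the paper's REML-specific procedure, and the sample sizes are illustrative:

```python
# Largest eigenvalue of a Wishart(n, I_p) matrix, centred and scaled so that
# it approximately follows the Tracy-Widom (TW1) distribution.
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 200, 50, 200
stats = []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    S = X.T @ X                                  # Wishart(n, I_p)
    lam_max = np.linalg.eigvalsh(S)[-1]          # eigvalsh is ascending
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2      # Johnstone centering
    sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (
        1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    stats.append((lam_max - mu) / sigma)

# The scaled statistic should hover around the TW1 mean (about -1.2).
print(np.mean(stats))
```

Comparing an observed (suitably scaled) eigenvalue against the TW critical value is then the test the abstract describes for separating sampling error from genuine variance.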
Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composites Behavior
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Chamis, Christos C.; Mital, Subodh K.
1996-01-01
This report describes a methodology which predicts the behavior of ceramic matrix composites and has been incorporated in the computational tool CEMCAN (CEramic Matrix Composite ANalyzer). The approach combines micromechanics with a unique fiber substructuring concept. In this new concept, the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach is modified by substructuring it into several slices and developing the micromechanics-based equations at the slice level. The methodology also takes into account nonlinear ceramic matrix composite (CMC) behavior due to temperature and the fracture initiation and progression. Important features of the approach and its effectiveness are described by using selected examples. Comparisons of predictions and limited experimental data are also provided.
Iterative approach as alternative to S-matrix in modal methods
NASA Astrophysics Data System (ADS)
Semenikhin, Igor; Zanuccoli, Mauro
2014-12-01
The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique that are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the contribution of the M^3-order calculations to the overall time, and in some cases even of reducing the number of arithmetic operations to order M^2 by applying iterative techniques, is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.
Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.
2014-01-01
The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
Coherent Patterns in Nuclei and in Financial Markets
NASA Astrophysics Data System (ADS)
Drożdż, S.; Kwapień, J.; Speth, J.
2010-07-01
In the area of traditional physics the atomic nucleus belongs to the most complex systems. It involves essentially all elements that characterize complexity, including the most distinctive one, whose essence is a permanent coexistence of coherent patterns and of randomness. From a more interdisciplinary perspective, it is the financial markets that represent an extreme complexity. Here, based on the matrix formalism, we set some parallels between several characteristics of complexity in the above two systems. We refer, in particular, to the concept—historically originating from nuclear physics considerations—of random matrix theory and demonstrate its utility in quantifying characteristics of the coexistence of chaos and collectivity for the financial markets as well. In this latter case we show examples that illustrate mapping of the matrix formulation into concepts originating from graph theory. Finally, attention is drawn to some novel aspects of financial coherence, which leaves room for speculation as to whether analogous effects can be detected in atomic nuclei or in other strongly interacting Fermi systems.
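The random-matrix benchmark used in this line of work is the Marchenko-Pastur band for correlation-matrix eigenvalues: spectrum inside the band is consistent with noise, while outliers signal collective modes. A sketch on synthetic pure-noise "returns" (illustrative data, not the paper's market series):

```python
# Eigenvalues of a pure-noise correlation matrix versus the
# Marchenko-Pastur band [lam_minus, lam_plus] for q = N/T.
import numpy as np

rng = np.random.default_rng(2)
N, T = 50, 500                            # "assets" x "observations"
returns = rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)    # N x N correlation matrix
eigs = np.linalg.eigvalsh(C)

q = N / T
lam_minus = (1 - np.sqrt(q)) ** 2
lam_plus = (1 + np.sqrt(q)) ** 2
inside = np.mean((eigs >= lam_minus) & (eigs <= lam_plus))
print(lam_plus, eigs.max(), inside)
```

On real market data the same computation typically shows a large "market mode" eigenvalue far above lam_plus, the kind of collectivity the abstract compares to nuclear coherent patterns.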
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role in both the quality and the power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or a high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
Tomasik, Andrzej; Jacheć, Wojciech; Wojciechowska, Celina; Kawecki, Damian; Białkowska, Beata; Romuk, Ewa; Gabrysiak, Artur; Birkner, Ewa; Kalarus, Zbigniew; Nowalany-Kozielska, Ewa
2015-05-01
Dual chamber pacing is known to have a detrimental effect on cardiac performance, and the heart failure that eventually occurs is associated with increased mortality. Experimental studies of pacing in dogs have shown contractile dyssynchrony leading to diffuse alterations in the extracellular matrix. In parallel, studies on experimental ischemia/reperfusion injury have shown the efficacy of valsartan in inhibiting the activity of matrix metalloproteinase-9, increasing the activity of tissue inhibitor of matrix metalloproteinase-3, and preserving global contractility and left ventricle ejection fraction. We present the rationale and design of a randomized, blinded trial aimed at assessing whether 12-month administration of valsartan will prevent left ventricle remodeling in patients with preserved left ventricle ejection fraction (LVEF ≥ 40%) and first implantation of a dual chamber pacemaker. A total of 100 eligible patients will be randomized into three parallel arms: placebo, valsartan 80 mg/daily, and valsartan 160 mg/daily added to previously used drugs. The primary endpoint will be the assessment of valsartan's efficacy in preventing left ventricle remodeling during 12-month follow-up. We assess patients' functional capacity, blood plasma activity of matrix metalloproteinases and their tissue inhibitors, NT-proBNP, tumor necrosis factor alpha, and Troponin T. Left ventricle function and remodeling is assessed echocardiographically: M-mode, B-mode, tissue Doppler imaging. If valsartan proves effective, it will be an attractive measure to improve long-term prognosis in the aging population and the increasing number of pacemaker recipients. ClinicalTrials.gov (NCT01805804). Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ma, Ning; Zhao, Juan; Hanson, Steen G.; Takeda, Mitsuo; Wang, Wei
2016-10-01
Laser speckle has been studied extensively, both its basic properties and its associated applications. In the majority of research on speckle phenomena, the random optical field has been treated as a scalar optical field, and the main interest has been concentrated on the statistical properties and applications of its intensity distribution. Recently, the statistical properties of random electric vector fields, referred to as polarization speckle, have come to attract new interest because of their importance in a variety of areas with practical applications such as biomedical optics and optical metrology. Statistical phenomena of random electric vector fields have close relevance to the theories of speckle, polarization, and coherence. In this paper, we investigate the correlation tensor for stochastic electromagnetic fields modulated by a depolarizer consisting of a rough-surfaced retardation plate. Under the assumption that the microstructure of the scattering surface on the depolarizer is so fine as to be unresolvable in our observation region, we have derived a relationship between the polarization matrix/coherency matrix for the modulated electric fields behind the rough-surfaced retardation plate and the coherence matrix under the free-space geometry. This relation is regarded as entirely analogous to the van Cittert-Zernike theorem of classical coherence theory. Within the paraxial approximation as represented by the ABCD-matrix formalism, the three-dimensional structure of the generated polarization speckle is investigated based on the correlation tensor, indicating a typical carrot structure with a much longer axial dimension than the extent in its transverse dimension.
Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Partridge, Harry
1987-01-01
Large, randomly sparse matrix vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension 20,000 with from 1 percent to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the I/O can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes make a far smaller improvement.
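The kernel being vectorized can be sketched in generic compressed-sparse-row form, where the `x[col_idx]` step is exactly the "gather" that the vector hardware accelerates (this is an illustrative portable version, not the machine-specific CRAY/CYBER code):

```python
# Sparse matrix-vector product y = A @ x with A in CSR storage.
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """The values * x[col_idx] line is the gather + multiply, done at
    vector length; each row then needs only a sum reduction."""
    y = np.zeros(len(row_ptr) - 1)
    gathered = values * x[col_idx]
    for i in range(len(y)):
        y[i] = gathered[row_ptr[i]:row_ptr[i + 1]].sum()
    return y

# Random matrix with ~3% nonzeros, checked against the dense product.
rng = np.random.default_rng(3)
n = 200
A = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.03)
rows, cols = np.nonzero(A)
values, col_idx = A[rows, cols], cols
row_ptr = np.searchsorted(rows, np.arange(n + 1))
x = rng.standard_normal(n)
print(np.allclose(csr_matvec(values, col_idx, row_ptr, x), A @ x))
```

Without hardware gather support, the `x[col_idx]` indexing becomes a scalar loop, which is the performance cliff the report measures.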
van Grootel, Leonie; van Wesel, Floryt; O'Mara-Eves, Alison; Thomas, James; Hox, Joop; Boeije, Hennie
2017-09-01
This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gong, Ming; Hofer, B.; Zallo, E.; Trotta, R.; Luo, Jun-Wei; Schmidt, O. G.; Zhang, Chuanwei
2014-05-01
We develop an effective model to describe the statistical properties of exciton fine structure splitting (FSS) and polarization angle in quantum dot ensembles (QDEs) using only a few symmetry-related parameters. The connection between the effective model and random matrix theory is established. This effective model is verified both theoretically and experimentally using several rather different types of QDEs, each of which contains hundreds to thousands of QDs. The model naturally addresses three fundamental issues regarding the FSS and polarization angles of QDEs, which are frequently encountered in both theory and experiment. The answers to these fundamental questions yield an approach to characterize the optical properties of QDEs. Potential applications of the effective model are also discussed.
Index theorem and universality properties of the low-lying eigenvalues of improved staggered quarks.
Follana, E; Hart, A; Davies, C T H
2004-12-10
We study various improved staggered quark Dirac operators on quenched gluon backgrounds in lattice QCD generated using a Symanzik-improved gluon action. We find a clear separation of the spectrum into would-be zero modes and others. The number of would-be zero modes depends on the topological charge as expected from the index theorem, and their chirality expectation value is large (approximately 0.7). The remaining modes have low chirality and show clear signs of clustering into quartets and approaching the random matrix theory predictions for all topological charge sectors. We conclude that improvement of the fermionic and gauge actions moves the staggered quarks closer to the continuum limit where they respond correctly to QCD topology.
Authentication via wavefront-shaped optical responses
NASA Astrophysics Data System (ADS)
Eilers, Hergen; Anderson, Benjamin R.; Gunawidjaja, Ray
2018-02-01
Authentication/tamper-indication is required in a wide range of applications, including nuclear materials management and product counterfeit detection. State-of-the-art techniques include reflective particle tags, laser speckle authentication, and birefringent seals. Each of these passive techniques has its own advantages and disadvantages, including the need for complex image comparisons, limited flexibility, sensitivity to environmental conditions, limited functionality, etc. We have developed a new active approach to address some of these shortcomings. The use of an active characterization technique adds more flexibility and additional layers of security over current techniques. Our approach uses randomly distributed nanoparticles embedded in a polymer matrix (tag/seal) which is attached to the item to be secured. A spatial light modulator is used to adjust the wavefront of a laser which interacts with the tag/seal, and a detector is used to monitor this interaction. The interaction can occur in various ways, including transmittance, reflectance, fluorescence, random lasing, etc. For example, at the time of origination, the wavefront-shaped reflectance from a tag/seal can be adjusted to result in a specific pattern (symbol, words, etc.). Any tampering with the tag/seal would result in a disturbance of the random orientation of the nanoparticles and thus distort the reflectance pattern. A holographic waveplate could be inserted into the laser beam for verification. The absence/distortion of the original pattern would then indicate that tampering has occurred. We have tested the tag/seal's and authentication method's tamper-indicating ability using various attack methods, including mechanical, thermal, and chemical attacks, and have verified our material/method's robust tamper-indicating ability.
Carbon nanotubes within polymer matrix can synergistically enhance mechanical energy dissipation
NASA Astrophysics Data System (ADS)
Ashraf, Taimoor; Ranaiefar, Meelad; Khatri, Sumit; Kavosi, Jamshid; Gardea, Frank; Glaz, Bryan; Naraghi, Mohammad
2018-03-01
Safe operation and health of structures rely on their ability to effectively dissipate undesired vibrations, which could otherwise significantly reduce the lifetime of a structure due to fatigue loads or large deformations. To address this issue, nanoscale fillers, such as carbon nanotubes (CNTs), have been utilized to dissipate mechanical energy in polymer-based nanocomposites through filler-matrix interfacial friction, benefitting from their large interface area with the matrix. In this manuscript, for the first time, we experimentally investigate the effect of CNT alignment with respect to each other, and of their orientation with respect to the loading direction, on vibrational damping in nanocomposites. The matrix was polystyrene (PS). A new technique was developed to fabricate PS-CNT nanocomposites which allows for controlling the angle of CNTs with respect to the far-field loading direction (misalignment angle). Samples were subjected to dynamic mechanical analysis, and the damping of the samples was measured as the ratio of the loss to storage moduli versus CNT misalignment angle. Our results defied the notion that randomly oriented CNT nanocomposites can be approximated as a combination of matrix-CNT representative volume elements with randomly aligned CNTs. Instead, our results point to major contributions to vibrational damping from the stress concentration induced by each CNT in the matrix in proximity to other CNTs. The stress fields around CNTs in PS-CNT nanocomposites were studied via finite element analysis. Our findings provide significant new insights not only into vibrational damping in nanocomposites, but also into their failure modes and toughness, in relation to interface phenomena.
Learning Circulant Sensing Kernels
2014-03-01
Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary altogether and obtain even better performance. Among related structured sensing schemes, Tropp et al. [28] describe a random filter for acquiring a signal x̄, and Haupt et al. [12] describe a channel estimation problem to identify a …
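The computational appeal of circulant sensing is that a circulant operator applies in O(M log M) via the FFT rather than O(M^2) as a dense matrix-vector product. This generic identity (not the report's learned operator) can be checked numerically:

```python
# A circulant matrix C is defined by its first column c, C[i, j] = c[(i-j) % M],
# so C @ x is the circular convolution of c with x and can be computed by FFT.
import numpy as np

rng = np.random.default_rng(4)
M = 64
c = rng.standard_normal(M)                           # first column defines C
C = np.column_stack([np.roll(c, j) for j in range(M)])
x = rng.standard_normal(M)

y_dense = C @ x
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
print(np.allclose(y_dense, y_fft))
```

Learning the sensing kernel therefore only requires optimizing the length-M vector c, not a full M × M matrix.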
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regards to H-value tests.
We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization (REAL) performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Availability: supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
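The effect of a localization-style reordering can be illustrated with a simple stand-in; spectral seriation by the leading singular vectors is used here purely for illustration and is not the authors' graph-theoretical procedure:

```python
# Reorder rows/columns of a noisy matrix so that a planted correlated
# submatrix (bicluster) becomes a contiguous local neighborhood.
import numpy as np

rng = np.random.default_rng(5)
D = rng.standard_normal((20, 30)) * 0.1
block_rows, block_cols = [3, 11, 17], [2, 9, 20, 25]
D[np.ix_(block_rows, block_cols)] += 5.0       # hidden bicluster

U, s, Vt = np.linalg.svd(D)
row_order = np.argsort(-np.abs(U[:, 0]))       # rows ranked by leading left sv
col_order = np.argsort(-np.abs(Vt[0]))         # columns by leading right sv
D_local = D[np.ix_(row_order, col_order)]

# The bicluster rows/columns now sit at the top-left corner of D_local.
print(sorted(row_order[:3]), sorted(col_order[:4]))
```

After such a reordering, a method like REAL that samples local submatrices has a much higher chance of hitting a genuine bicluster.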
Pituskin, Edith; Haykowsky, Mark; Mackey, John R; Thompson, Richard B; Ezekowitz, Justin; Koshman, Sheri; Oudit, Gavin; Chow, Kelvin; Pagano, Joseph J; Paterson, Ian
2011-07-27
MANTICORE 101 - Breast (Multidisciplinary Approach to Novel Therapies in Cardiology Oncology Research) is a randomized trial to determine if conventional heart failure pharmacotherapy (angiotensin converting enzyme inhibitor or beta-blocker) can prevent trastuzumab-mediated left ventricular remodeling, measured with cardiac MRI, among patients with HER2+ early breast cancer. One hundred and fifty-nine patients with histologically confirmed HER2+ breast cancer will be enrolled in a parallel 3-arm, randomized, placebo controlled, double-blind design. After baseline assessments, participants will be randomized in a 1:1:1 ratio to an angiotensin-converting enzyme inhibitor (perindopril), beta-blocker (bisoprolol), or placebo. Participants will receive drug or placebo for 1 year beginning 7 days before trastuzumab therapy. Dosages for all groups will be systematically up-titrated, as tolerated, at 1 week intervals for a total of 3 weeks. The primary objective of this randomized clinical trial is to determine if conventional heart failure pharmacotherapy can prevent trastuzumab-mediated left ventricular remodeling among patients with HER2+ early breast cancer, as measured by 12 month change in left ventricular end-diastolic volume using cardiac MRI. Secondary objectives include 1) determine the evolution of left ventricular remodeling on cardiac MRI in patients with HER2+ early breast cancer, 2) understand the mechanism of trastuzumab mediated cardiac toxicity by assessing for the presence of myocardial injury and apoptosis on serum biomarkers and cardiac MRI, and 3) correlate cardiac biomarkers of myocyte injury and extra-cellular matrix remodeling with left ventricular remodeling on cardiac MRI in patients with HER2+ early breast cancer. Cardiac toxicity as a result of cancer therapies is now recognized as a significant health problem of increasing prevalence. 
To our knowledge, MANTICORE will be the first randomized trial testing proven heart failure pharmacotherapy in the prevention of trastuzumab-mediated cardiotoxicity. We expect the findings of this trial to provide important evidence in the development of guidelines for preventive therapy. ClinicalTrials.gov: NCT01016886.
Corrêa, Mônica G; Gomes Campos, Mirella L; Marques, Marcelo Rocha; Bovi Ambrosano, Glaucia Maria; Casati, Marcio Z; Nociti, Francisco H; Sallum, Enilson A
2014-07-01
Psychologic stress and clinical hypercortisolism have been related to direct effects on bone metabolism. However, there is a lack of information regarding the outcomes of regenerative approaches under the influence of chronic stress (CS). Enamel matrix derivative (EMD) has been used in periodontal regenerative procedures, resulting in improvement of clinical parameters. Thus, the aim of this histomorphometric study is to evaluate the healing of periodontal defects after treatment with EMD under the influence of CS in the rat model. Twenty Wistar rats were randomly assigned to two groups; G1: CS (restraint stress for 12 hours/day) (n = 10), and G2: not exposed to CS (n = 10). Fifteen days after initiation of CS, fenestration defects were created at the buccal aspect of the first mandibular molar of all animals from both groups. After the surgeries, the defects of each animal were randomly assigned to two subgroups: non-treated control and treated with EMD. The animals were euthanized 21 days later. G1 showed less bone density (BD) compared to G2. EMD provided an increased defect fill (DF) in G1 and higher BD and new cementum formation (NCF) in both groups. The number of tartrate-resistant acid phosphatase-positive osteoclasts was significantly higher in G1 when compared to G2 and in EMD-treated sites of both groups. CS may produce a significant detrimental effect on BD. EMD may provide greater DF compared to non-treated control in the presence of CS and increased BD and NCF in the presence or absence of CS.
User-Friendly Tools for Random Matrices: An Introduction
2012-12-03
Joel A. Tropp, User-Friendly Tools for Random Matrices, NIPS, 3 December 2012. [Slide fragments] Randomized range finder: 1. Draw a random test matrix Ω; 2. Form the matrix product Y = AΩ; 3. Construct an orthonormal basis Q for the range of Y [Halko–Martinsson–Tropp, SIAM Rev. 2011]. To learn more: Oliveira 2010; Mackey et al. 2012; "Matrix concentration inequalities…" with L. Mackey et al., submitted 2012; "User-Friendly Tools for Random Matrices: An Introduction," 2012.
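The three recovered steps can be written out as a minimal sketch of the randomized range finder (dimensions and sketch size here are illustrative):

```python
# Randomized range finder: Omega random, Y = A @ Omega, Q = orth basis of Y.
import numpy as np

rng = np.random.default_rng(6)
m, n, rank, k = 100, 80, 5, 10
A = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))  # low rank

Omega = rng.standard_normal((n, k))      # 1. draw a random test matrix
Y = A @ Omega                            # 2. form the product Y = A @ Omega
Q, _ = np.linalg.qr(Y)                   # 3. orthonormal basis for range(Y)

# For a rank-5 matrix and k = 10 sketches, Q captures the range of A,
# so the projection residual is numerically zero.
print(np.linalg.norm(A - Q @ (Q.T @ A)))
```

The matrix concentration inequalities in the cited references are what guarantee that a small sketch size k suffices with high probability.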
Molecular selection in a unified evolutionary sequence
NASA Technical Reports Server (NTRS)
Fox, S. W.
1986-01-01
With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.
A new approach for modeling composite materials
NASA Astrophysics Data System (ADS)
Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.
2013-03-01
The increasing use of composite materials is due to their ability to be tailored for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from their constituents, or phases, has become so important. However, assigning macroscopic optical properties to these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of three-dimensional random composite typical nanostructures using an Extension of the Discrete Dipole Approximation (E-DDA code), comparing different approaches and emphasizing the influence of the optical properties of the constituents and their concentration. In particular, we propose a new approach that preserves the individual nature of the constituents while introducing a variation in the optical properties of each discrete element that is driven by the surrounding medium. The results obtained with this new approach compare more favorably with experiment than previous ones. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the Discrete Dipole Approximation code, the E-DDA code, has been formulated specifically to tackle this kind of problem, including materials with either magnetic or tensor properties.
Frequency assignments for HFDF receivers in a search and rescue network
NASA Astrophysics Data System (ADS)
Johnson, Krista E.
1990-03-01
This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem; the integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach in which the individual objective functions are weighted and combined into a single additive objective function. The resulting single-objective problem is expressed as a network programming problem and solved using SAS NETFLOW; the process is then repeated with different weightings of the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation of the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
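The weighted-sum scalarization step described above can be sketched on a toy instance. This is an illustrative reconstruction only: the thesis solves a network-flow formulation with ADBASE and SAS NETFLOW, whereas the receivers, frequencies, and objective values below are hypothetical and the tiny problem is solved by brute-force enumeration.

```python
# Weighted-sum scalarization of a tiny multiobjective frequency-assignment
# problem (hypothetical data; brute force stands in for the LP solver).
from itertools import product

FREQS = [0, 1]            # two candidate frequencies
RECEIVERS = [0, 1, 2]     # three HFDF receivers

# obj[k][r][f]: contribution of assigning frequency f to receiver r
# under objective k (e.g. expected geolocations in two coverage regions)
obj = [
    [[3, 1], [2, 2], [1, 3]],   # objective 0
    [[1, 2], [2, 1], [3, 1]],   # objective 1
]

def weighted_best(w0, w1):
    """Enumerate all assignments; return one maximizing w0*f0 + w1*f1."""
    best, best_val = None, float("-inf")
    for assign in product(FREQS, repeat=len(RECEIVERS)):
        f0 = sum(obj[0][r][f] for r, f in enumerate(assign))
        f1 = sum(obj[1][r][f] for r, f in enumerate(assign))
        val = w0 * f0 + w1 * f1
        if val > best_val:
            best, best_val = assign, val
    return best

# Repeating with different weightings traces out efficient solutions.
print(weighted_best(1.0, 0.0))  # favors objective 0
print(weighted_best(0.0, 1.0))  # favors objective 1
```

Sweeping the weights (w0, w1) and collecting the distinct maximizers is the iterative procedure the thesis describes, after which each candidate is scored by its expected number of geolocations.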
Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms.
Franchini, Fabio; Kravtsov, Vladimir E
2009-10-16
We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics and unconventional RMEs with possible experimental applications.
Vertices cannot be hidden from quantum spatial search for almost all random graphs
NASA Astrophysics Data System (ADS)
Glos, Adam; Krawiec, Aleksandra; Kukulski, Ryszard; Puchała, Zbigniew
2018-04-01
In this paper, we show that all nodes can be found optimally for almost all random Erdős-Rényi G(n,p) graphs using the continuous-time quantum spatial search procedure. This works for both adjacency and Laplacian matrices, though under different conditions: the first requires p = ω(log^8(n)/n), while the second requires p ≥ (1+ɛ)log(n)/n, where ɛ > 0. The proof proceeds by analyzing the convergence of the eigenvectors corresponding to outlying eigenvalues in the ‖·‖_∞ norm. At the same time, for p < (1-ɛ)log(n)/n the property does not hold for either matrix, due to connectivity issues. Hence, our derivation for the Laplacian matrix is tight.
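The eigenvector property driving this result can be checked numerically: for a dense G(n,p) graph, the leading adjacency eigenvector is nearly uniform in the sup norm. The sketch below is an illustration of that delocalization, not the paper's proof; the parameters n and p are arbitrary choices.

```python
# Numerical sketch: the leading adjacency eigenvector of a dense
# Erdos-Renyi G(n, p) graph is close to the uniform vector in sup norm.
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 0.5
upper = rng.random((n, n)) < p
A = np.triu(upper, k=1)
A = (A + A.T).astype(float)           # symmetric 0/1 adjacency, no loops

vals, vecs = np.linalg.eigh(A)
v = vecs[:, -1]                       # eigenvector of the largest eigenvalue
v = v * np.sign(v.sum())              # fix the overall sign
uniform = np.full(n, 1.0 / np.sqrt(n))
sup_dev = np.max(np.abs(v - uniform)) * np.sqrt(n)  # relative sup deviation
print(f"relative sup-norm deviation from uniform: {sup_dev:.3f}")
```

For dense p the deviation is small and shrinks as n grows, which is the regime where the search is optimal; near the connectivity threshold it blows up, consistent with the negative result for p < (1-ɛ)log(n)/n.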
NASA Astrophysics Data System (ADS)
Kota, V. K. B.
2003-07-01
Smoothed forms for expectation values ⟨K⟩_E of positive definite operators K follow from the K-density moments, either directly or in many other ways, each giving a series expansion involving polynomials in E. In large spectroscopic spaces one has to partition the many-particle spaces into subspaces, and partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and on the action of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩_E, spin-cutoff factors ⟨J_z²⟩_E, and strength sums ⟨O†O⟩_E, where O is a one-body transition operator.
Matrix approaches to assess terrestrial nitrogen scheme in CLM4.5
NASA Astrophysics Data System (ADS)
Du, Z.
2017-12-01
Terrestrial carbon (C) and nitrogen (N) cycles are commonly represented in Earth system models (ESMs) by a series of balance equations that track influxes into and effluxes out of individual pools. This representation matches our understanding of C and N cycle processes well but makes model behavior difficult to track. To overcome this challenge, we developed a matrix approach that reorganizes the series of terrestrial C and N balance equations in CLM4.5 into two matrix equations, based on the original representation of C and N cycle processes and mechanisms. The matrix approach helps improve the comparability of models and data, supports evaluation of the impacts of additional model components, facilitates benchmark analyses, model intercomparisons, and data-model fusion, and improves model predictive power.
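The reorganization can be illustrated on a toy three-pool carbon balance written in matrix form, dX/dt = B·u − A K X, where u is the input flux, B allocates it among pools, K holds turnover rates, and A encodes inter-pool transfers. All numbers below are hypothetical stand-ins; the actual CLM4.5 operators are far larger.

```python
# Toy illustration of the matrix approach: a 3-pool carbon balance
#   dX/dt = B*u - A @ (K @ X)
# with hypothetical allocation B, turnover K, and transfer matrix A.
import numpy as np

B = np.array([0.7, 0.3, 0.0])            # allocation of input flux u
K = np.diag([0.5, 0.1, 0.01])            # turnover rates (1/yr)
A = np.array([[ 1.0,  0.0, 0.0],         # unit diagonal: loss from each pool
              [-0.4,  1.0, 0.0],         # 40% of pool-1 loss enters pool 2
              [-0.1, -0.2, 1.0]])        # transfers into the slowest pool
u = 100.0                                 # input flux (gC/m2/yr)

# Steady state: B*u = A K X  =>  solve (A K) X* = B u
X_star = np.linalg.solve(A @ K, B * u)
print("steady-state pool sizes:", X_star)
```

Writing the pools this way makes diagnostics like steady-state pool sizes and residence times one linear solve, which is exactly the traceability benefit the matrix approach is after.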
Cluster structure in the correlation coefficient matrix can be characterized by abnormal eigenvalues
NASA Astrophysics Data System (ADS)
Nie, Chun-Xiao
2018-02-01
In a large number of previous studies, researchers have found that some eigenvalues of financial correlation matrices are greater than the values predicted by random matrix theory (RMT); here, we call these abnormal eigenvalues. In order to reveal their hidden meaning, we study a toy model with cluster structure and find that these eigenvalues are related to the cluster structure of the correlation coefficient matrix. Model-based experiments show that in most cases the number of abnormal eigenvalues of the correlation matrix is equal to the number of clusters. In addition, empirical studies show that the sum of the abnormal eigenvalues is related to the clarity of the cluster structure and is negatively correlated with the correlation dimension.
A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.
Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin
2014-01-01
This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, saving unnecessary calculations. These algorithms are classroom-oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the emergence of fractions in most cases. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
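The flexible-pivot, order-reducing determinant idea can be sketched as follows. This is a generic condensation reconstruction, not the authors' exact algorithm: at each step a caller-chosen nonzero pivot a[p][q] is used, the pivot row and column are removed, and the matrix shrinks by one order; exact `fractions` arithmetic stands in for the paper's fraction-avoidance discussion.

```python
# Determinant by pivot condensation with an arbitrary pivot choice:
# det(A) picks up (-1)^(p+q) * pivot at each order reduction.
from fractions import Fraction

def det_flexible(A, choose_pivot=None):
    """Determinant of a square list-of-lists matrix."""
    A = [[Fraction(x) for x in row] for row in A]
    sign, det = 1, Fraction(1)
    while A:
        n = len(A)
        if choose_pivot is None:
            # default: first nonzero entry; any nonzero choice is valid
            p, q = next(((i, j) for i in range(n) for j in range(n)
                         if A[i][j] != 0), (None, None))
        else:
            p, q = choose_pivot(A)
        if p is None:
            return 0                      # no nonzero pivot: determinant 0
        piv = A[p][q]
        det *= piv
        if (p + q) % 2:
            sign = -sign
        # eliminate column q using row p, then drop row p and column q
        A = [[A[i][j] - A[i][q] * A[p][j] / piv
              for j in range(n) if j != q]
             for i in range(n) if i != p]
    return sign * det

print(det_flexible([[2, 1], [5, 3]]))     # 2*3 - 1*5 = 1
```

Because `choose_pivot` is a free parameter, a caller can pick pivots that keep intermediate entries integral, which is the practical point of the flexible pivot selection.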
MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficients entry, instead of n, into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNeT++ show that it substantially outperforms RNC with Gauss-Jordan elimination, providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.
Comprehensive Thematic T-Matrix Reference Database: A 2014-2015 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas
2015-01-01
The T-matrix method is one of the most versatile and efficient direct computer solvers of the macroscopic Maxwell equations and is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper is the seventh update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists a number of earlier publications overlooked previously.
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles, with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings, and eigenvalues). Non-random factors were shown to contribute significantly to total dispersion, and groups of volatile compounds could be associated with these factors. A significant improvement in precision was achieved by considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous work, and to complement this main objective, the presence of non-random dispersion trends was evidenced in data from simple blackberry model systems. Although the influence of the type of matrix on data precision was proved, the model systems did not allow a better understanding of the dispersion patterns in real samples. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.
Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2016-01-01
In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.
Some Methods for Evaluating Program Implementation.
ERIC Educational Resources Information Center
Hardy, Roy A.
An approach to evaluating program implementation is described. This approach includes the development of a project description which includes a structure matrix, sampling from the structure matrix, and preparing an implementation evaluation plan. The implementation evaluation plan should include: (1) verification of implementation of planned…
Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix
Pigni, M. T.; Croft, S.; Gauld, I. C.
2016-04-25
Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates of uranium oxide fuel types due to uncertainties in (1) the 17,18O(α,n) reaction cross sections and (2) the uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross sections in order to derive a set of resonance parameters and a related covariance matrix, which is then used to calculate the energy-dependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by fitting stopping power data in the energy range of 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information for the evaluated 17,18O(α,n) reaction cross sections, as well as the uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed (α,n) source responses provides the mean values and standard deviations of the calculated responses, reflecting the uncertainties in the nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experimental thick-target (α,n) yields for uranium oxide.
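The perturbation-factor propagation step can be sketched in miniature: draw correlated cross-section samples from a covariance matrix, push each sample through a yield model, and read off mean and spread. Everything below is hypothetical two-group data and a deliberately crude thick-target yield surrogate, not SAMMY/SOURCES4C output.

```python
# Minimal sketch of covariance propagation: sample correlated (alpha,n)
# cross-section perturbations and read off the spread in a yield model.
import numpy as np

rng = np.random.default_rng(2)
sigma = np.array([0.30, 0.10])            # cross sections, hypothetical (barn)
stop = np.array([1.2, 0.9])               # stopping-power values (arb. units)
rel_cov = np.array([[0.010, 0.002],       # hypothetical relative covariance
                    [0.002, 0.004]])
cov = rel_cov * sigma[:, None] * sigma[None, :]

# Crude thick-target yield surrogate: sum of cross section / stopping power
samples = rng.multivariate_normal(sigma, cov, size=20000)
yields = (samples / stop).sum(axis=1)
print(f"yield = {yields.mean():.4f} +/- {yields.std():.4f}  (arb. units)")
```

The sample standard deviation of the perturbed responses is exactly the uncertainty the methodology reports, and with a linear response it reproduces the first-order "sandwich" estimate g^T C g.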
NASA Astrophysics Data System (ADS)
Torres-Herrera, E. J.; García-García, Antonio M.; Santos, Lea F.
2018-02-01
We study numerically and analytically the quench dynamics of isolated many-body quantum systems. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional-disordered model with two-body interactions and shown to bound the decay rate of this realistic system. Power-law decays are seen at intermediate times, and dips below the infinite time averages (correlation holes) occur at long times for all three quantities when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium. Assisted by the random matrix analytical results, we propose expressions that describe extremely well the dynamics of the realistic chaotic system at different time scales.
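The survival probability for a full GOE matrix can be computed directly from the eigendecomposition. The sketch below is a minimal numerical illustration of that quantity, not the paper's analytical expressions; the matrix size and times are arbitrary.

```python
# Sketch: survival probability S(t) = |<psi(0)| exp(-iHt) |psi(0)>|^2
# for H drawn from the Gaussian orthogonal ensemble (GOE).
import numpy as np

rng = np.random.default_rng(3)
N = 200
M = rng.standard_normal((N, N))
H = (M + M.T) / 2                          # GOE: real symmetric

E, V = np.linalg.eigh(H)
psi0 = np.zeros(N)
psi0[0] = 1.0                              # an arbitrary initial basis state
c = V.T @ psi0                             # components in the eigenbasis

def survival(t):
    """Survival probability of psi0 at time t (hbar = 1)."""
    amp = np.sum(np.abs(c) ** 2 * np.exp(-1j * E * t))
    return np.abs(amp) ** 2

print(f"S(0) = {survival(0.0):.6f}, S(5) = {survival(5.0):.6f}")
```

Evaluating `survival` on a time grid exposes the features discussed above: an initial decay, a dip below the infinite-time average (the correlation hole) at long times, and a saturation value of order 1/N.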
Matrix Design: An Alternative Model for Organizing the School or Department.
ERIC Educational Resources Information Center
Salem, Philip J.; Gratz, Robert D.
1984-01-01
Explains the matrix organizational structure and describes conditions or pressures that lead an administrator to consider the matrix approach. Provides examples of how it operates in a department or school. (PD)
2013-12-14
[Partially garbled report fragment. Recoverable topics: a population covariance matrix with application to array signal processing, and a sample covariance matrix for which a CLT of linear spectral statistics is studied. Related publications by W. Hachem, M. Kharouf, J. Najim, and J. W. Silverstein include "A CLT for information-theoretic statistics" (2012, 1150004) and work on multi-source power estimation (2010).]
González, Oskar; van Vliet, Michael; Damen, Carola W N; van der Kloet, Frans M; Vreeken, Rob J; Hankemeier, Thomas
2015-06-16
The possible presence of a matrix effect is one of the main concerns in liquid chromatography-mass spectrometry (LC-MS)-driven bioanalysis, due to its impact on the reliability of the obtained quantitative results. Here we propose an approach to correct for the matrix effect in LC-MS with electrospray ionization using postcolumn infusion of eight internal standards (PCI-IS). We applied this approach to a generic ultraperformance liquid chromatography-time-of-flight (UHPLC-TOF) platform developed for small-molecule profiling with a main focus on drugs. Different urine samples were spiked with 19 drugs with different physicochemical properties and analyzed in order to study the matrix effect (in absolute and relative terms). Furthermore, calibration curves for each analyte were constructed, and quality control samples at different concentration levels were analyzed to check the applicability of this approach in quantitative analysis. The matrix effect profiles of the PCI-ISs differed, confirming that the matrix effect is compound-dependent and that the most suitable PCI-IS therefore has to be chosen for each analyte. Chromatograms were reconstructed using analyte and PCI-IS responses, which were used to develop an optimized method that compensates for variation in ionization efficiency. The approach presented here dramatically improved the results in terms of matrix effect. Furthermore, calibration curves of higher quality are obtained, the dynamic range is enhanced, and the accuracy and precision of QC samples are increased. The use of PCI-ISs is a very promising step toward an analytical platform free of matrix effect, which can make LC-MS analysis even more successful, adding higher reliability in quantification to its intrinsic high sensitivity and selectivity.
Discovering cell types in flow cytometry data with random matrix theory
NASA Astrophysics Data System (ADS)
Shen, Yang; Nussenblatt, Robert; Losert, Wolfgang
Flow cytometry is a widely used experimental technique in immunology research. During the experiments, peripheral blood mononuclear cells (PBMC) from a single patient, labeled with multiple fluorescent stains that bind to different proteins, are illuminated by a laser. The intensity of each stain on a single cell is recorded and reflects the amount of protein expressed by that cell. The data analysis focuses on identifying specific cell types related to a disease. Different cell types can be identified by the type and amount of protein they express. To date, this has most often been done manually, by labelling a protein as expressed or not while ignoring the amount of expression. Using a cross correlation matrix of stain intensities, which contains information on both the proteins expressed and their amounts, has been largely ignored by researchers, as it suffers from measurement noise. Here we present an algorithm to identify cell types in flow cytometry data which uses random matrix theory (RMT) to reduce noise in a cross correlation matrix. We demonstrate our method using a published flow cytometry data set. Compared with previous analysis techniques, we were able to rediscover relevant cell types automatically. Department of Physics, University of Maryland, College Park, MD 20742.
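One standard RMT denoising scheme is eigenvalue "clipping": eigenvalues of the correlation matrix that fall below the Marchenko-Pastur edge are treated as noise and flattened to their average. The sketch below illustrates that generic scheme; it is not necessarily the exact algorithm of this work, and the input here is pure noise rather than cytometry data.

```python
# Sketch of RMT noise reduction by eigenvalue clipping at the
# Marchenko-Pastur upper edge, preserving the unit diagonal.
import numpy as np

def clip_correlation(C, T):
    """Denoise correlation matrix C estimated from T observations."""
    N = C.shape[0]
    lam_plus = (1 + np.sqrt(N / T)) ** 2          # MP upper edge
    w, V = np.linalg.eigh(C)
    noise = w < lam_plus
    w_clipped = w.copy()
    if noise.any():
        w_clipped[noise] = w[noise].mean()        # flatten the noise bulk
    C_clean = V @ np.diag(w_clipped) @ V.T
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)               # restore unit diagonal

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 50))                # stand-in for intensities
C = np.corrcoef(X, rowvar=False)
C_clean = clip_correlation(C, T=500)
print(np.allclose(np.diag(C_clean), 1.0))
```

Eigenvalues above the edge, which for real cytometry data carry the co-expression structure used to group cells into types, pass through unchanged.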
NASA Astrophysics Data System (ADS)
Sokołowski, Damian; Kamiński, Marcin
2018-01-01
This study proposes a framework for determining the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles, with a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation, and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. A series of deterministic solutions with varying matrix Young modulus serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions, finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young modulus affects the effective stiffness tensor components and their probability density functions (PDFs).
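The response-function step can be sketched in one dimension: fit a polynomial to a few deterministic "homogenization" solves at varying matrix Young modulus, then propagate the Gaussian input through the fitted response. The closed-form response below is a hypothetical stand-in for the FEM/ABAQUS computations, and an ordinary polynomial least-squares fit stands in for the WLSM step.

```python
# Sketch: polynomial response function recovered from deterministic solves,
# then Monte Carlo propagation of a Gaussian matrix Young modulus.
import numpy as np

def effective_stiffness(E_matrix):
    """Hypothetical homogenized stiffness vs. matrix Young modulus."""
    return 1.2 * E_matrix + 0.05 * E_matrix ** 2

E_design = np.linspace(0.8, 1.2, 11)             # deterministic solves
C_design = effective_stiffness(E_design)
coeffs = np.polyfit(E_design, C_design, deg=2)   # least-squares recovery

rng = np.random.default_rng(5)
E_samples = rng.normal(1.0, 0.05, size=100_000)  # Gaussian input variable
C_samples = np.polyval(coeffs, E_samples)        # propagated response
print(f"mean = {C_samples.mean():.4f}, std = {C_samples.std():.4f}")
```

Higher moments of `C_samples` (skewness, kurtosis) answer question (1) above: a nonlinear response makes the output slightly non-Gaussian even for a Gaussian input.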
Alves, Luciana B; Costa, Priscila P; Scombatti de Souza, Sérgio Luís; de Moraes Grisi, Márcio F; Palioto, Daniela B; Taba, Mario; Novaes, Arthur B
2012-04-01
The aim of this randomized controlled clinical study was to compare the use of an acellular dermal matrix graft (ADMG) with or without the enamel matrix derivative (EMD) in smokers, to evaluate which procedure would provide better root coverage. Nineteen smokers with bilateral Miller Class I or II gingival recessions ≥3 mm were selected. The test group was treated with a combination of ADMG and EMD, and the control group with ADMG alone. Probing depth, relative clinical attachment level, gingival recession height, gingival recession width, keratinized tissue width, and keratinized tissue thickness were evaluated before the surgeries and after 6 months. The Wilcoxon test was used for the statistical analysis at a significance level of 5%. No significant differences were found between groups in any parameter at baseline. The mean gain in recession height between baseline and 6 months and the rate of complete root coverage favored the test group (p = 0.042 and p = 0.019, respectively). Smoking may negatively affect the results achieved through periodontal plastic procedures; however, the combination of ADMG and EMD is beneficial in the root coverage of gingival recessions in smokers, 6 months after the surgery. © 2012 John Wiley & Sons A/S.
Neutrinoless double-β decay of 48Ca in the shell model: Closure versus nonclosure approximation
NASA Astrophysics Data System (ADS)
Sen'kov, R. A.; Horoi, M.
2013-12-01
Neutrinoless double-β decay (0νββ) is a unique process that could reveal physics beyond the Standard Model. Essential ingredients in the analysis of 0νββ rates are the associated nuclear matrix elements. Most of the approaches used to calculate these matrix elements rely on the closure approximation. Here we analyze the light neutrino-exchange matrix elements of 48Ca 0νββ decay and test the closure approximation in a shell-model approach. We calculate the 0νββ nuclear matrix elements for 48Ca using both the closure approximation and a nonclosure approach, and we estimate the uncertainties associated with the closure approximation. We demonstrate that the nonclosure approach has excellent convergence properties which allow us to avoid unmanageable computational cost. Combining the nonclosure and closure approaches we propose a new method of calculation for 0νββ decay rates which can be applied to the 0νββ decay rates of heavy nuclei, such as 76Ge or 82Se.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
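The appeal of score matching noted above, avoiding the normalizing constant, can be made concrete in the Gaussian special case. For x ~ N(0, K⁻¹) the score matching objective is quadratic in the precision matrix K, and its unregularized stationarity condition is the Lyapunov-type equation S K + K S = 2I with S the sample covariance. The sketch below solves it via a Kronecker system; this is an illustrative special case, not the thesis's regularized nonparametric estimator.

```python
# Gaussian score matching, unregularized special case:
# solve the stationarity condition  S K + K S = 2 I  for the precision K.
import numpy as np

def score_matching_gaussian(S):
    """Solve S K + K S = 2 I (S symmetric positive definite)."""
    n = S.shape[0]
    I = np.eye(n)
    # vec(S K + K S) = (kron(I, S) + kron(S, I)) @ vec(K)
    lhs = np.kron(I, S) + np.kron(S, I)
    k = np.linalg.solve(lhs, (2 * I).ravel())
    return k.reshape(n, n)

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
K = score_matching_gaussian(S)
print(np.allclose(K, np.linalg.inv(S)))     # for Gaussians, K = S^{-1}
```

Adding an ℓ1 penalty on the off-diagonal entries of K turns this quadratic program into the sparse, graphical-lasso-like estimator discussed in the text.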
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes, which model the imprecision of the hidden data. In this framework, we assume dependence between bands and express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity of both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands, which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
The Effect of General Statistical Fiber Misalignment on Predicted Damage Initiation in Composites
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2014-01-01
A micromechanical method is employed to model unidirectional composites whose fiber orientation can possess various statistical misalignment distributions. The method relies on probability-weighted averaging of the appropriate concentration tensor, which is established by the micromechanical procedure. This approach provides access to the local field quantities throughout the constituents, from which the initiation of damage in the composite can be predicted. In contrast, a typical macromechanical procedure can determine the effective composite elastic properties in the presence of statistical fiber misalignment but cannot provide the local fields. A fully random fiber distribution is presented as a special case of the proposed micromechanical method. Results are given that illustrate the effects of various amounts of fiber misalignment in terms of the standard deviations of the in-plane and out-of-plane misalignment angles, where normal distributions have been employed. Damage initiation envelopes, local fields, effective moduli, and strengths are predicted for polymer and ceramic matrix composites with given normal distributions of misalignment angles, as well as fully random fiber orientation.
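The probability-weighted averaging idea can be sketched on a simpler quantity than the concentration tensor: the classical off-axis Young modulus of an orthotropic lamina, averaged against a normal distribution of in-plane misalignment angles. The ply properties below are generic carbon/epoxy-like values, not taken from the paper.

```python
# Sketch: probability-weighted average of the off-axis modulus over a
# normal distribution of in-plane fiber misalignment angles.
import numpy as np

E1, E2, G12, nu12 = 140e9, 10e9, 5e9, 0.3      # Pa (illustrative values)

def Ex(theta):
    """Classical off-axis Young modulus of an orthotropic lamina."""
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    inv = c2**2 / E1 + s2**2 / E2 + (1/G12 - 2*nu12/E1) * c2 * s2
    return 1.0 / inv

def averaged_modulus(std_deg, n=20001):
    """Weight Ex by a N(0, std) misalignment distribution (quadrature)."""
    if std_deg == 0.0:
        return Ex(0.0)
    std = np.radians(std_deg)
    th = np.linspace(-4 * std, 4 * std, n)
    w = np.exp(-0.5 * (th / std) ** 2)          # Gaussian weights
    return np.sum(w * Ex(th)) / np.sum(w)

print(averaged_modulus(0.0) / 1e9)     # aligned case recovers E1
print(averaged_modulus(5.0) / 1e9)     # misalignment softens the ply
```

The same weighting, applied component-wise to the concentration tensor instead of to a scalar modulus, is what gives the method access to the probability-averaged local fields.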
Fluctuation theorems for discrete kinetic models of molecular motors
NASA Astrophysics Data System (ADS)
Faggionato, Alessandra; Silvestri, Vittoria
2017-04-01
Motivated by discrete kinetic models for non-cooperative molecular motors on periodic tracks, we consider random walks (not necessarily Markov) on quasi one-dimensional (1d) lattices, obtained by gluing several copies of a fundamental graph in a linear fashion. We show that, for a suitable class of quasi-1d lattices, the large deviation rate function associated with the position of the walker satisfies a Gallavotti-Cohen symmetry for any choice of the dynamical parameters defining the stochastic walk. This class includes the linear model considered in Lacoste et al (2008 Phys. Rev. E 78 011915). We also derive fluctuation theorems for the time-integrated cycle currents and discuss how the matrix approach of Lacoste et al can be extended to derive the above Gallavotti-Cohen symmetry for any Markov random walk on Z with periodic jump rates. Finally, we review in the present context some large deviation results of Faggionato and Silvestri (2017 Ann. Inst. Henri Poincaré 53 46-78) and give some specific examples with explicit computations.
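The Gallavotti-Cohen symmetry can be verified numerically in the simplest instance of this family: a continuous-time biased walk on Z with right rate p and left rate q, whose scaled cumulant generating function is ψ(k) = p e^k + q e^{-k} − (p+q) and satisfies ψ(k) = ψ(−k − ln(p/q)). This toy check illustrates the symmetry, not the quasi-1d constructions of the paper; the rates are arbitrary.

```python
# Numeric check of the Gallavotti-Cohen symmetry for the SCGF of a
# continuous-time biased random walk on Z (illustrative rates).
import math

p, q = 2.0, 0.5                      # right / left jump rates

def psi(k):
    """Scaled cumulant generating function of the walker's position."""
    return p * math.exp(k) + q * math.exp(-k) - (p + q)

eps = math.log(p / q)                # affinity driving the current
for k in (-1.0, 0.3, 2.0):
    assert abs(psi(k) - psi(-k - eps)) < 1e-9
print(f"Gallavotti-Cohen symmetry holds; affinity = {eps:.3f}")
```

By Legendre duality the same identity reads I(v) − I(−v) = −v·ln(p/q) for the rate function I, the fluctuation-theorem form quoted in the text.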
A Delphi-matrix approach to SEA and its application within the tourism sector in Taiwan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, N.-W.; Hsiao, T.-Y.; Yu, Y.-H.
Strategic Environmental Assessment (SEA) is a procedural tool, and within the framework of SEA several different types of analytical methods can be used in the assessment. However, the impact matrix currently used in Taiwan has some disadvantages. Hence, a Delphi-matrix approach to SEA is proposed here to improve the performance of Taiwan's SEA. This new approach is based on the impact matrix combined with indicators of sustainability, and the Delphi method is then employed to collect experts' opinions. In addition, the assessment of the National Floriculture Park Plan and the Taiwan Flora 2008 Program is taken as an example to examine this new method. Although international exhibitions are important tourism (economic) activities, SEA has seldom been applied to the tourism sector. Finally, the Delphi-matrix approach to SEA for tourism development plans is established, containing eight assessment topics and 26 corresponding categories. In summary, three major types of impacts are found: resource usage, pollution emissions, and changes to local cultures. Resource usages, such as water, electricity, and natural gas demand, are calculated on a per capita basis. Various forms of pollution resulting from this plan, such as air, water, soil, waste, and noise, are also identified.
Liu, Chuanjun; Wyszynski, Bartosz; Yatabe, Rui; Hayashi, Kenshi; Toko, Kiyoshi
2017-02-16
The detection and recognition of metabolically derived aldehydes, which have been identified as important products of oxidative stress and biomarkers of cancers, are considered an effective approach for early cancer detection as well as health status monitoring. Quartz crystal microbalance (QCM) sensor arrays based on molecularly imprinted sol-gel (MISG) materials were developed in this work for highly sensitive detection and highly selective recognition of typical aldehyde vapors, including hexanal (HAL), nonanal (NAL), and benzaldehyde (BAL). The MISGs were prepared by a sol-gel procedure using two matrix precursors: tetraethyl orthosilicate (TEOS) and tetrabutoxytitanium (TBOT). Aminopropyltriethoxysilane (APT), diethylaminopropyltrimethoxysilane (EAP), and trimethoxyphenylsilane (TMP) were added as functional monomers to adjust the imprinting effect of the matrix. Hexanoic acid (HA), nonanoic acid (NA), and benzoic acid (BA) were used as pseudotemplates in view of their structural analogy to the target molecules as well as their strong hydrogen-bonding interaction with the matrix. In total, 13 types of MISGs with different components were prepared and coated on QCM electrodes by spin coating. Their sensing characteristics towards the three aldehyde vapors at different concentrations were investigated qualitatively. The results demonstrated that the response of individual sensors to each target strongly depended on the matrix precursors, functional monomers, and template molecules. An optimization of the 13 MISG materials was carried out based on statistical analyses such as principal component analysis (PCA), multivariate analysis of covariance (MANCOVA), and hierarchical cluster analysis (HCA). The optimized sensor array, consisting of five channels, showed a high discrimination ability for the aldehyde vapors, which was confirmed by quantitative comparison with a randomly selected array.
It was suggested that both the molecular imprinting (MIP) effect and the matrix effect contributed to the sensitivity and selectivity of the optimized sensor array. The developed MISGs are expected to be promising materials for the detection and recognition of volatile aldehydes contained in exhaled breath or human body odor.
Yu, Hui; Mao, Kui-Tao; Shi, Jian-Yu; Huang, Hua; Chen, Zhi; Dong, Kai; Yiu, Siu-Ming
2018-04-11
Drug-drug interactions (DDIs) often cause unexpected and even adverse drug reactions, so it is important to identify DDIs before drugs reach the market. However, preclinical identification of DDIs requires much money and time. Computational approaches have exhibited their ability to predict potential DDIs on a large scale by utilizing pre-market drug properties (e.g. chemical structure). Nevertheless, none of them can predict the two comprehensive types of DDIs, enhancive and degressive DDIs, which increase and decrease the activities of the interacting drugs, respectively. There is also a lack of systematic analysis of the structural relationships among known DDIs. Revealing such relationships is important because it helps to understand how DDIs occur. Both the prediction of comprehensive DDIs and the discovery of structural relationships among them provide important guidance for making co-prescriptions. In this work, treating a set of comprehensive DDIs as a signed network, we design a novel model (DDINMF) for the prediction of enhancive and degressive DDIs based on semi-nonnegative matrix factorization. Encouragingly, DDINMF achieves good conventional DDI prediction (AUROC = 0.872 and AUPR = 0.605) and comprehensive DDI prediction (AUROC = 0.796 and AUPR = 0.579). Compared with two state-of-the-art approaches, DDINMF shows its superiority. Finally, representing DDIs as a binary network and a signed network respectively, an analysis based on NMF reveals crucial knowledge hidden among DDIs. Our approach is able to predict not only conventional binary DDIs but also comprehensive DDIs.
More importantly, it reveals several key points about the DDI network: (1) both the binary and signed networks show fairly clear clusters, in which the drug degree and the difference between positive and negative degree follow distinctive distributions; (2) drugs with large degrees tend to have a larger difference between positive and negative degree; (3) although the binary DDI network contains no explicit information about enhancive and degressive DDIs, it implies some of their relationships in the comprehensive DDI matrix; (4) the occurrence of signs indicating enhancive and degressive DDIs is not random, because the comprehensive DDI network exhibits structural balance.
Lisewski, Andreas Martin; Lichtarge, Olivier
2010-01-01
Recurrent international financial crises inflict significant damage to societies and stress the need for mechanisms or strategies to control risk and temper market uncertainties. Unfortunately, the complex network of market interactions often confounds rational approaches to optimize financial risks. Here we show that investors can overcome this complexity and globally minimize risk in portfolio models for any given expected return, provided the relative margin requirement remains below a critical, empirically measurable value. In practice, for markets with centrally regulated margin requirements, a rational stabilization strategy would be to keep margins small enough. This result follows from ground states of the random field spin glass Ising model that can be calculated exactly through convex optimization when relative spin coupling is limited by the norm of the network's Laplacian matrix. In that regime, this novel approach is robust to noise in empirical data and may also be broadly relevant to complex networks with frustrated interactions that are studied throughout scientific fields. PMID:20625477
LSRN: A parallel iterative solver for strongly over- or underdetermined systems
Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.
2014-01-01
We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x∈ℝⁿ} ‖Ax − b‖₂, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK’s DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
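The preconditioning idea described above can be sketched in a few lines. The following is a minimal NumPy/SciPy illustration for the m ≫ n, full-column-rank case only (the paper also covers m ≪ n and rank deficiency, and its implementation is parallel); the function name and the toy test problem are ours.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsrn_overdetermined(A, b, gamma=2.0, seed=None):
    """Minimal LSRN sketch for m >> n with full column rank."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    s = int(np.ceil(gamma * n))
    G = rng.standard_normal((s, m))                  # random normal projection
    _, sig, Vt = np.linalg.svd(G @ A, full_matrices=False)
    N = Vt.T / sig                                   # right preconditioner N = V Sigma^{-1}
    y = lsqr(A @ N, b, atol=1e-10, btol=1e-10)[0]    # A @ N is well-conditioned w.h.p.
    return N @ y

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))
x_true = rng.standard_normal(20)
x = lsrn_overdetermined(A, A @ x_true, seed=1)
print(np.linalg.norm(x - x_true))
```

Because the preconditioned matrix A·N is well-conditioned with high probability, the LSQR iteration count becomes predictable regardless of the conditioning of A itself.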
Synchronizability of random rectangular graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded into hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically. It is proven that the more elongated the rectangle, the harder the network is to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
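The synchronizability criterion used above, the eigenratio of the network Laplacian, is straightforward to compute for any given graph. A small illustrative sketch (the toy graphs below are our own examples, not RRGs):

```python
import numpy as np

def laplacian_eigenratio(adj):
    """Eigenratio lambda_N / lambda_2 of the graph Laplacian L = D - A.
    In the master stability framework, a smaller eigenratio indicates a
    network that is easier to synchronize."""
    L = np.diag(adj.sum(axis=1)) - adj
    lam = np.sort(np.linalg.eigvalsh(L))
    return lam[-1] / lam[1]          # lam[1] > 0 iff the graph is connected

# A 4-node path graph vs. the 4-node complete graph: the complete graph
# is easiest to synchronize (its eigenratio is exactly 1).
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
complete = np.ones((4, 4)) - np.eye(4)
print(laplacian_eigenratio(path), laplacian_eigenratio(complete))
```

The same function applied to increasingly elongated geometric graphs would reproduce the qualitative trend reported above: elongation raises the eigenratio and hence hampers synchronization.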
New stimulation pattern design to improve P300-based matrix speller performance at high flash rate
NASA Astrophysics Data System (ADS)
Polprasert, Chantri; Kukieattikool, Pratana; Demeechai, Tanee; Ritcey, James A.; Siwamogsatham, Siwaruk
2013-06-01
Objective. We propose a new stimulation pattern design for the P300-based matrix speller aimed at increasing the minimum target-to-target interval (TTI). Approach. Inspired by the simplicity and strong performance of the conventional row-column (RC) stimulation, the proposed stimulation is obtained by modifying the RC stimulation through alternating row and column flashes selected according to the proposed design rules. The second flash of each double-flash component is then delayed by a number of flashing instants to increase the minimum TTI. The trade-off inherent in this approach is reduced randomness within the stimulation pattern. Main results. We test the proposed stimulation pattern and compare its performance, in terms of selection accuracy and raw and practical bit rates, with the conventional RC flashing paradigm over several flash rates. By increasing the minimum TTI within the stimulation sequence, the proposed stimulation yields more identifiable event-related potentials than the conventional RC stimulation as the flash rate increases. This leads to significant improvement in letter selection accuracy and in raw and practical bit rates over the conventional RC stimulation. Significance. These studies demonstrate that significant performance improvement over the RC stimulation is obtained without additional testing or training samples to compensate for the low P300 amplitude at high flash rates. We show that our proposed stimulation is more robust than the RC stimulation to the reduced signal strength caused by increased flash rates.
Structure-Function Network Mapping and Its Assessment via Persistent Homology
2017-01-01
Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514 × 2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127
EFFECTS OF CHRONIC STRESS ON WILDLIFE POPULATIONS: A POPULATION MODELING APPROACH AND CASE STUDY
This chapter describes a matrix modeling approach to characterize and project risks to wildlife populations subject to chronic stress. Population matrix modeling was used to estimate effects of one class of environmental contaminants, dioxin-like compounds (DLCs), to populations ...
Matrix factorization-based data fusion for gene function prediction in baker's yeast and slime mold.
Zitnik, Marinka; Zupan, Blaž
2014-01-01
The development of effective methods for the characterization of gene functions that are able to combine diverse data sources in a sound and easily extendable way is an important goal in computational biology. We have previously developed a general matrix factorization-based data fusion approach for gene function prediction. In this manuscript, we show that this data fusion approach can be applied to gene function prediction and that it can fuse various heterogeneous data sources, such as gene expression profiles, known protein annotations, and interaction and literature data. The fusion is achieved by simultaneous matrix tri-factorization that shares matrix factors between sources. We demonstrate the effectiveness of the approach by evaluating its performance on predicting ontological annotations in the slime mold D. discoideum and on recognizing proteins of baker's yeast S. cerevisiae that participate in the ribosome or are located in the cell membrane. Our approach achieves predictive performance comparable to that of state-of-the-art kernel-based data fusion, but requires fewer data preprocessing steps.
Lanczos algorithm with matrix product states for dynamical correlation functions
NASA Astrophysics Data System (ADS)
Dargel, P. E.; Wöllert, A.; Honecker, A.; McCulloch, I. P.; Schollwöck, U.; Pruschke, T.
2012-05-01
The density-matrix renormalization group (DMRG) algorithm can be adapted to the calculation of dynamical correlation functions in various ways which all represent compromises between computational efficiency and physical accuracy. In this paper we reconsider the oldest approach based on a suitable Lanczos-generated approximate basis and implement it using matrix product states (MPS) for the representation of the basis states. The direct use of matrix product states combined with an ex post reorthogonalization method allows us to avoid several shortcomings of the original approach, namely the multitargeting and the approximate representation of the Hamiltonian inherent in earlier Lanczos-method implementations in the DMRG framework, and to deal with the ghost problem of Lanczos methods, leading to a much better convergence of the spectral weights and poles. We present results for the dynamic spin structure factor of the spin-1/2 antiferromagnetic Heisenberg chain. A comparison to Bethe ansatz results in the thermodynamic limit reveals that the MPS-based Lanczos approach is much more accurate than earlier approaches at minor additional numerical cost.
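For readers unfamiliar with the underlying construction, here is a dense-matrix sketch of a Lanczos-generated basis for a spectral function, with full reorthogonalization standing in for the ex post reorthogonalization discussed above (the MPS machinery of the paper is not reproduced). The poles and weights of ⟨v₀|δ(ω − H)|v₀⟩ are read off from the tridiagonal matrix:

```python
import numpy as np

def lanczos_spectral(H, v0, k):
    """Plain Lanczos with full reorthogonalization (avoids 'ghost' states):
    returns the poles and spectral weights of <v0| delta(omega - H) |v0>."""
    n = H.shape[0]
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    v = v0 / np.linalg.norm(v0)
    for j in range(k):
        V[:, j] = v
        w = H @ v
        alpha[j] = v @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # reorthogonalize against all previous vectors
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            v = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    return theta, S[0, :] ** 2                      # poles and spectral weights

rng = np.random.default_rng(0)
H = rng.standard_normal((30, 30)); H = (H + H.T) / 2   # toy Hermitian "Hamiltonian"
poles, weights = lanczos_spectral(H, rng.standard_normal(30), 30)
print(weights.sum())                                   # weights form a probability distribution
```

With k equal to the matrix dimension the poles coincide with the exact eigenvalues; in practice k is kept much smaller and the low-lying spectral weight already converges well, which is the regime the MPS implementation targets.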
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to adaptive Gaussian quadrature, the Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo
2013-02-01
We analyzed cross-correlations between price fluctuations of global financial indices (20 daily stock indices over the world) and local indices (daily indices of 200 companies in the Korean stock market) by using random matrix theory (RMT). We compared eigenvalues and components of the largest and second largest eigenvectors of the cross-correlation matrix before, during, and after the global financial crisis in 2008. We find that the majority of the eigenvalues fall within the RMT bounds [λ−, λ+], where λ− and λ+ are the lower and upper bounds of the eigenvalues of random correlation matrices. The components of the eigenvectors for the largest positive eigenvalues indicate the identical financial market mode dominating the global and local indices. On the other hand, the components of the eigenvector corresponding to the second largest eigenvalue alternate between positive and negative values. The components before the crisis change sign during the crisis, and those during the crisis change sign after the crisis. The largest inverse participation ratio (IPR), corresponding to the eigenvector of the smallest eigenvalue, is higher after the crisis than during any other period in both the global and local indices. During the global financial crisis, the correlations among the global indices and among the local stock indices were perturbed significantly. However, the correlations between indices quickly recover the trends seen before the crisis.
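The RMT bounds [λ−, λ+] referred to above are the edges of the Marchenko-Pastur law: for N uncorrelated time series of length T they are λ± = (1 ± √(N/T))². A minimal sketch with surrogate data (our own toy example, not the stock-index data):

```python
import numpy as np

def mp_bounds(n_assets, n_obs):
    """Marchenko-Pastur bounds for the eigenvalues of the correlation matrix
    of n_assets uncorrelated series of length n_obs."""
    q = n_assets / n_obs
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

rng = np.random.default_rng(0)
N, T = 50, 1000
returns = rng.standard_normal((T, N))      # i.i.d. surrogate "returns": no true correlations
C = np.corrcoef(returns, rowvar=False)
lam = np.linalg.eigvalsh(C)
lo, hi = mp_bounds(N, T)
inside = np.mean((lam >= lo) & (lam <= hi))
print(f"fraction of eigenvalues within [{lo:.3f}, {hi:.3f}]: {inside:.2f}")
```

For real market data, eigenvalues escaping above λ+ (most prominently the "market mode") carry the genuine collective correlations, while the bulk inside the bounds is statistically indistinguishable from noise.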
Two cases of matrix-producing carcinoma showing chondromyxoid matrix in cytological specimens.
Tajima, Shogo; Koda, Kenji
2015-01-01
Matrix-producing carcinoma (MPC) is extremely rare. Limited reports have described the cytological aspects of MPC. Herein, we present 2 cases of MPC, both of which showed ring-enhancement on magnetic resonance imaging (MRI) and chondromyxoid matrix on cytological specimens. In these cases, the diagnosis of MPC was preoperatively suspected. Recognizing extracellular matrix as chondromyxoid matrix on the cytological specimen is important in making a distinction between MPC and mucinous carcinoma. They share some features on cytology and MRI (ring-enhancement) but have different prognoses and involve different approaches for obtaining histological specimens for neoadjuvant therapy. The reason for the different approaches for obtaining the histological specimens is that tumor cells usually distribute peripherally in MPC in contrast to the relatively uniform distribution of mucinous carcinoma. Therefore, it would be helpful if the diagnosis of MPC can be suspected by examination of the cytological specimen.
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because the matrix contains many parameters and the estimate must be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously by either approach alone. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models, denoted ARMACD, that exploits the structure allowed by combining AR and MA modeling of the covariance matrix. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
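The AR-side building block, the modified Cholesky decomposition T Σ T′ = D, can be computed by sequential regressions of each outcome on its predecessors; below is a minimal sketch (the example covariance matrix is ours, and the paper's ARMA combination is not reproduced):

```python
import numpy as np

def modified_cholesky(Sigma):
    """Modified Cholesky decomposition T Sigma T' = D: T is unit lower
    triangular (its below-diagonal entries are the negatives of the
    generalized autoregressive parameters) and D is diagonal (innovation
    variances). Positive definiteness of Sigma is guaranteed by D > 0."""
    p = Sigma.shape[0]
    T = np.eye(p)
    for j in range(1, p):
        # regress outcome j on outcomes 0..j-1
        phi = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])
        T[j, :j] = -phi
    D = np.diag(np.diag(T @ Sigma @ T.T))
    return T, D

# AR(1)-like covariance: Sigma_ij = 0.5 ** |i - j|
p = 4
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
T, D = modified_cholesky(Sigma)
print(np.diag(D))   # innovation variances
```

Because the decomposition maps the constrained covariance matrix to unconstrained regression coefficients plus positive variances, the parameters of T and D can be modeled directly (e.g. as functions of lag) without risking an indefinite estimate, which is the property the ARMACD class builds on.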
Matrix Training of Preliteracy Skills with Preschoolers with Autism
ERIC Educational Resources Information Center
Axe, Judah B.; Sainato, Diane M.
2010-01-01
Matrix training is a generative approach to instruction in which words are arranged in a matrix so that some multiword phrases are taught and others emerge without direct teaching. We taught 4 preschoolers with autism to follow instructions to perform action-picture combinations (e.g., circle the pepper, underline the deer). Each matrix contained…
Fushiki, Tadayoshi
2009-07-01
The correlation matrix is a fundamental statistic that is used in many fields. For example, GroupLens, a collaborative filtering system, uses the correlation between users for predictive purposes. Since the correlation is a natural similarity measure between users, the correlation matrix may be used as the Gram matrix in kernel methods. However, the estimated correlation matrix sometimes has a serious defect: although a correlation matrix is by definition positive semidefinite, the estimated one may not be positive semidefinite when not all ratings are observed. To obtain a positive semidefinite correlation matrix, the nearest correlation matrix problem has recently been studied in the fields of numerical analysis and optimization. However, such studies do not explicitly use statistical properties. To obtain a positive semidefinite correlation matrix, we assume an approximate model. Using the model, an estimate is obtained as the optimal point of an optimization problem formulated with information on the variances of the estimated correlation coefficients. The problem is solved by a convex quadratic semidefinite program. A penalized likelihood approach is also examined. The MovieLens data set is used to test our approach.
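As a point of reference for the problem described above, the nearest correlation matrix in the unweighted Frobenius norm can be computed by Higham's alternating-projections method; this baseline ignores the variance information that the paper's approach exploits, and the indefinite example matrix is our own:

```python
import numpy as np

def nearest_correlation(A, n_iter=200):
    """Higham's alternating projections (with Dykstra corrections) between the
    PSD cone and the set of symmetric matrices with unit diagonal."""
    Y, dS = A.copy(), np.zeros_like(A)
    for _ in range(n_iter):
        R = Y - dS                          # apply Dykstra correction
        w, V = np.linalg.eigh(R)
        X = (V * np.maximum(w, 0)) @ V.T    # project onto the PSD cone
        dS = X - R
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)            # project onto unit-diagonal matrices
    return Y

# An indefinite "correlation" matrix, as can arise from pairwise-complete data
A = np.array([[1.0,  0.9,  0.7],
              [0.9,  1.0, -0.9],
              [0.7, -0.9,  1.0]])
C = nearest_correlation(A)
print(np.linalg.eigvalsh(C))   # smallest eigenvalue is now (numerically) nonnegative
```

A variance-weighted variant, in the spirit of the approach above, would simply downweight entries estimated from many common ratings in the projection norm.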
Medeiros Turra, Kely; Pineda Rivelli, Diogo; Berlanga de Moraes Barros, Silvia; Mesquita Pasqualoto, Kerly Fernanda
2016-07-01
A receptor-independent (RI) four-dimensional structure-activity relationship (4D-QSAR) formalism was applied to a set of sixty-four β-N-biaryl ether sulfonamide hydroxamate derivatives, previously reported as potent inhibitors against matrix metalloproteinase subtype 9 (MMP-9). MMP-9 belongs to a group of enzymes related to the cleavage of several extracellular matrix components and has been associated to cancer invasiveness/metastasis. The best RI 4D-QSAR model was statistically significant (N = 47; r² = 0.91; q² = 0.83; LSE = 0.09; LOF = 0.35; outliers = 0). Leave-N-out (LNO) and y-randomization approaches indicated the QSAR model was robust and presented no chance correlation, respectively. Furthermore, it also had good external predictability (82 %) regarding the test set (N = 17). In addition, the grid cell occupancy descriptors (GCOD) of the predicted bioactive conformation for the most potent inhibitor were successfully interpreted when docked into the MMP-9 active site. The 3D-pharmacophore findings were used to predict novel ligands and exploit the MMP-9 calculated binding affinity through molecular docking procedure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cho, Yi Je; Lee, Wookjin; Park, Yong Ho
2017-01-01
The elastoplastic deformation behaviors of hollow glass microsphere/iron syntactic foam under tension were modeled using a representative volume element (RVE) approach. The three-dimensional microstructures of the iron syntactic foam with 5 wt % glass microspheres were reconstructed using the random sequential adsorption algorithm. The constitutive behavior of elastoplasticity in the iron matrix and elastic-brittle failure of the glass microspheres were simulated in the models. An appropriate RVE size was statistically determined by evaluating the elastic modulus, Poisson’s ratio, and yield strength in terms of model sizes and boundary conditions. The model was validated by its agreement with experimental findings. The tensile deformation mechanism of the syntactic foam, considering the fracture of the microspheres, was then investigated. In addition, the feasibility of introducing interfacial debonding behavior into the proposed model was briefly investigated to improve the accuracy in depicting fracture behaviors of the syntactic foam. The modeling techniques and the model itself have major potential for applications not only in the study of hollow glass microsphere/iron syntactic foams, but also in the design of composites with a high-modulus matrix and high-strength reinforcement. PMID:29048346
Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.
Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A
2018-01-18
Leslie matrix models are an important analysis tool in conservation biology that is applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances an estimate of λ is available but the vital rates are poorly understood, in which case they can be solved for using an inverse matrix approach. However, such approaches are rarely attempted because they require information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods had previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious errors in the demographic parameters for the PNG population. This approach creates an opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
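The SIR idea, sampling vital-rate candidates from priors, weighting each by how well the implied λ matches an independent estimate, and resampling in proportion to the weights, can be sketched as follows. All numbers below are hypothetical illustrations, not the grey reef shark parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def leslie_lambda(f, s):
    """Finite rate of increase: dominant eigenvalue of a Leslie matrix with
    fecundities f (top row) and survival probabilities s (sub-diagonal)."""
    n = len(f)
    L = np.zeros((n, n))
    L[0, :] = f
    L[np.arange(1, n), np.arange(n - 1)] = s
    return np.max(np.abs(np.linalg.eigvals(L)))

# Hypothetical 3-age-class model: fecundities assumed known, constant
# survival exp(-M) with unknown natural mortality M; an independent
# estimate gives lambda_obs with uncertainty sigma.
f = np.array([0.0, 1.2, 2.0])
lambda_obs, sigma = 1.05, 0.02

# SIR: sample M from a vague prior, weight by the likelihood of lambda_obs,
# then resample with probability proportional to the weights.
M = rng.uniform(0.05, 0.8, 20000)
lam = np.array([leslie_lambda(f, np.exp([-m, -m])) for m in M])
w = np.exp(-0.5 * ((lam - lambda_obs) / sigma) ** 2)
post = rng.choice(M, size=5000, p=w / w.sum())
print(post.mean(), post.std())   # posterior summary for natural mortality M
```

The resampled values form an approximate posterior for the poorly known vital rate, obtained without any data on age structure, which mirrors the logic applied above to mortality estimation.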
A density functional approach to ferrogels
NASA Astrophysics Data System (ADS)
Cremer, P.; Heinen, M.; Menzel, A. M.; Löwen, H.
2017-07-01
Ferrogels consist of magnetic colloidal particles embedded in an elastic polymer matrix. As a consequence, their structural and rheological properties are governed by a competition between magnetic particle-particle interactions and mechanical matrix elasticity. Typically, the particles are permanently fixed within the matrix, which makes them distinguishable by their positions. Over time, particle neighbors do not change due to the fixation by the matrix. Here we present a classical density functional approach for such ferrogels. We map the elastic matrix-induced interactions between neighboring colloidal particles distinguishable by their positions onto effective pairwise interactions between indistinguishable particles similar to a ‘pairwise pseudopotential’. Using Monte-Carlo computer simulations, we demonstrate for one-dimensional dipole-spring models of ferrogels that this mapping is justified. We then use the pseudopotential as an input into classical density functional theory of inhomogeneous fluids and predict the bulk elastic modulus of the ferrogel under various conditions. In addition, we propose the use of an ‘external pseudopotential’ when one switches from the viewpoint of a one-dimensional dipole-spring object to a one-dimensional chain embedded in an infinitely extended bulk matrix. Our mapping approach paves the way to describe various inhomogeneous situations of ferrogels using classical density functional concepts of inhomogeneous fluids.
BCD Beam Search: considering suboptimal partial solutions in Bad Clade Deletion supertrees.
Fleischauer, Markus; Böcker, Sebastian
2018-01-01
Supertree methods enable the reconstruction of large phylogenies. The supertree problem can be formalized in different ways in order to cope with contradictory information in the input. Some supertree methods are based on encoding the input trees in a matrix; other methods try to find minimum cuts in some graph. Recently, we introduced Bad Clade Deletion (BCD) supertrees, which combine the graph-based computation of minimum cuts with optimization of a global objective function on the matrix representation of the input trees. The BCD supertree method has guaranteed polynomial running time and is very swift in practice. The quality of reconstructed supertrees was superior to matrix representation with parsimony (MRP) and usually on par with SuperFine for simulated data; for biological data in particular, however, the quality of BCD supertrees could not keep up with that of SuperFine supertrees. Here, we present a beam search extension for the BCD algorithm that keeps alive a constant number of partial solutions in each top-down iteration phase. The guaranteed worst-case running time of the new algorithm is still polynomial in the size of the input. We present an exact and a randomized subroutine to generate suboptimal partial solutions. Both beam search approaches consistently improve supertree quality on all evaluated datasets when keeping 25 suboptimal solutions alive. The supertree quality of the BCD Beam Search algorithm is on par with MRP and SuperFine even for biological data. This is the best performance of a polynomial-time supertree algorithm reported so far.
Phase diagram of matrix compressed sensing
NASA Astrophysics Data System (ADS)
Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka
2016-12-01
In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.
NASA Astrophysics Data System (ADS)
Jackisch, Conrad; van Schaik, Loes; Graeff, Thomas; Zehe, Erwin
2014-05-01
Preferential flow through macropores often determines hydrological characteristics, especially regarding runoff generation and fast transport of solutes. Macropore settings may yet be very different in nature and dynamics, depending on their origin. While biogenic structures follow activity cycles (e.g. earthworms) and population conditions (e.g. roots), pedogenic and geogenic structures may depend on water stress (e.g. cracks) or large events (e.g. flushed voids between skeleton and soil pipes) or simply persist (e.g. bedrock interface). On the one hand, such dynamic site characteristics can be observed in seasonal changes of a site's reaction to precipitation. On the other hand, sprinkling experiments accompanied by tracers or time-lapse 3D ground-penetrating radar are suitable tools to determine infiltration patterns and macropore configuration. However, model representation of the macropore-matrix system is still problematic, because models either rely on effective parameters (assuming a well-mixed state) or on explicit advection that strongly simplifies or neglects interaction with the diffusive flow domain. Motivated by the dynamic nature of macropores, we present a novel model approach for interacting diffusive and advective transport of water, solutes and energy in structured soils. It relies solely on scale- and process-aware observables. A representative set of macropores (data from sprinkling experiments) determines the process model scale through 1D advective domains. These are connected to a 2D matrix domain which is defined by pedo-physical retention properties. Water is represented as particles. Diffusive flow is governed by a 2D random walk of these particles, while advection may take place in the macropore domain.
Through a representation of matrix and macropores as connected diffusive and advective domains for water transport, we open up double-domain concepts linking pore-scale physics to preferential macroscale fingerprints without effective parameterisation or mixing assumptions. Moreover, solute transport, energy balance aspects and lateral heterogeneity in soil moisture distribution are intrinsically captured. In addition, macropore and matrix domain settings may change over time based on physical and stochastic observations. The representativity concept allows scalability from the plot scale to the lower mesoscale.
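The diffusive step of such a particle-based water representation can be illustrated with a minimal random-walk sketch. This is a generic illustration, not the authors' model; the function name, diffusivity and time step below are invented for the demonstration.

```python
import numpy as np

def diffuse_particles(pos, D, dt, steps, rng=None):
    """Minimal particle-based (random walk) representation of diffusive water
    movement: each particle takes Gaussian steps with variance 2*D*dt per
    dimension, so the particle ensemble obeys the diffusion equation."""
    rng = np.random.default_rng(rng)
    pos = np.array(pos, dtype=float)
    for _ in range(steps):
        pos += rng.normal(scale=np.sqrt(2.0 * D * dt), size=pos.shape)
    return pos

# 10,000 water "particles" released at a point; D in m^2/s, dt in s
final = diffuse_particles(np.zeros((10_000, 2)), D=1e-6, dt=60.0, steps=100)
```

After 100 steps the mean squared displacement per dimension approaches 2*D*dt*steps, which is the standard consistency check for such a walker scheme.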
Dissecting random and systematic differences between noisy composite data sets.
Diederichs, Kay
2017-04-01
Composite data sets measured on different objects are usually affected by random errors, but may also be influenced by systematic (genuine) differences in the objects themselves, or the experimental conditions. If the individual measurements forming each data set are quantitative and approximately normally distributed, a correlation coefficient is often used to compare data sets. However, the relations between data sets are not obvious from the matrix of pairwise correlations since the numerical value of the correlation coefficient is lowered by both random and systematic differences between the data sets. This work presents a multidimensional scaling analysis of the pairwise correlation coefficients which places data sets into a unit sphere within low-dimensional space, at a position given by their CC* values [as defined by Karplus & Diederichs (2012), Science, 336, 1030-1033] in the radial direction and by their systematic differences in one or more angular directions. This dimensionality reduction can not only be used for classification purposes, but also to derive data-set relations on a continuous scale. Projecting the arrangement of data sets onto the subspace spanned by systematic differences (the surface of a unit sphere) allows, irrespective of the random-error levels, the identification of clusters of closely related data sets. The method gains power with increasing numbers of data sets. It is illustrated with an example from low signal-to-noise ratio image processing, and an application in macromolecular crystallography is shown, but the approach is completely general and thus should be widely applicable.
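The dimensionality-reduction step can be sketched with classical multidimensional scaling applied to a matrix of pairwise correlations. This is a generic illustration of the idea, not the CC*-based radial placement used in the paper; the two-cluster synthetic data are invented for the demonstration.

```python
import numpy as np

def mds_from_correlations(R, dim=2):
    """Classical MDS embedding of data sets from their pairwise correlations.

    Correlations are converted to distances d_ij = sqrt(2 * (1 - r_ij)),
    the squared-distance matrix is double-centered, and the leading
    eigenvectors of the resulting Gram matrix give low-dimensional coordinates.
    """
    D2 = 2.0 * (1.0 - R)                      # squared distances
    n = R.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ D2 @ J                     # Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]           # largest eigenvalues first
    w_top = np.clip(w[idx], 0.0, None)
    return V[:, idx] * np.sqrt(w_top)

rng = np.random.default_rng(0)
# six noisy data sets: two clusters of replicates of two underlying signals
base = rng.normal(size=(2, 200))
data = np.vstack([base[i % 2] + 0.5 * rng.normal(size=200) for i in range(6)])
R = np.corrcoef(data)
X = mds_from_correlations(R, dim=2)
```

In the embedding, data sets sharing an underlying signal land close together, which is the clustering behaviour the abstract describes.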
NASA Astrophysics Data System (ADS)
Abid, Najmul; Mirkhalaf, Mohammad; Barthelat, Francois
2018-03-01
Natural materials such as nacre, collagen, and spider silk are composed of staggered stiff and strong inclusions in a softer matrix. This type of hybrid microstructure results in remarkable combinations of stiffness, strength, and toughness and it now inspires novel classes of high-performance composites. However, the analytical and numerical approaches used to predict and optimize the mechanics of staggered composites often neglect statistical variations and inhomogeneities, which may have significant impacts on modulus, strength, and toughness. Here we present an analysis of localization using small representative volume elements (RVEs) and large scale statistical volume elements (SVEs) based on the discrete element method (DEM). DEM is an efficient numerical method which enabled the evaluation of more than 10,000 microstructures in this study, each including about 5,000 inclusions. The models explore the combined effects of statistics, inclusion arrangement, and interface properties. We find that statistical variations have a negative effect on all properties, in particular on the ductility and energy absorption because randomness precipitates the localization of deformations. However, the results also show that the negative effects of random microstructures can be offset by interfaces with large strain at failure accompanied by strain hardening. More specifically, this quantitative study reveals an optimal range of interface properties where the interfaces are the most effective at delaying localization. These findings show how carefully designed interfaces in bioinspired staggered composites can offset the negative effects of microstructural randomness, which is inherent to most current fabrication methods.
Not all that glitters is RMT in the forecasting of risk of portfolios in the Brazilian stock market
NASA Astrophysics Data System (ADS)
Sandoval, Leonidas; Bortoluzzo, Adriana Bruscato; Venezuela, Maria Kelly
2014-09-01
Using stocks of the Brazilian stock exchange (BM&F-Bovespa), we build portfolios of stocks based on Markowitz's theory and test the predicted and realized risks. This is done using the correlation matrices between stocks, and also using Random Matrix Theory in order to clean such correlation matrices from noise. We also calculate correlation matrices using a regression model in order to remove the effect of common market movements and their cleaned versions using Random Matrix Theory. This is done for years of both low and high volatility of the Brazilian stock market, from 2004 to 2012. The results show that the use of regression to subtract the market effect on returns greatly increases the accuracy of the prediction of risk, and that, although the cleaning of the correlation matrix often leads to portfolios that better predict risks, in periods of high volatility of the market this procedure may fail to do so. The results may be used in the assessment of the true risks when one builds a portfolio of stocks during periods of crisis.
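A common recipe for cleaning a correlation matrix with Random Matrix Theory is eigenvalue clipping at the Marchenko-Pastur edge. The sketch below shows this generic recipe on synthetic returns; it is not necessarily the exact cleaning procedure used in the paper.

```python
import numpy as np

def clean_correlation_matrix(returns):
    """Denoise a return correlation matrix by eigenvalue clipping.

    Eigenvalues below the Marchenko-Pastur upper edge (the pure-noise band
    for T observations of N uncorrelated series) are replaced by their
    average, preserving the trace; the cleaned matrix is then renormalized
    to unit diagonal.
    """
    T, N = returns.shape
    C = np.corrcoef(returns, rowvar=False)
    lam, V = np.linalg.eigh(C)
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2       # Marchenko-Pastur upper edge
    noise = lam < lam_plus
    if noise.any():
        lam[noise] = lam[noise].mean()           # flatten the noise band
    C_clean = (V * lam) @ V.T
    d = np.sqrt(np.diag(C_clean))
    C_clean = C_clean / np.outer(d, d)           # restore unit diagonal
    np.fill_diagonal(C_clean, 1.0)
    return C_clean

rng = np.random.default_rng(1)
T, N = 500, 50
market = rng.normal(size=(T, 1))                 # common "market mode"
returns = 0.3 * market + rng.normal(size=(T, N))
C_clean = clean_correlation_matrix(returns)
```

The large "market" eigenvalue survives the clipping, while the noise band is flattened; the cleaned matrix stays a valid correlation matrix.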
Free Fermions and the Classical Compact Groups
NASA Astrophysics Data System (ADS)
Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil
2018-06-01
There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters, in our case the sparsity of the sensors, to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient, each involving the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
Spectrum of the Wilson Dirac operator at finite lattice spacings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akemann, G.; Damgaard, P. H.; Splittorff, K.
2011-04-15
We consider the effect of discretization errors on the microscopic spectrum of the Wilson Dirac operator using both chiral perturbation theory and chiral random matrix theory. A graded chiral Lagrangian is used to evaluate the microscopic spectral density of the Hermitian Wilson Dirac operator as well as the distribution of the chirality over the real eigenvalues of the Wilson Dirac operator. It is shown that a chiral random matrix theory for the Wilson Dirac operator reproduces the leading zero-momentum terms of Wilson chiral perturbation theory. All results are obtained for a fixed index of the Wilson Dirac operator. The low-energy constants of Wilson chiral perturbation theory are shown to be constrained by the Hermiticity properties of the Wilson Dirac operator.
Anderson Localization in Quark-Gluon Plasma
NASA Astrophysics Data System (ADS)
Kovács, Tamás G.; Pittler, Ferenc
2010-11-01
At low temperature the low end of the QCD Dirac spectrum is well described by chiral random matrix theory. In contrast, at high temperature there is no similar statistical description of the spectrum. We show that at high temperature the lowest part of the spectrum consists of a band of statistically uncorrelated eigenvalues obeying essentially Poisson statistics and the corresponding eigenvectors are extremely localized. Going up in the spectrum the spectral density rapidly increases and the eigenvectors become more and more delocalized. At the same time the spectral statistics gradually crosses over to the bulk statistics expected from the corresponding random matrix ensemble. This phenomenon is reminiscent of Anderson localization in disordered conductors. Our findings are based on staggered Dirac spectra in quenched lattice simulations with the SU(2) gauge group.
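The crossover between Poisson and random-matrix spectral statistics can be quantified with the standard adjacent-gap-ratio diagnostic, sketched below on synthetic spectra. This generic observable is not necessarily the statistic used in the paper, and the spectra here are invented stand-ins, not lattice Dirac spectra.

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean adjacent-gap ratio <r> = <min(s_i, s_{i+1}) / max(s_i, s_{i+1})>,
    a density-robust diagnostic separating Poisson (<r> ~ 0.386) from
    random-matrix (GUE, <r> ~ 0.60) spectral statistics."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

rng = np.random.default_rng(0)
n = 1000
poisson_levels = np.cumsum(rng.exponential(size=n))        # uncorrelated spectrum
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2                                    # GUE-like Hermitian matrix
gue_levels = np.linalg.eigvalsh(H)
r_poisson = mean_gap_ratio(poisson_levels)
r_gue = mean_gap_ratio(gue_levels)
```

The two values cleanly separate, which is the kind of signature used to track the Poisson-to-random-matrix crossover along a spectrum.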
Horizon in Random Matrix Theory, the Hawking Radiation, and Flow of Cold Atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franchini, Fabio; Kravtsov, Vladimir E.
2009-10-16
We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics and unconventional RMEs with possible experimental applications.
NASA Astrophysics Data System (ADS)
Jo, Sang Kyun; Kim, Min Jae; Lim, Kyuseong; Kim, Soo Yong
2018-02-01
We investigated the effect of foreign exchange rate in a correlation analysis of the Korean stock market using both random matrix theory and minimum spanning tree. We collected data sets which were divided into two types of stock price, the original stock price in Korean Won and the price converted into US dollars at contemporary foreign exchange rates. Comparing the random matrix theory based on the two different prices, a few particular sectors exhibited substantial differences while other sectors changed little. The particular sectors were closely related to economic circumstances and the influence of foreign financial markets during that period. The method introduced in this paper offers a way to pinpoint the effect of exchange rate on an emerging stock market.
Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation
Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan
2013-01-01
The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
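The push-flow algorithm itself is specific to the paper, but the family of randomized aggregation schemes it improves on can be sketched with a basic push-sum gossip average. Names, the schedule and parameters below are illustrative.

```python
import random

def push_sum_average(values, rounds=300, seed=0):
    """Randomized gossip (push-sum) averaging: each node repeatedly sends
    half of its (sum, weight) mass to a randomly chosen peer; every ratio
    s_i / w_i converges to the global average. Mass conservation is what
    makes this family robust to randomized communication schedules."""
    random.seed(seed)
    n = len(values)
    s = list(values)            # running sum estimates
    w = [1.0] * n               # running weights
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)          # random communication partner
            s_half, w_half = s[i] / 2, w[i] / 2
            s[i], w[i] = s_half, w_half
            s[j] += s_half
            w[j] += w_half
    return [si / wi for si, wi in zip(s, w)]

estimates = push_sum_average([1.0, 2.0, 3.0, 4.0, 10.0], rounds=300)
```

All five local estimates converge geometrically to the true average of 4.0, without any node ever holding the full data.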
Jackson, Dan; White, Ian R; Riley, Richard D
2013-01-01
Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
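The univariate DerSimonian and Laird method of moments, to which the proposed multivariate estimator reduces in a single dimension, can be sketched as follows; the three-study example is invented for illustration.

```python
def dersimonian_laird(y, v):
    """Univariate DerSimonian-Laird method of moments for random-effects
    meta-analysis: estimate the between-study variance tau^2 from Cochran's
    Q statistic, then pool with combined weights 1 / (v_i + tau^2)."""
    k = len(y)
    w = [1.0 / vi for vi in v]                       # fixed-effect weights
    sw = sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sw
    Q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
    denom = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (k - 1)) / denom)           # truncated at zero
    w_re = [1.0 / (vi + tau2) for vi in v]           # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return mu, se, tau2

# three studies with equal within-study variances: Q equals its expectation,
# so the moment estimate of the between-study variance is exactly zero
mu, se, tau2 = dersimonian_laird([0.1, 0.3, 0.5], [0.04, 0.04, 0.04])
```

With heterogeneous effects the same code returns a positive tau^2 and down-weights precise outlying studies, which is the behaviour the multivariate generalization extends to several outcomes at once.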
Finite-time scaling at the Anderson transition for vibrations in solids
NASA Astrophysics Data System (ADS)
Beltukov, Y. M.; Skipetrov, S. E.
2017-11-01
A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν = 1.57 ± 0.02, in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
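The construction of the dynamical matrix as a product of a sparse random matrix and its transpose can be sketched directly; the size and sparsity below are illustrative, not the paper's values.

```python
import numpy as np

def random_dynamical_matrix(n, fill=0.05, rng=None):
    """Dynamical matrix of the form M = A A^T, where A is a sparse random
    matrix with independent Gaussian nonzero entries. By construction M is
    symmetric positive semi-definite, so all squared eigenfrequencies
    omega^2 are non-negative, as mechanical stability requires."""
    rng = np.random.default_rng(rng)
    mask = rng.random((n, n)) < fill                 # sparse nonzero pattern
    A = rng.normal(size=(n, n)) * mask
    return A @ A.T

M = random_dynamical_matrix(400, fill=0.05, rng=42)
omega2 = np.linalg.eigvalsh(M)    # squared eigenfrequencies, all >= 0
```

The Wishart-like structure guarantees a non-negative spectrum regardless of the realization, which is why this form is a natural model for vibrational problems.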
Approaches to polymer-derived CMC matrices
NASA Technical Reports Server (NTRS)
Hurwitz, Frances I.
1992-01-01
The use of polymeric precursors to ceramics permits the fabrication of large, complex-shaped ceramic matrix composites (CMC's) at temperatures which do not degrade the fiber. Processing equipment and techniques readily available in the resin matrix composite industry can be adapted for CMC fabrication using this approach. Criteria which influence the choice of candidate precursor polymers, the use of fillers, and the role of fiber architecture and ply layup are discussed. Three polymer systems, polycarbosilanes, polysilazanes, and polysilsesquioxanes, are compared as candidate ceramic matrix precursors.
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2008-01-01
A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial of the flutter equations of motion (EOM) is formed. A technique of recasting the matrix-polynomial form of the flutter EOM into a first-order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are the following: first, the "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin, along the real axis, or to accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC Roger's s-plane method and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.
Identification of terrain cover using the optimum polarimetric classifier
NASA Technical Reports Server (NTRS)
Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.
1988-01-01
A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.
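The quadratic distance measure for zero-mean Gaussian feature vectors can be sketched as follows. The two-class example is invented for illustration and is real-valued, whereas fully polarimetric feature vectors are complex; the covariances stand in for the model-derived terrain covariance matrices.

```python
import numpy as np

def quadratic_distance(x, cov):
    """Gaussian quadratic distance for a zero-mean feature vector:
    d(x) = ln det(Sigma) + x^T Sigma^{-1} x (smaller means more likely)."""
    _, logdet = np.linalg.slogdet(cov)
    return logdet + x @ np.linalg.solve(cov, x)

def classify(x, covs):
    """Assign x to the class whose covariance gives the smallest distance."""
    return int(np.argmin([quadratic_distance(x, c) for c in covs]))

rng = np.random.default_rng(3)
covs = [np.diag([1.0, 0.05]), np.diag([0.05, 1.0])]   # two synthetic "terrain" classes
samples = [(rng.multivariate_normal([0.0, 0.0], covs[k]), k)
           for k in (0, 1) for _ in range(250)]
acc = float(np.mean([classify(x, covs) == k for x, k in samples]))
```

Because the two classes differ only in their covariance structure, a mean-based (linear) classifier would be useless here; the quadratic distance exploits exactly the second-order information that polarimetric covariance matrices carry.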
NASA Astrophysics Data System (ADS)
Moraleda, Joaquín; Segurado, Javier; LLorca, Javier
2009-09-01
The in-plane finite deformation of incompressible fiber-reinforced elastomers was studied using computational micromechanics. Composite microstructure was made up of a random and homogeneous dispersion of aligned rigid fibers within a hyperelastic matrix. Different matrices (Neo-Hookean and Gent), fibers (monodisperse or polydisperse, circular or elliptical section) and reinforcement volume fractions (10-40%) were analyzed through the finite element simulation of a representative volume element of the microstructure. A successive remeshing strategy was employed when necessary to reach the large deformation regime in which the evolution of the microstructure influences the effective properties. The simulations provided for the first time "quasi-exact" results of the in-plane finite deformation for this class of composites, which were used to assess the accuracy of the available homogenization estimates for incompressible hyperelastic composites.
Multiscale Modeling of Thermal Conductivity of Polymer/Carbon Nanocomposites
NASA Technical Reports Server (NTRS)
Clancy, Thomas C.; Frankland, Sarah-Jane V.; Hinkley, Jeffrey A.; Gates, Thomas S.
2010-01-01
Molecular dynamics simulation was used to estimate the interfacial thermal (Kapitza) resistance between nanoparticles and amorphous and crystalline polymer matrices. Bulk thermal conductivities of the nanocomposites were then estimated using an established effective medium approach. To study functionalization, oligomeric ethylene-vinyl alcohol copolymers were chemically bonded to a single wall carbon nanotube. The results, in a poly(ethylene-vinyl acetate) matrix, are similar to those obtained previously for grafted linear hydrocarbon chains. To study the effect of noncovalent functionalization, two types of polyethylene matrices, aligned (extended-chain crystalline) and amorphous (random coils), were modeled. Both matrices produced the same interfacial thermal resistance values. Finally, functionalization of edges and faces of plate-like graphite nanoparticles was found to be only modestly effective in reducing the interfacial thermal resistance and improving the composite thermal conductivity.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
2011-01-01
Background Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. Methods The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. Results The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. Conclusions The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. 
This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance. PMID:21599963
Quantifying economic fluctuations by adapting methods of statistical physics
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki
2001-09-01
The first focus of this thesis is the investigation of cross-correlations between the price fluctuations of different stocks using the conceptual framework of random matrix theory (RMT), developed in physics to describe the statistical properties of energy-level spectra of complex nuclei. RMT makes predictions for the statistical properties of matrices that are universal, i.e., do not depend on the interactions between the elements comprising the system. In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system so this framework is of potential value if applied to economic systems. This thesis compares the statistics of cross-correlation matrix
Generation of physical random numbers by using homodyne detection
NASA Astrophysics Data System (ADS)
Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro
2016-10-01
Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
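Toeplitz-matrix hashing can be sketched over GF(2) as follows; the block sizes and seed below are illustrative, not the experiment's parameters. The linearity of the map over GF(2) is a convenient sanity check.

```python
import numpy as np

def toeplitz_hash(bits, m, seed_bits):
    """Two-universal hashing of an n-bit string down to m bits using a
    binary Toeplitz matrix defined by m + n - 1 seed bits: y = T x over
    GF(2). Constant diagonals mean the whole m-by-n matrix is determined
    by a single seed vector."""
    n = len(bits)
    assert len(seed_bits) == m + n - 1
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    T = seed_bits[i - j + n - 1]          # T[i, j] depends only on i - j
    return (T @ bits) % 2

rng = np.random.default_rng(7)
n, m = 64, 16                             # compress 64 raw bits to 16 output bits
seed = rng.integers(0, 2, size=m + n - 1)
x = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
hx = toeplitz_hash(x, m, seed)
hy = toeplitz_hash(y, m, seed)
```

Because the hash is linear over GF(2), hashing the XOR of two inputs equals the XOR of their hashes, which makes the construction easy to verify and to implement in hardware.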
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
NASA Astrophysics Data System (ADS)
Ahlfeld, R.; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. 
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
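The core moment-to-quadrature step, computing Gaussian quadrature nodes and weights directly from raw moments via a Cholesky factorization of the Hankel moment matrix and a Jacobi eigenproblem (Golub-Welsch), can be sketched as follows. Using the moments of the standard normal distribution recovers the known three-point Gauss-Hermite rule; this is a generic textbook construction, not the authors' code.

```python
import numpy as np

def gauss_from_moments(moments, n):
    """n-point Gaussian quadrature from raw moments m_0..m_{2n}:
    Cholesky-factor the Hankel moment matrix, read off the three-term
    recurrence coefficients, and solve the Jacobi eigenproblem
    (Golub-Welsch). Requires the moment matrix to be positive definite."""
    H = np.array([[moments[i + j] for j in range(n + 1)]
                  for i in range(n + 1)], dtype=float)
    R = np.linalg.cholesky(H).T                    # H = R^T R, R upper triangular
    a = np.zeros(n)                                # recurrence coefficients alpha_k
    b = np.zeros(n)                                # recurrence coefficients beta_k
    for k in range(n):
        a[k] = R[k, k + 1] / R[k, k]
        if k > 0:
            a[k] -= R[k - 1, k] / R[k - 1, k - 1]
            b[k] = R[k, k] / R[k - 1, k - 1]
    J = np.diag(a) + np.diag(b[1:], 1) + np.diag(b[1:], -1)   # Jacobi matrix
    nodes, V = np.linalg.eigh(J)
    weights = moments[0] * V[0, :] ** 2            # first eigenvector components
    return nodes, weights

# raw moments of the standard normal: 1, 0, 1, 0, 3, 0, 15
nodes, weights = gauss_from_moments([1, 0, 1, 0, 3, 0, 15], 3)
```

The resulting rule has nodes at -sqrt(3), 0, sqrt(3) with weights 1/6, 2/3, 1/6, i.e. the probabilists' three-point Gauss-Hermite rule, confirming that only a handful of matrix operations on the moment matrix are needed.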
NASA Astrophysics Data System (ADS)
Bouhaj, M.; von Estorff, O.; Peiffer, A.
2017-09-01
In the application of Statistical Energy Analysis (SEA) to complex assembled structures, a purely predictive model often exhibits errors. These errors are mainly due to a lack of accurate modelling of the power transmission mechanism described through the Coupling Loss Factors (CLF). Experimental SEA (ESEA) is practically used by the automotive and aerospace industry to verify and update the model or to derive the CLFs for use in an SEA predictive model when analytical estimates cannot be made. This work is particularly motivated by the lack of procedures that allow an estimate to be made of the variance and confidence intervals of the statistical quantities when using the ESEA technique. The aim of this paper is to introduce procedures enabling a statistical description of measured power input, vibration energies and the derived SEA parameters. Particular emphasis is placed on the identification of structural CLFs of complex built-up structures, comparing different methods. By adopting a Stochastic Energy Model (SEM), the ensemble average in ESEA is also addressed. For this purpose, expressions are obtained to randomly perturb the energy matrix elements and generate individual samples for the Monte Carlo (MC) technique applied to derive the ensemble-averaged CLF. From results of ESEA tests conducted on an aircraft fuselage section, the SEM approach yields better CLF estimates than classical matrix inversion methods. The expected range of CLF values and the synthesized energy are used as quality criteria of the matrix inversion, allowing critical SEA subsystems to be identified, which might require a more refined statistical description of the excitation and the response fields. Moreover, the impact of the variance of the normalized vibration energy on the uncertainty of the derived CLFs is outlined.
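The classical matrix inversion step of ESEA, against which the SEM approach is compared, can be sketched on synthetic data. The loss-factor values and the sign convention below are illustrative (conventions for the loss-factor matrix vary between texts), and the energies are generated to be exactly consistent with the model so the inversion recovers the input.

```python
import numpy as np

def esea_loss_factors(P, E, omega):
    """Classical experimental-SEA matrix inversion: with column j of E
    holding the measured subsystem energies when unit power is injected
    into subsystem j only, the SEA power balance P = omega * L @ E is
    inverted for the loss-factor matrix L (coupling terms off-diagonal,
    total loss factors on the diagonal)."""
    return (P @ np.linalg.inv(E)) / omega

omega = 2 * np.pi * 500.0                    # band centre frequency, rad/s
# synthetic 3-subsystem loss-factor matrix (diagonally dominant, invertible)
L_true = np.array([[ 0.030, -0.004, -0.001],
                   [-0.005,  0.025, -0.003],
                   [-0.002, -0.006,  0.020]])
P = np.eye(3)                                # unit input power, one subsystem per test
E = np.linalg.solve(omega * L_true, P)       # energies consistent with the model
L_est = esea_loss_factors(P, E, omega)
```

With noise-free synthetic energies the inversion is exact; the abstract's point is precisely that with measured, noisy energies this direct inversion becomes fragile, motivating the stochastic perturbation and Monte Carlo treatment.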
Comprehensive Thematic T-Matrix Reference Database: A 2015-2017 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas
2017-01-01
The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.
Topological Distances Between Brain Networks
Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.
2018-01-01
Many existing brain network distances are based on matrix norms. Such element-wise differences may fail to capture underlying topological differences, and matrix norms are sensitive to outliers: a few extreme edge weights may severely affect the distance. It is therefore necessary to develop network distances that recognize topology. In this paper, we introduce the Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. The GH-distance is often used in persistent-homology-based brain network models. The superior performance of the KS-distance is demonstrated against matrix norms and the GH-distance in random network simulations with known ground truths. The KS-distance is then applied to characterize a multimodal MRI and DTI study of maltreated children.
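As a rough illustration of a KS-type network distance, the sketch below takes the maximum gap between the empirical CDFs of the edge weights of two weighted networks. This is a simplified stand-in (the paper's KS-distance is defined over topological summaries of a filtration, not raw edge weights), and the networks here are synthetic:

```python
import numpy as np

def ks_network_distance(A, B):
    """KS-type distance: maximum gap between the empirical CDFs of the
    upper-triangular edge weights of two weighted adjacency matrices."""
    iu = np.triu_indices_from(A, k=1)
    w1, w2 = np.sort(A[iu]), np.sort(B[iu])
    grid = np.union1d(w1, w2)
    cdf1 = np.searchsorted(w1, grid, side="right") / w1.size
    cdf2 = np.searchsorted(w2, grid, side="right") / w2.size
    return np.max(np.abs(cdf1 - cdf2))

rng = np.random.default_rng(1)
X = rng.random((10, 10)); A = (X + X.T) / 2       # symmetric weights
Y = rng.random((10, 10)) + 0.5; B = (Y + Y.T) / 2  # shifted weights
d_same = ks_network_distance(A, A)  # identical networks -> distance 0
d_diff = ks_network_distance(A, B)  # shifted networks -> distance > 0
```

Unlike a matrix norm, this statistic depends only on the weight distributions, so a single extreme edge moves it by at most one empirical-CDF step.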
Random lasing in dye-doped polymer dispersed liquid crystal film
NASA Astrophysics Data System (ADS)
Wu, Rina; Shi, Rui-xin; Wu, Xiaojiao; Wu, Jie; Dai, Qin
2016-09-01
A dye-doped polymer-dispersed liquid crystal film was designed and fabricated, and its random lasing action was studied. A mixture of laser dye, nematic liquid crystal, chiral dopant, and PVA was used to prepare the film by means of microcapsules. Scanning electron microscopy showed that most liquid crystal droplets in the polymer matrix ranged from 30 μm to 40 μm in size; i.e., the droplets were small. Under optical excitation by a frequency-doubled 532-nm Nd:YAG laser, multiple discrete, sharp random-lasing peaks were measured in the range of 575-590 nm. The line-width of the lasing peaks was 0.2 nm and the threshold of the random lasing was 9 mJ. Upon heating, the emission peaks of the random lasing disappeared. By examining the energy distribution of the emitted light spot, the radiation mechanism was confirmed to be random lasing; this mechanism was then analyzed and discussed. The experimental results indicate that the size of the liquid crystal droplets is the decisive factor influencing the lasing mechanism. When the droplets in the polymer matrix are small, surface anchoring can be ignored, which favors multiple scattering: the transmission path of photons resembles that in a ring cavity, providing the feedback needed for random lasing output. Project supported by the National Natural Science Foundation of China (Grant No. 61378042), the Colleges and Universities in Liaoning Province Outstanding Young Scholars Growth Plans, China (Grant No. LJQ2015093), and the Shenyang Ligong University Laser and Optical Information of Liaoning Province Key Laboratory Open Funds, China.
A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.
We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation but, unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
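A minimal numerical sketch of lumped-parameter fracture/matrix interflow is given below, using the classical linear Warren-Root form q = σ(k_m/μ)(p_f − p_m) rather than the paper's nonlinear generalization. All parameter values, the lumped storativity term, and the explicit Euler scheme are hypothetical choices for illustration only:

```python
# Illustrative parameters (hypothetical values, not from the paper)
sigma = 1.0    # shape factor of the matrix blocks, 1/m^2
k_m = 1e-15    # matrix permeability, m^2
mu = 1e-3      # liquid viscosity, Pa*s
phi_c = 1e-9   # lumped matrix storativity, 1/Pa (assumed)
p_f = 1.0e5    # fracture liquid pressure, Pa (held fixed)

def warren_root_rate(p_m):
    """Warren-Root interflow: imbibition rate per unit bulk volume is
    proportional to the fracture/matrix pressure difference."""
    return sigma * k_m / mu * (p_f - p_m)

# Lumped matrix block: phi_c * dp_m/dt = q(p_m), integrated by explicit
# Euler; the interflow acts as a source/sink term for the fracture side.
p_m, dt = 0.0, 10.0
history = [p_m]
for _ in range(5000):
    p_m += dt * warren_root_rate(p_m) / phi_c
    history.append(p_m)
```

The matrix pressure relaxes exponentially toward the fracture pressure; the paper's nonlinear interflow equation replaces this single-rate behavior to stay accurate in both early- and late-time regimes.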
MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficient vectors. To address these challenges, this study introduces MATIN, a random-network-coding-based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method that guarantees no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. Results obtained from simulation using OMNET++ show that it substantially outperforms RNC with Gauss-Jordan elimination by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay. PMID:23940530
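For illustration, the baseline that MATIN improves on — random linear network coding decoded by Gauss-Jordan elimination — can be sketched over GF(2). The field choice, block sizes, and function names are assumptions made for this example and are not MATIN's actual design (MATIN works with structured coefficient matrices precisely to avoid this elimination cost):

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(blocks, C):
    """Random linear combination over GF(2): each encoded packet is the
    XOR of the source blocks selected by one row of C."""
    return C @ blocks % 2

def gauss_jordan_gf2(C, Y):
    """Decode by Gauss-Jordan elimination over GF(2) on the augmented
    matrix [C | Y] -- the baseline cost MATIN is designed to avoid."""
    A = np.concatenate([C, Y], axis=1) % 2
    n = C.shape[1]
    for col in range(n):
        pivot = next(r for r in range(col, A.shape[0]) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]      # bring pivot into place
        for r in range(A.shape[0]):
            if r != col and A[r, col]:
                A[r] ^= A[col]                 # clear the column via XOR
    return A[:n, n:]                           # [I | decoded blocks]

n = 4
blocks = rng.integers(0, 2, size=(n, 16))      # n source blocks, 16 bits
C = rng.integers(0, 2, size=(n, n))            # random coefficient matrix
while round(np.linalg.det(C)) % 2 == 0:        # retry until invertible mod 2
    C = rng.integers(0, 2, size=(n, n))
Y = encode(blocks, C)
decoded = gauss_jordan_gf2(C, Y)               # recovers the source blocks
```

Note that each packet here carries a full n-entry coefficient row; MATIN's contribution is to shrink that header to a single entry while keeping the coefficient matrix free of linear dependencies.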