Large-deviation theory for diluted Wishart random matrices
NASA Astrophysics Data System (ADS)
Castillo, Isaac Pérez; Metz, Fernando L.
2018-03-01
Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economics. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number I_N(x) of eigenvalues smaller than x ∈ R_+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-N Ψ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
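As a numerical companion to the quantities discussed above, the index I_N(x) can be estimated by direct diagonalization; the dilution scheme, sizes, and function names below are illustrative assumptions, not the paper's replica calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

def diluted_wishart_index(N, M, c, x, samples=200):
    """Monte Carlo estimate of the mean and variance of I_N(x), the
    number of eigenvalues of a diluted Wishart-type matrix below x.
    Entries of the N x M data matrix are Gaussian, kept with
    probability c/M (sparse regime; illustrative choice)."""
    counts = []
    for _ in range(samples):
        mask = rng.random((N, M)) < c / M        # random dilution
        X = rng.standard_normal((N, M)) * mask
        W = X @ X.T                              # Wishart-type matrix
        eigs = np.linalg.eigvalsh(W)
        counts.append(np.sum(eigs < x))
    counts = np.array(counts)
    return counts.mean(), counts.var()

mean_I, var_I = diluted_wishart_index(N=50, M=100, c=4.0, x=1.0)
```

Sampling over many realizations in this way is exactly the numerical check the abstract compares against its analytical cumulants.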
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, with a key that is easily distributed, stored, or memorized. The input image is divided into four blocks to be compressed and encrypted; the pixels of two adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with the logistic map. The random matrices used in random pixel exchanging are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
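A minimal sketch of this key-driven construction, assuming a partial circulant sensing matrix whose first row is generated by the logistic map (the parameter names and the mapping to [-1, 1] are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np
from scipy.linalg import circulant

def logistic_sequence(x0, mu, n, burn=100):
    """Iterate the logistic map x -> mu*x*(1-x); the key is just (x0, mu)."""
    x = x0
    for _ in range(burn):                 # discard transient iterates
        x = mu * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = x
    return out

def measurement_matrix(m, n, x0=0.37, mu=3.99):
    """Partial circulant measurement matrix: the first row is driven by
    the logistic map, and the first m rows serve as measurements."""
    row = 2.0 * logistic_sequence(x0, mu, n) - 1.0   # map (0,1) to (-1,1)
    return circulant(row)[:m, :] / np.sqrt(m)

Phi = measurement_matrix(64, 256)
```

The point of the construction is that only the two scalars (x0, mu) need to be shared as the key, rather than the full m × n matrix.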
Fidelity under isospectral perturbations: a random matrix study
NASA Astrophysics Data System (ADS)
Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.
2013-07-01
The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group can be generated from Hermitian matrices, we can take the ones generated by the Gaussian unitary ensemble with a small parameter as small perturbations. Similarly, the transformations generated from orthogonal matrices by antisymmetric Hermitian matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself from a classical random matrix ensemble as well, then we obtain solutions in terms of form factors in the limit of large matrices.
BCH codes for large IC random-access memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1983-01-01
In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Zohren, S.
2018-03-01
In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections of the above distributions, which is important for practical applications, given the slow convergence. Another interesting aspect of this work is the fact that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
NASA Astrophysics Data System (ADS)
Livan, Giacomo; Alfarano, Simone; Scalas, Enrico
2011-07-01
We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
Random matrix approach to cross correlations in financial data
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices
Large leptonic Dirac CP phase from broken democracy with random perturbations
NASA Astrophysics Data System (ADS)
Ge, Shao-Feng; Kusenko, Alexander; Yanagida, Tsutomu T.
2018-06-01
A large value of the leptonic Dirac CP phase can arise from broken democracy, where the mass matrices are democratic up to small random perturbations. Such perturbations are a natural consequence of broken residual S3 symmetries that dictate the democratic mass matrices at leading order. With random perturbations, the leptonic Dirac CP phase has a higher probability of attaining a value around ±π/2. Compared with the anarchy model, broken democracy can benefit from residual S3 symmetries, and it can produce much better, realistic predictions for the mass hierarchy, mixing angles, and Dirac CP phase in both quark and lepton sectors. Our approach provides a general framework for a class of models in which a residual symmetry determines the general features at leading order, and where, in the absence of other fundamental principles, the symmetry breaking appears in the form of random perturbations.
An efficient parallel-processing method for transposing large matrices in place.
Portnoff, M R
1999-01-01
We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
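A simplified single-threaded sketch of the square-matrix case described above, swapping cache-sized tile pairs so each tile is visited once (the tile size and helper name are illustrative; the paper's algorithm also handles rectangular matrices and parallel execution):

```python
import numpy as np

def transpose_square_inplace(a, block=64):
    """Transpose a square matrix in place by swapping tile pairs.
    Each tile is small enough to fit in cache, and off-diagonal tiles
    (i, j) and (j, i) are exchanged in a single pass."""
    n = a.shape[0]
    for i in range(0, n, block):
        for j in range(i, n, block):       # upper-triangular tile sweep
            bi = min(block, n - i)
            bj = min(block, n - j)
            if i == j:
                # diagonal tile: transpose within the tile
                a[i:i+bi, j:j+bj] = a[i:i+bi, j:j+bj].T.copy()
            else:
                # off-diagonal tiles: swap the pair, transposing each
                tmp = a[i:i+bi, j:j+bj].copy()
                a[i:i+bi, j:j+bj] = a[j:j+bj, i:i+bi].T
                a[j:j+bj, i:i+bi] = tmp.T
    return a
```

Because distinct tile pairs touch disjoint memory, the outer loop iterations could be handed to independent workers, which is the parallelism the abstract exploits.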
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors
NASA Astrophysics Data System (ADS)
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b^2/N = α^2/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
Virial expansion for almost diagonal random matrices
NASA Astrophysics Data System (ADS)
Yevtushenko, Oleg; Kravtsov, Vladimir E.
2003-08-01
Energy level statistics of Hermitian random matrices Ĥ with independent Gaussian random entries H_ij (i ≥ j) is studied for a generic ensemble of almost diagonal random matrices with ⟨|H_ii|²⟩ ~ 1 and ⟨|H_i≠j|²⟩ ≪ 1.
A comparison of SuperLU solvers on the intel MIC architecture
NASA Astrophysics Data System (ADS)
Tuncel, Mehmet; Duran, Ahmet; Celebi, M. Serdar; Akaydin, Bora; Topkaya, Figen O.
2016-10-01
In many science and engineering applications, problems may result in solving a sparse linear system AX = B. For example, SuperLU_MCDT, a linear solver, was used for the large penta-diagonal matrices of 2D problems and hepta-diagonal matrices of 3D problems coming from incompressible blood flow simulation (see [1]). It is important to test the status and potential improvements of state-of-the-art solvers on new technologies. In this work, sequential, multithreaded and distributed versions of SuperLU solvers (see [2]) are examined on Intel Xeon Phi coprocessors using the offload programming model at the EURORA cluster of CINECA in Italy. We consider a portfolio of test matrices containing patterned matrices from UFMM ([3]) and matrices with randomly located nonzeros. This architecture can benefit from high parallelism and large vectors. We find that the sequential SuperLU benefited from up to 45% performance improvement with offload programming, depending on the sparse matrix type and the size of transferred and processed data.
SMERFS: Stochastic Markov Evaluation of Random Fields on the Sphere
NASA Astrophysics Data System (ADS)
Creasey, Peter; Lang, Annika
2018-04-01
SMERFS (Stochastic Markov Evaluation of Random Fields on the Sphere) creates large realizations of random fields on the sphere. It uses a fast algorithm based on Markov properties and fast Fourier transforms in 1D that generates samples on an n × n grid in O(n² log n) and efficiently derives the necessary conditional covariance matrices.
Partial transpose of random quantum states: Exact formulas and meanders
NASA Astrophysics Data System (ADS)
Fukuda, Motohisa; Śniady, Piotr
2013-04-01
We investigate the asymptotic behavior of the empirical eigenvalue distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how the dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show a large-deviation bound for the latter.
CMV matrices in random matrix theory and integrable systems: a survey
NASA Astrophysics Data System (ADS)
Nenciu, Irina
2006-07-01
We present a survey of recent results concerning a remarkable class of unitary matrices, the CMV matrices. We are particularly interested in the role they play in the theory of random matrices and integrable systems. Throughout the paper we also emphasize the analogies and connections to Jacobi matrices.
Noisy covariance matrices and portfolio optimization
NASA Astrophysics Data System (ADS)
Pafka, S.; Kondor, I.
2002-05-01
According to recent findings [Bouchaud et al., Stanley et al.], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [Bouchaud et al.], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in [Bouchaud et al., Stanley et al.] lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
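The classical portfolio experiment described here can be sketched numerically; the one-factor "true" covariance, the sample sizes, and the variable names below are illustrative assumptions, not the authors' exact simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def min_variance_weights(cov):
    """Minimize w' C w subject to sum(w) = 1 (the classical linear
    constraint); closed form w = C^{-1} 1 / (1' C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Assumed "true" covariance: one strong market factor plus idiosyncratic noise
N, T = 50, 500                       # portfolio size, length of time series
beta = rng.normal(1.0, 0.2, N)
true_cov = np.outer(beta, beta) * 0.04 + np.diag(rng.uniform(0.05, 0.15, N))

# Noisy estimate from a finite sample of simulated returns
L = np.linalg.cholesky(true_cov)
returns = (L @ rng.standard_normal((N, T))).T
sample_cov = np.cov(returns, rowvar=False)

w_true = min_variance_weights(true_cov)
w_noisy = min_variance_weights(sample_cov)

# Displacement of the solution, measured in true variance as in the abstract
var_true = w_true @ true_cov @ w_true
var_noisy = w_noisy @ true_cov @ w_noisy
displacement = var_noisy / var_true - 1.0
```

Since w_true is optimal for the true covariance, the displacement is nonnegative by construction; its typical size as N/T varies is the quantity the abstract reports as 5 to 15%.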
Time series, correlation matrices and random matrix models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinayak; Seligman, Thomas H.
2014-01-08
In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis or a minimum-information hypothesis for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time effect leads to a random or stochastic component in these time series. As a consequence, random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
On Fluctuations of Eigenvalues of Random Band Matrices
NASA Astrophysics Data System (ADS)
Shcherbina, M.
2015-10-01
We consider the fluctuations of linear eigenvalue statistics of random band matrices with i.i.d. entries possessing a sufficiently high moment, where the profile function u has finite support, so that M has only O(b) nonzero diagonals. The parameter b (called the bandwidth) is assumed to grow with n. Without any additional assumptions on the growth of b we prove a CLT for linear eigenvalue statistics for a rather wide class of test functions. Thus we improve and generalize the results of the previous papers (Jana et al., arXiv:1412.2445; Li et al., Random Matrices 2:04, 2013), where the CLT was proven under a stronger assumption on b. Moreover, we develop a method which allows one to prove automatically the CLT for linear eigenvalue statistics of smooth test functions for almost all classical models of random matrix theory: deformed Wigner and sample covariance matrices, sparse matrices, diluted random matrices, matrices with heavy tails, etc.
Temporal evolution of financial-market correlations.
Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S
2011-08-01
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
The feasibility and stability of large complex biological networks: a random matrix approach.
Stone, Lewi
2018-05-29
In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law" since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the "circular law" does not apply for ecological systems in which density-dependence operates (i.e., where a species' growth is determined by its density). Instead one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience of a system, depended on the minimum equilibrium population.
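A minimal numerical illustration of the claim that S = DA inherits stability from A when the equilibria are feasible (D > 0); the interaction strength, self-regulation, and equilibrium distribution below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def is_locally_stable(M):
    """Local stability: every eigenvalue has negative real part."""
    return bool(np.max(np.linalg.eigvals(M).real) < 0.0)

S_count = 50                                   # number of species
# Random interaction matrix with strong self-regulation, so A is stable
A = rng.normal(0.0, 0.1 / np.sqrt(S_count), (S_count, S_count))
np.fill_diagonal(A, -1.0)
# Feasible equilibrium abundances, all positive
D = np.diag(rng.uniform(0.1, 1.0, S_count))

stable_A = is_locally_stable(A)
stable_S = is_locally_stable(D @ A)            # community matrix S = DA
```

With these parameters both checks pass, matching the result that feasibility plus stability of A implies stability of the community matrix.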
Spectral density of mixtures of random density matrices for qubits
NASA Astrophysics Data System (ADS)
Zhang, Lin; Wang, Jiamei; Chen, Zhihua
2018-06-01
We derive the spectral density of the equiprobable mixture of two random density matrices of a two-level quantum system. We also work out the spectral density of mixture under the so-called quantum addition rule. We use the spectral densities to calculate the average entropy of mixtures of random density matrices, and show that the average entropy of the arithmetic-mean-state of n qubit density matrices randomly chosen from the Hilbert-Schmidt ensemble is never decreasing with the number n. We also get the exact value of the average squared fidelity. Some conjectures and open problems related to von Neumann entropy are also proposed.
NASA Astrophysics Data System (ADS)
Shy, L. Y.; Eichinger, B. E.
1989-05-01
Computer simulations of the formation of trifunctional and tetrafunctional polydimethylsiloxane networks that are crosslinked by condensation of telechelic chains with multifunctional crosslinking agents have been carried out on systems containing up to 1.05×10^6 chains. Eigenvalue spectra of Kirchhoff matrices for these networks have been evaluated at two levels of approximation: (1) inclusion of all midchain modes, and (2) suppression of midchain modes. By use of the recursion method of Haydock and Nex, we have been able to effectively diagonalize matrices with 730 498 rows and columns without actually constructing matrices of this size. The small eigenvalues have been computed by use of the Lanczos algorithm. We demonstrate the following results: (1) The smallest eigenvalues (with chain modes suppressed) vary as μ^(-2/3) for sufficiently large μ, where μ is the number of junctions in the network; (2) the eigenvalue spectra of the Kirchhoff matrices are well described by McKay's theory for random regular graphs in the range of the larger eigenvalues, but there are significant departures in the region of small eigenvalues where computed spectra have many more small eigenvalues than random regular graphs; (3) the smallest eigenvalues vary as n^(-1.78) where n is the number of Rouse beads in the chains that comprise the network. Computations are done for both monodisperse and polydisperse chain length distributions. Large eigenvalues associated with localized motion of the junctions are found as predicted by theory. The relationship between the small eigenvalues and the equilibrium modulus of elasticity is discussed, as is the relationship between viscoelasticity and the band edge of the spectrum.
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n < N samples of an N-dimensional vector x_0 using an n × N matrix A, obtaining undersampled measurements y = Ax_0. For random matrices with independent standard Gaussian entries, it is known that, when x_0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram (with undersampling fraction δ = n/N and sparsity fraction ρ = k/n), convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a given set for four different sets, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
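The convex optimization step referred to above is basis pursuit, min ||x||_1 subject to Ax = y; a minimal sketch with a Gaussian sensing matrix (the dimensions, sparsity level, and solver choice are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program by
    splitting x = u - v with u, v >= 0 and minimizing sum(u + v)."""
    n_meas, N = A.shape
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]

N, n, k = 80, 40, 5                  # ambient dimension, measurements, sparsity
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x0)
recovered = np.allclose(x_hat, x0, atol=1e-6)
```

Here δ = n/N = 0.5 and ρ = k/n = 0.125 lie well inside the Gaussian success region, so recovery typically succeeds; sweeping (δ, ρ) over a grid and recording the success rate is how the phase diagram in the abstract is mapped out.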
On the Wigner law in dilute random matrices
NASA Astrophysics Data System (ADS)
Khorunzhy, A.; Rodgers, G. J.
1998-12-01
We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
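A quick numerical illustration of this stability of the semicircle law under dilution, with an assumed Gaussian ensemble and keep-probability (the normalization is chosen so the limiting support stays [-2, 2]):

```python
import numpy as np

rng = np.random.default_rng(4)

def diluted_wigner_eigs(N, p):
    """Symmetric matrix with i.i.d. Gaussian entries, each kept with
    probability p, rescaled by sqrt(p*N) so the kept entries have
    effective variance 1/N and the bulk converges to the semicircle."""
    G = rng.standard_normal((N, N))
    mask = rng.random((N, N)) < p                 # random dilution
    H = np.triu(G * mask)
    H = H + H.T - np.diag(np.diag(H))             # symmetrize
    return np.linalg.eigvalsh(H / np.sqrt(p * N))

eigs = diluted_wigner_eigs(N=800, p=0.2)
# The bulk should fall within (a small margin of) the semicircle support [-2, 2]
frac_in_support = float(np.mean(np.abs(eigs) <= 2.1))
```

For pN large, as here, almost all eigenvalues land inside the semicircle support, consistent with the stability conjecture discussed in the abstract.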
Evolutionary Games with Randomly Changing Payoff Matrices
NASA Astrophysics Data System (ADS)
Yakushkina, Tatiana; Saakian, David B.; Bratus, Alexander; Hu, Chin-Kun
2015-06-01
Evolutionary games are used in various fields stretching from economics to biology. In most of these games a constant payoff matrix is assumed, although some works also consider dynamic payoff matrices. In this article we assume a possibility of switching the system between two regimes with different sets of payoff matrices. Potentially such a model can qualitatively describe the development of bacterial or cancer cells with a mutator gene present. A finite population evolutionary game is studied. The model describes the simplest version of annealed disorder in the payoff matrix and is exactly solvable in the large-population limit. We analyze the dynamics of the model, and derive the equations for both the maximum and the variance of the distribution using the Hamilton-Jacobi equation formalism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter
2016-06-30
In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
User-Friendly Tools for Random Matrices: An Introduction
2012-12-03
Fragments of lecture slides (Joel A. Tropp, NIPS, 3 December 2012) covering matrix concentration inequalities (Oliveira 2010; Tropp 2011; Mackey et al. 2012) and the Halko-Martinsson-Tropp randomized range finder (SIAM Rev. 2011): form the sample matrix Y = AΩ, then construct an orthonormal basis Q for the range of Y.
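The randomized range finder referenced in these slide fragments (form Y = AΩ, then orthonormalize) can be sketched as follows; the matrix sizes, rank, and oversampling parameter are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def randomized_range_finder(A, k, oversample=10):
    """Randomized range finder (Halko-Martinsson-Tropp style):
    1. draw a Gaussian test matrix Omega,
    2. form the sample matrix Y = A @ Omega,
    3. orthonormalize Y to get a basis Q for the approximate range of A."""
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    Q, _ = np.linalg.qr(Y)
    return Q

# Exactly rank-5 test matrix, so Q should capture its range essentially exactly
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
Q = randomized_range_finder(A, k=5)
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
```

Because the columns of Y lie in the column space of A, the projection Q Q^T A reproduces A up to floating-point error whenever k + oversample is at least the true rank.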
Random matrices and condensation into multiple states
NASA Astrophysics Data System (ADS)
Sadeghi, Sina; Engel, Andreas
2018-03-01
In the present work, we employ methods from the statistical mechanics of disordered systems to investigate static properties of condensation into multiple states in a general framework. We aim to show how typical properties of random interaction matrices play a vital role in manifesting the statistics of condensate states. In particular, an analytical expression for the fraction of condensate states in the thermodynamic limit is provided that confirms the result of the mean number of coexisting species in a random tournament game. We also study the interplay between the condensation problem and zero-sum games with correlated random payoff matrices.
Eigenvalue density of cross-correlations in Sri Lankan financial market
NASA Astrophysics Data System (ADS)
Nilantha, K. G. D. R.; Ranasinghe; Malmini, P. K. C.
2007-05-01
We apply the universal properties of the Gaussian orthogonal ensemble (GOE) of random matrices, namely the spectral properties, the distribution of eigenvalues, and the eigenvalue spacings predicted by random matrix theory (RMT), to compare cross-correlation matrix estimators from emerging market data. The daily stock prices of the Sri Lankan All Share Price Index and Milanka Price Index from August 2004 to March 2005 were analyzed. Most eigenvalues in the spectrum of the cross-correlation matrix of stock price changes agree with the universal predictions of RMT. We find that the cross-correlation matrix satisfies the universal properties of the GOE of real symmetric random matrices. The eigenvalue distribution follows the RMT predictions in the bulk, but there are some deviations at the large eigenvalues. The nearest-neighbor and next-nearest-neighbor spacings of the eigenvalues were examined and found to follow the GOE universality. Comparison with RMT predictions for matrices with deterministic correlations shows that the eigenvalues arising from deterministic correlations are repelled from the bulk of the distribution.
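As an illustration of this kind of RMT comparison, the spacing-ratio statistic (which needs no spectral unfolding) distinguishes GOE-like level repulsion from uncorrelated spectra; a minimal sketch, with a synthetic GOE matrix standing in for the empirical correlation matrix:

```python
import numpy as np

def mean_spacing_ratio(eigs):
    """Mean of r = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) over consecutive
    level spacings s_i; roughly 0.53 for GOE and 0.39 for uncorrelated
    (Poisson) levels, with no unfolding required."""
    s = np.diff(np.sort(eigs))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(1)
N = 1000
M = rng.standard_normal((N, N))
r_goe = mean_spacing_ratio(np.linalg.eigvalsh((M + M.T) / 2))  # GOE levels
r_poisson = mean_spacing_ratio(rng.uniform(size=N))            # iid levels
print(r_goe, r_poisson)
```

The same statistic applied to an empirical cross-correlation matrix indicates whether its bulk behaves like the GOE, as the paper reports for the Sri Lankan data.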
Bayes linear covariance matrix adjustment
NASA Astrophysics Data System (ADS)
Wilkinson, Darren J.
1995-12-01
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
Optimal neighborhood indexing for protein similarity search.
Peterlongo, Pierre; Noé, Laurent; Lavenier, Dominique; Nguyen, Van Hoa; Kucherov, Gregory; Giraud, Mathieu
2008-12-16
Similarity inference, one of the main bioinformatics tasks, has to face an exponential growth of the biological data. A classical approach used to cope with this data flow involves heuristics with large seed indexes. In order to speed up this technique, the index can be enhanced by storing additional information to limit the number of random memory accesses. However, this improvement leads to a larger index that may become a bottleneck. In the case of protein similarity search, we propose to decrease the index size by reducing the amino acid alphabet. The paper presents two main contributions. First, we show that an optimal neighborhood indexing combining an alphabet reduction and a longer neighborhood leads to a 35% reduction in the memory involved in the process, without sacrificing the quality of the results or the computational time. Second, our approach led us to develop a new kind of substitution score matrices and their associated e-value parameters. In contrast to usual matrices, these matrices are rectangular since they compare amino acid groups from different alphabets. We describe the method used for computing these matrices and provide some typical examples that can be used in such comparisons. Supplementary data can be found on the website http://bioinfo.lifl.fr/reblosum. We propose a practical reduction of the neighborhood index size that does not negatively affect the performance of large-scale search in protein sequences. Such an index can be used in any study involving large protein data. Moreover, rectangular substitution score matrices and their associated statistical parameters can have applications in any study involving an alphabet reduction.
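The alphabet-reduction idea can be sketched as follows. The grouping below is a hypothetical illustration (the paper derives optimal groupings and the matching rectangular score matrices); it shows only why a smaller alphabet shrinks the per-neighborhood storage:

```python
import math

# Hypothetical reduced amino-acid alphabet: 20 residues collapsed to 8
# groups by coarse physico-chemical similarity (illustrative only).
REDUCED = {
    "A": "A", "G": "A", "S": "A", "T": "A",   # small
    "I": "I", "L": "I", "M": "I", "V": "I",   # aliphatic
    "F": "F", "W": "F", "Y": "F",             # aromatic
    "D": "D", "E": "D",                       # acidic
    "K": "K", "R": "K", "H": "K",             # basic
    "N": "N", "Q": "N",                       # amide
    "C": "C",                                 # cysteine
    "P": "P",                                 # proline
}

def reduce_seq(seq):
    """Map a protein sequence onto the reduced alphabet."""
    return "".join(REDUCED[c] for c in seq)

# Storing a length-L neighborhood takes ceil(L * log2(|alphabet|)) bits,
# so a reduced alphabet frees space that can pay for a longer neighborhood.
L = 8
bits_full = math.ceil(L * math.log2(20))
bits_reduced = math.ceil(L * math.log2(8))
print(reduce_seq("MKVLWAAL"), bits_full, bits_reduced)
```

Scoring a query letter (20-letter alphabet) against an indexed group (8-letter alphabet) is what makes the substitution matrices rectangular rather than square.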
Non-Hermitian localization in biological networks.
Amir, Ariel; Hatano, Naomichi; Nelson, David R
2016-04-01
We explore the spectra and localization properties of the N-site banded one-dimensional non-Hermitian random matrices that arise naturally in sparse neural networks. Approximately equal numbers of random excitatory and inhibitory connections lead to spatially localized eigenfunctions and an intricate eigenvalue spectrum in the complex plane that controls the spontaneous activity and induced response. A finite fraction of the eigenvalues condense onto the real or imaginary axes. For large N, the spectrum has remarkable symmetries not only with respect to reflections across the real and imaginary axes but also with respect to 90° rotations, with an unusual anisotropic divergence in the localization length near the origin. When chains with periodic boundary conditions become directed, with a systematic directional bias superimposed on the randomness, a hole centered on the origin opens up in the density-of-states in the complex plane. All states are extended on the rim of this hole, while the localized eigenvalues outside the hole are unchanged. The bias-dependent shape of this hole tracks the bias-independent contours of constant localization length. We treat the large-N limit by a combination of direct numerical diagonalization and using transfer matrices, an approach that allows us to exploit an electrostatic analogy connecting the "charges" embodied in the eigenvalue distribution with the contours of constant localization length. We show that similar results are obtained for more realistic neural networks that obey "Dale's law" (each site is purely excitatory or inhibitory) and conclude with perturbation theory results that describe the limit of large directional bias, when all states are extended. Related problems arise in random ecological networks and in chains of artificial cells with randomly coupled gene expression patterns.
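A minimal numerical sketch of how a directional bias pushes eigenvalues off the real axis, using a Hatano-Nelson-style chain as an illustrative stand-in for the paper's model (the bias parameter g and the ±1 site disorder are assumptions of the sketch, not the paper's exact ensemble):

```python
import numpy as np

def biased_random_chain(N, g, rng):
    """Periodic 1D chain with directional bias: forward hops exp(+g),
    backward hops exp(-g), and random +-1 site disorder."""
    H = np.diag(rng.choice([-1.0, 1.0], size=N))
    for i in range(N):
        H[i, (i + 1) % N] = np.exp(g)    # amplified forward hop
        H[(i + 1) % N, i] = np.exp(-g)   # suppressed backward hop
    return H

rng = np.random.default_rng(2)
eigs = np.linalg.eigvals(biased_random_chain(400, g=1.0, rng=rng))
# With a strong bias, a finite fraction of eigenvalues moves off the
# real axis onto a closed curve in the complex plane.
frac_complex = np.mean(np.abs(eigs.imag) > 1e-8)
print(frac_complex)
```

Setting g = 0 recovers a real symmetric matrix with a purely real spectrum, which is the localized, unbiased limit discussed in the abstract.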
Random matrices with external source and the asymptotic behaviour of multiple orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aptekarev, Alexander I; Lysov, Vladimir G; Tulyakov, Dmitrii N
2011-02-28
Ensembles of random Hermitian matrices with a distribution measure defined by an anharmonic potential perturbed by an external source are considered. The limiting characteristics of the eigenvalue distribution of the matrices in these ensembles are related to the asymptotic behaviour of a certain system of multiple orthogonal polynomials. Strong asymptotic formulae are derived for this system. As a consequence, for matrices in this ensemble the limit mean eigenvalue density is found, and a variational principle is proposed to characterize this density. Bibliography: 35 titles.
ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog
NASA Technical Reports Server (NTRS)
Gray, F. P., Jr. (Editor)
1979-01-01
A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
Rational decisions, random matrices and spin glasses
NASA Astrophysics Data System (ADS)
Galluccio, Stefano; Bouchaud, Jean-Philippe; Potters, Marc
We consider the problem of rational decision making in the presence of nonlinear constraints. By using tools borrowed from spin glass and random matrix theory, we focus on the portfolio optimisation problem. We show that the number of optimal solutions is generally exponentially large, and each of them is fragile: rationality is in this case of limited use. In addition, this problem is related to spin glasses with Lévy-like (long-ranged) couplings, for which we show that the ground state is not exponentially degenerate.
Central Limit Theorems for Linear Statistics of Heavy Tailed Random Matrices
NASA Astrophysics Data System (ADS)
Benaych-Georges, Florent; Guionnet, Alice; Male, Camille
2014-07-01
We show central limit theorems (CLT) for the linear statistics of symmetric matrices with independent heavy tailed entries, including entries in the domain of attraction of α-stable laws and entries with moments exploding with the dimension, as in the adjacency matrices of Erdös-Rényi graphs. For the second model, we also prove a central limit theorem of the moments of its empirical eigenvalues distribution. The limit laws are Gaussian, but unlike the case of standard Wigner matrices, the normalization is the one of the classical CLT for independent random variables.
Singular Behavior of the Leading Lyapunov Exponent of a Product of Random 2 × 2 Matrices
NASA Astrophysics Data System (ADS)
Genovese, Giuseppe; Giacomin, Giambattista; Greenblatt, Rafael Leon
2017-05-01
We consider a certain infinite product of random 2 × 2 matrices appearing in the solution of some 1- and (1+1)-dimensional disordered models in statistical mechanics, which depends on a parameter ε > 0 and on a real random variable with distribution μ. For a large class of μ, we prove the prediction by Derrida and Hilhorst (J Phys A 16:2641, 1983) that the Lyapunov exponent behaves like C ε^{2α} in the limit ε ↘ 0, where α ∈ (0,1) and C > 0 are determined by μ. Derrida and Hilhorst performed a two-scale analysis of the integral equation for the invariant distribution of the Markov chain associated to the matrix product and obtained a probability measure that is expected to be close to the invariant one for small ε. We introduce suitable norms and exploit contractivity properties to show that such a probability measure is indeed close to the invariant one in a sense that implies a suitable control of the Lyapunov exponent.
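The Lyapunov exponent of such a product can be estimated numerically by evolving a vector and renormalizing; a sketch, using random Fibonacci matrices as a well-known stand-in (their exponent is Viswanath's constant, about 0.1239, rather than the ε-dependent family studied in the paper):

```python
import numpy as np

def leading_lyapunov(sample_matrix, n_steps, rng):
    """Estimate the leading Lyapunov exponent of a product of random 2x2
    matrices by evolving a vector and renormalizing at every step."""
    v = np.array([1.0, 1.0])
    total = 0.0
    for _ in range(n_steps):
        v = sample_matrix(rng) @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
    return total / n_steps

def random_fibonacci(rng):
    """Transfer matrix of t_n = t_{n-1} +- t_{n-2} with random signs."""
    return np.array([[1.0, rng.choice([-1.0, 1.0])], [1.0, 0.0]])

rng = np.random.default_rng(3)
lam = leading_lyapunov(random_fibonacci, 100_000, rng)
print(lam)  # known value: ln(1.1319882...) ~ 0.1239 (Viswanath)
```

The same renormalization loop, fed with matrices drawn from the ε-dependent ensemble, gives the numerical exponent whose ε ↘ 0 singularity the paper analyzes.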
Medium-induced change of the optical response of metal clusters in rare-gas matrices
NASA Astrophysics Data System (ADS)
Xuan, Fengyuan; Guet, Claude
2017-10-01
Interaction with the surrounding medium modifies the optical response of embedded metal clusters. For clusters from about ten to a few hundreds of silver atoms, embedded in rare-gas matrices, we study the environment effect within the matrix random phase approximation with exact exchange (RPAE) quantum approach, which has proved successful for free silver clusters. The polarizable surrounding medium screens the residual two-body RPAE interaction, adds a polarization term to the one-body potential, and shifts the vacuum energy of the active delocalized valence electrons. Within this model, we calculate the dipole oscillator strength distribution for Ag clusters embedded in helium droplets, neon, argon, krypton, and xenon matrices. The main contribution to the dipole surface plasmon red shift originates from the rare-gas polarization screening of the two-body interaction. The large size limit of the dipole surface plasmon agrees well with the classical prediction.
Realistic Many-Body Quantum Systems vs. Full Random Matrices: Static and Dynamical Properties
NASA Astrophysics Data System (ADS)
Karp, Jonathan; Torres-Herrera, Jonathan; Távora, Marco; Santos, Lea
We study the static and dynamical properties of isolated spin 1/2 systems as prototypes of many-body quantum systems and compare the results to those of full random matrices from a Gaussian orthogonal ensemble. Full random matrices do not represent realistic systems, because they imply that all particles interact at the same time, as opposed to realistic Hamiltonians, which are sparse and have only few-body interactions. Nevertheless, with full random matrices we can derive analytical results that can be used as references and bounds for the corresponding properties of realistic systems. In particular, we show that the results for the Shannon information entropy are very similar to those for the von Neumann entanglement entropy, with the former being computationally less expensive. We also discuss the behavior of the survival probability of the initial state at different time scales and show that it contains more information about the system than the entropies. Support from the NSF Grant No. DMR-1147430.
Disentangling giant component and finite cluster contributions in sparse random matrix spectra.
Kühn, Reimer
2016-04-01
We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.
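Separating giant-component and finite-cluster contributions can be illustrated numerically: build an Erdős-Rényi adjacency matrix, label connected components, and diagonalize the giant component separately (a brute-force sketch, not the analytical method of the paper):

```python
import numpy as np
from collections import deque

def er_adjacency(n, c, rng):
    """Adjacency matrix of an Erdos-Renyi graph with mean degree c."""
    A = np.triu((rng.random((n, n)) < c / n).astype(float), 1)
    return A + A.T

def component_labels(A):
    """Label connected components by breadth-first search."""
    n = A.shape[0]
    label = -np.ones(n, dtype=int)
    cur = 0
    for s in range(n):
        if label[s] >= 0:
            continue
        q = deque([s])
        label[s] = cur
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if label[v] < 0:
                    label[v] = cur
                    q.append(v)
        cur += 1
    return label

rng = np.random.default_rng(4)
A = er_adjacency(600, 3.0, rng)     # mean degree 3: a giant component exists
lab = component_labels(A)
sizes = np.bincount(lab)
giant = np.flatnonzero(lab == sizes.argmax())
eigs_giant = np.linalg.eigvalsh(A[np.ix_(giant, giant)])  # giant part only
eigs_full = np.linalg.eigvalsh(A)                         # full spectrum
print(sizes.max(), len(eigs_full) - len(eigs_giant))
```

Histogramming `eigs_full` against `eigs_giant` exposes the finite-cluster contribution (localized peaks) that the paper disentangles analytically in the thermodynamic limit.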
Vast Portfolio Selection with Gross-exposure Constraints*
Fan, Jianqing; Zhang, Jingjin; Yu, Ke
2012-01-01
We introduce the large portfolio selection using gross-exposure constraints. We show that with gross-exposure constraint the empirically selected optimal portfolios based on estimated covariance matrices have similar performance to the theoretical optimal ones and there is no error accumulation effect from estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvements are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and the 600 stocks randomly selected from Russell 3000. PMID:23293404
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
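The leading-order prediction for the mean of N_{I} can be checked directly by Monte Carlo; a sketch for the β = 1 (GOE) case with I = [-1, 1], where the semicircle law gives the mean fraction in closed form:

```python
import numpy as np

# Monte Carlo check of the mean of N_I: fraction of GOE eigenvalues inside
# I = [-1, 1], for matrices normalized so the spectrum follows the
# semicircle law on [-2, 2]. Integrating the semicircle density over I
# gives a mean fraction of sqrt(3)/(2*pi) + 1/3 (about 0.609).
rng = np.random.default_rng(5)
N, trials = 400, 30
fracs = []
for _ in range(trials):
    M = rng.standard_normal((N, N))
    H = (M + M.T) / np.sqrt(2 * N)     # GOE, semicircle support [-2, 2]
    eigs = np.linalg.eigvalsh(H)
    fracs.append(np.mean((eigs > -1) & (eigs < 1)))
target = np.sqrt(3) / (2 * np.pi) + 1 / 3
print(np.mean(fracs), target)
```

The large-deviation content of the paper concerns the exponentially rare fluctuations of this count, which direct sampling cannot reach; the Coulomb gas rate function ψ^{(V)} covers that regime analytically.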
Universality in chaos: Lyapunov spectrum and random matrix theory.
Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki
2018-02-01
We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
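Finite-time Lyapunov exponents of a product of random matrices can be computed with the standard QR (Benettin-style) re-orthonormalization; a minimal sketch (the matrix size and 1/sqrt(n) normalization are illustrative choices):

```python
import numpy as np

def lyapunov_spectrum(n, t, rng):
    """Finite-time Lyapunov exponents of a product of t random n x n
    Gaussian matrices (entries ~ N(0, 1/n)), via QR re-orthonormalization:
    the accumulated log |R_ii| give the exponents."""
    Q = np.eye(n)
    lam = np.zeros(n)
    for _ in range(t):
        A = rng.standard_normal((n, n)) / np.sqrt(n)
        Q, R = np.linalg.qr(A @ Q)
        lam += np.log(np.abs(np.diag(R)))
    return np.sort(lam / t)[::-1]

rng = np.random.default_rng(6)
spec = lyapunov_spectrum(20, 500, rng)
# The spectrum is nondegenerate, and with this normalization the top
# exponent sits close to zero.
print(spec[0], spec[-1])
```

The universality claim of the paper concerns the statistics of such a spectrum (e.g., its level spacings) matching random matrix theory; the sketch only shows how the spectrum itself is obtained.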
The q-dependent detrended cross-correlation analysis of stock market
NASA Astrophysics Data System (ADS)
Zhao, Longfeng; Li, Wei; Fenu, Andrea; Podobnik, Boris; Wang, Yougui; Stanley, H. Eugene
2018-02-01
Properties of the q-dependent cross-correlation matrices of the stock market have been analyzed by using random matrix theory and complex networks. The correlation structures of the fluctuations at different magnitudes have unique properties. The cross-correlations among small fluctuations are much stronger than those among large fluctuations. The large and small fluctuations are dominated by different groups of stocks. We use complex network representation to study these q-dependent matrices and discover some new identities. By utilizing those q-dependent correlation-based networks, we are able to construct some portfolios of those more independent stocks which consistently perform better. The optimal multifractal order for portfolio optimization is around q = 2 under the mean-variance portfolio framework, and q ∈ [2, 6] under the expected shortfall criterion. These results have deepened our understanding regarding the collective behavior of the complex financial system.
Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois -Henry; ...
2016-10-27
Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK (STRUctured Matrices PACKage), which also has a distributed memory component for dense rank-structured matrices.
Random density matrices versus random evolution of open system
NASA Astrophysics Data System (ADS)
Pineda, Carlos; Seligman, Thomas H.
2015-10-01
We present and compare two families of ensembles of random density matrices. The first, static ensemble is obtained by foliating an unbiased ensemble of density matrices; as the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion is started by recalling similar considerations for scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
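For context, the unbiased ensemble that the static construction foliates by purity can be sampled in a few lines (a sketch of the Hilbert-Schmidt-induced measure, not the fixed-purity foliation itself):

```python
import numpy as np

def random_density_matrix(d, rng):
    """Sample from the (unbiased) Hilbert-Schmidt-induced measure:
    rho = G G^dagger / Tr(G G^dagger), with G a complex Ginibre matrix.
    The result is Hermitian, positive semidefinite, and has unit trace."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    M = G @ G.conj().T
    return M / np.trace(M).real

rng = np.random.default_rng(7)
rho = random_density_matrix(4, rng)    # two-qubit central system, d = 4
purity = np.trace(rho @ rho).real      # the convex function held fixed
eigs = np.linalg.eigvalsh(rho)
print(purity, eigs)
```

Conditioning such samples on a fixed value of `purity` (between 1/d for the maximally mixed state and 1 for pure states) gives a crude numerical stand-in for the foliation the paper treats exactly.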
Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas
2008-04-01
A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. From the inversion of the impedance matrix by blocks (in which two impedance matrices of each interface and two coupling matrices are involved), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer, in which the inverses of the impedance matrices of each interface are involved. Our purpose here is to apply this method to an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci. 33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.
Bi-dimensional null model analysis of presence-absence binary matrices.
Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J
2018-01-01
Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially in respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. 
We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
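For comparison, the fully constrained corner of this null space (row and column totals fixed exactly) is reached by the classic checkerboard-swap procedure, which the Tuning Peg algorithm relaxes in a controlled way; a minimal sketch of the classic swap:

```python
import numpy as np

def swap_randomize(mat, n_swaps, rng):
    """Classic checkerboard swap: repeatedly pick a random 2x2 submatrix
    and, when it has the form [[1,0],[0,1]] or [[0,1],[1,0]], flip it.
    Every swap preserves all row and column totals exactly."""
    m = mat.copy()
    R, C = m.shape
    done = 0
    while done < n_swaps:
        r = rng.choice(R, 2, replace=False)
        c = rng.choice(C, 2, replace=False)
        sub = m[np.ix_(r, c)]
        if (sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0]
                and sub[0, 0] != sub[0, 1]):
            m[np.ix_(r, c)] = 1 - sub    # flip the checkerboard
            done += 1
    return m

rng = np.random.default_rng(8)
m = (rng.random((12, 15)) < 0.4).astype(int)   # random presence/absence matrix
m2 = swap_randomize(m, 200, rng)
print(np.array_equal(m.sum(0), m2.sum(0)), np.array_equal(m.sum(1), m2.sum(1)))
```

The Tuning Peg algorithm, as described in the abstract, instead allows a tunable discrepancy between the randomized and observed marginals, interpolating between this fully constrained model and fully unconstrained ones.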
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasiviswanathan, Shiva; Rudelson, Mark; Smith, Adam
2009-01-01
Contingency tables are the method of choice of government agencies for releasing statistical summaries of categorical data. In this paper, we consider lower bounds on how much distortion (noise) is necessary in these tables to provide privacy guarantees when the data being summarized is sensitive. We extend a line of recent work on lower bounds on noise for private data analysis [10, 13, 14, 15] to a natural and important class of functionalities. Our investigation also leads to new results on the spectra of random matrices with correlated rows. Consider a database D consisting of n rows (one per individual), each row comprising d binary attributes. For any subset T of attributes of size |T| = k, the marginal table for T has 2^k entries; each entry counts how many times in the database a particular setting of these attributes occurs. Imagine an agency that wishes to release all (d choose k) contingency tables for a given database. For constant k, previous work showed that distortion Õ(min{n, (n²d)^{1/3}, √(d^k)}) is sufficient for satisfying differential privacy, a rigorous definition of privacy that has received extensive recent study. Our main contributions are: (1) For ε- and (ε, δ)-differential privacy (with ε constant and δ = 1/poly(n)), we give a lower bound of Ω̃(min{√n, √(d^k)}), which is tight for n = Ω̃(d^k). Moreover, for a natural and popular class of mechanisms based on additive noise, our bound can be strengthened to Ω(√(d^k)), which is tight for all n. Our bounds extend even to non-constant k, losing roughly a factor of √(2^k) compared to the best known upper bounds for large n. (2) We give efficient polynomial time attacks which allow an adversary to reconstruct sensitive information given insufficiently perturbed contingency table releases.
For constant k, we obtain a lower bound of Ω̃(min{√n, √(d^k)}) that applies to a large class of privacy notions, including k-anonymity (along with its variants) and differential privacy. In contrast to our bounds for differential privacy, this bound (a) is shown only for constant k, but (b) is tight for all values of n when k is constant. (3) Our reconstruction-based attacks require a new lower bound on the least singular values of random matrices with correlated rows. For constant k, consider a matrix M with (d choose k) rows which are formed by taking all possible k-way entry-wise products of an underlying set of d random vectors. We show that even for nearly square matrices with d^k / log d columns, the least singular value is Ω(√(d^k)) with high probability; asymptotically, this is the same bound as one gets for a matrix with independent rows. The proof requires several new ideas for analyzing random matrices and could be of independent interest.
Universality for 1d Random Band Matrices: Sigma-Model Approximation
NASA Astrophysics Data System (ADS)
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W×W random Gaussian blocks (parametrized by j,k ∈ Λ = [1,n]^d ∩ Z^d) with fixed entry variance J_{jk} = δ_{j,k}W^{-1} + βΔ_{j,k}W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function, similar to Efetov's one. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, as β ≫ n, is determined by the classical Wigner-Dyson statistics.
2013-12-14
population covariance matrix with application to array signal processing; and 5) a sample covariance matrix for which a CLT is studied on linear spectral statistics. Walid Hachem, Malika Kharouf, Jamal Najim, Jack W. Silverstein, "A CLT for Information-Theoretic Statistics ...," ... Applications (01 2012): 1150004; "... for Multi-source Power Estimation" (04 2010).
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
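As a minimal illustration of generating random correlation matrices for Monte Carlo use, the sketch below samples a positive definite correlation matrix by normalizing a random Gram matrix. This is a generic construction, not Waller's algorithm: unlike the method in the article, it controls neither the implied R-squared nor the smallest eigenvalue.

```python
import numpy as np

def random_pd_correlation(p, rng):
    """Sample a random positive definite p x p correlation matrix.

    A Gram matrix of p independent Gaussian vectors is almost surely
    full rank; rescaling rows and columns gives a unit diagonal.
    (A minimal sketch for Monte Carlo use; it does not fix the
    R-squared or the smallest eigenvalue as Waller's algorithm does.)
    """
    G = rng.standard_normal((p, 2 * p))   # 2p observations per variable
    S = G @ G.T                           # random full-rank Gram matrix
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)             # rescale to unit diagonal
```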
Localization in covariance matrices of coupled heterogeneous Ornstein-Uhlenbeck processes
NASA Astrophysics Data System (ADS)
Barucca, Paolo
2014-12-01
We define a random-matrix ensemble given by the infinite-time covariance matrices of Ornstein-Uhlenbeck processes at different temperatures, coupled by a Gaussian symmetric matrix. The spectral properties of this ensemble are shown to be in qualitative agreement with some stylized facts of financial markets. Through the presented model, formulas are given for the analysis of heterogeneous time series. Furthermore, evidence for a localization transition in the eigenvectors related to the small and large eigenvalues in the cross-correlation analysis of this model is found, and a simple explanation of localization phenomena in financial time series is provided. Finally, we identify, both in our model and in real financial data, an inverted-bell effect in the correlation between localized components and their local temperature: the high- and low-temperature components are the most localized ones.
Bazzani, Armando; Castellani, Gastone C; Cooper, Leon N
2010-05-01
We analyze the effects of noise correlations in the input to, or among, Bienenstock-Cooper-Munro neurons using the Wigner semicircular law to construct random, positive-definite symmetric correlation matrices and compute their eigenvalue distributions. In the finite dimensional case, we compare our analytic results with numerical simulations and show the effects of correlations on the lifetimes of synaptic strengths in various visual environments. These correlations can be due either to correlations in the noise from the input lateral geniculate nucleus neurons, or correlations in the variability of lateral connections in a network of neurons. In particular, we find that for fixed dimensionality, a large noise variance can give rise to long lifetimes of synaptic strengths. This may be of physiological significance.
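A hedged sketch of the kind of construction described: shift a Wigner matrix, whose spectrum follows the semicircular law on [-2, 2], so that the result is positive definite with unit diagonal on average. The scale factor c and the helper name are our choices, not taken from the paper.

```python
import numpy as np

def shifted_wigner_correlation(n, c, rng):
    """Return I + c*W with W an n x n Wigner matrix.

    W has semicircular eigenvalue density on [-2, 2] for large n, so for
    c < 1/2 the shifted matrix is positive definite with high probability.
    Its diagonal equals 1 only on average, so it is 'correlation-like'
    rather than an exact correlation matrix (a sketch of the construction,
    not the paper's exact recipe).
    """
    g = rng.standard_normal((n, n))
    W = (g + g.T) / np.sqrt(2.0 * n)   # symmetric, off-diagonal variance 1/n
    return np.eye(n) + c * W
```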
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star} \in \mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star} + \mathbf{z}^{\star} + \mathbf{w}$, where $\mathbf{z}^{\star} \in \mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w} \in \mathbb{C}^m$ is dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix, and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e., $\mu(\mathbf{U}) \sim 1/\sqrt{m}$), we prove that, with high probability, one can recover an $s$-sparse signal exactly and stably by $\ell_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse in a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
Efficient, massively parallel eigenvalue computation
NASA Technical Reports Server (NTRS)
Huo, Yan; Schreiber, Robert
1993-01-01
In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices
NASA Astrophysics Data System (ADS)
Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.
2017-08-01
We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.
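The classical face of this connection can be sketched directly: for Hermitian positive semidefinite A, Wick's theorem gives perm(A) = E[∏ᵢ |gᵢ|²] for a complex Gaussian vector g with covariance A, which is the coherent-state photon-counting picture in distributional form. The Monte Carlo estimator below is a simplified sketch of this idea only; the paper's algorithm and its runtime/precision analysis are more refined.

```python
import numpy as np

def permanent_mc(A, n_samples, rng):
    """Monte Carlo estimate of perm(A) for Hermitian PSD A.

    Uses the Wick-theorem identity perm(A) = E[prod_i |g_i|^2], where g
    is a complex Gaussian vector with covariance A.  (A sketch; not the
    paper's estimator or variance analysis.)
    """
    n = A.shape[0]
    # factor A = L L^dagger via eigendecomposition (allows PSD, not just PD)
    w, V = np.linalg.eigh(A)
    L = V * np.sqrt(np.clip(w, 0.0, None))
    z = (rng.standard_normal((n_samples, n))
         + 1j * rng.standard_normal((n_samples, n))) / np.sqrt(2.0)
    g = z @ L.T   # rows are complex Gaussian with covariance A
    return float(np.mean(np.prod(np.abs(g) ** 2, axis=1)))
```

For example, perm(I₃) = 1 and perm([[1, 0.5], [0.5, 1]]) = 1·1 + 0.5·0.5 = 1.25, which the sample mean approaches as the number of samples grows.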
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm ɛ.
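The process can be simulated directly, and the positivity of the Lyapunov exponent checked numerically via the standard QR-renormalization estimator. A sketch under our own assumptions (2x2 iid standard Gaussian matrices, a reset scale of 1e-3); the paper's analytical treatment of the norm density is not reproduced.

```python
import numpy as np

def reset_process(n_steps, rng, reset_scale=1e-3):
    """Simulate the matrix-product process with resets (a sketch).

    Multiply iid 2x2 Gaussian matrices; whenever the running product's
    norm reaches unity, reset it to reset_scale * identity.  Returns the
    norms observed at each step, whose histogram approximates the
    stationary density of the norm studied above.
    """
    P = reset_scale * np.eye(2)
    norms = []
    for _ in range(n_steps):
        norms.append(np.linalg.norm(P, 2))
        if norms[-1] >= 1.0:
            P = reset_scale * np.eye(2)     # reset to a multiple of the identity
        P = rng.standard_normal((2, 2)) @ P
    return np.array(norms)

def lyapunov_estimate(n_steps, rng):
    """Top Lyapunov exponent via QR renormalization of the product."""
    q = np.eye(2)
    total = 0.0
    for _ in range(n_steps):
        q, r = np.linalg.qr(rng.standard_normal((2, 2)) @ q)
        total += np.log(abs(r[0, 0]))       # growth rate of the leading direction
    return total / n_steps
```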
Derivatives of random matrix characteristic polynomials with applications to elliptic curves
NASA Astrophysics Data System (ADS)
Snaith, N. C.
2005-12-01
The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.
Computation of transform domain covariance matrices
NASA Technical Reports Server (NTRS)
Fino, B. J.; Algazi, V. R.
1975-01-01
It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
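The underlying identity is simple: if Y = T X with T unitary, then Cov(Y) = T Cov(X) T^H. The direct O(n³) evaluation below (not the paper's fast recursive scheme) also illustrates a classical special case: the DFT diagonalizes the covariance of a wide-sense-stationary process, whose covariance matrix is circulant. The example covariance sequence is our own.

```python
import numpy as np

def transform_covariance(C, T):
    """Covariance of Y = T X given Cov(X) = C: Cov(Y) = T C T^H.

    Direct evaluation; the fast methods in the paper exploit the
    recursive structure of the transform instead.
    """
    return T @ C @ T.conj().T

n = 8
# unitary DFT matrix
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

# circulant covariance of a wide-sense-stationary process
# (symmetric lag sequence, so C is symmetric)
c = np.array([4.0, 2.0, 1.0, 0.5, 0.25, 0.5, 1.0, 2.0])
C = np.array([[c[(i - m) % n] for m in range(n)] for i in range(n)])

CY = transform_covariance(C, F)   # diagonal: DFT decorrelates a stationary process
```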
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois -Henry
Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK (STRUctured Matrices PACKage), which also has a distributed memory component for dense rank-structured matrices.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1982-01-01
Numerical algorithms for large space structures were investigated with particular emphasis on decoupling method for analysis and design. Numerous aspects of the analysis of large systems ranging from the algebraic theory to lambda matrices to identification algorithms were considered. A general treatment of the algebraic theory of lambda matrices is presented and the theory is applied to second order lambda matrices.
Intermittency and random matrices
NASA Astrophysics Data System (ADS)
Sokoloff, Dmitry; Illarionov, E. A.
2015-08-01
A spectacular phenomenon of intermittency, i.e. a progressive growth of higher statistical moments of a physical field excited by an instability in a random medium, attracted the attention of Zeldovich in the last years of his life. At that time, the mathematical aspects underlying the physical description of this phenomenon were still under development and relations between various findings in the field remained obscure. Contemporary results from the theory of the product of independent random matrices (the Furstenberg theory) allowed the elaboration of the phenomenon of intermittency in a systematic way. We consider applications of the Furstenberg theory to some problems in cosmology and dynamo theory.
Crossover ensembles of random matrices and skew-orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Santosh, E-mail: skumar.physics@gmail.com; Pandey, Akhilesh, E-mail: ap0700@mail.jnu.ac.in
2011-08-15
Highlights: We study crossover ensembles of the Jacobi family of random matrices. We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. We use the method of skew-orthogonal polynomials and quaternion determinants. We prove universality of spectral correlations in crossover ensembles. We discuss applications to quantum conductance and communication theory problems. Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E 79 (2009) 026211) we considered the Jacobi family (including the Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of the work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for the jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves a generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr.
1990-01-01
Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.
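A toy sketch of the challenge/countersign idea for a two-dimensional code matrix. All names, sizes, and code lengths are illustrative; the article's scheme also covers matrices of four or more dimensions and frequent reissue of the code media.

```python
import secrets
import string

def make_code_matrix(rows, cols, code_len=4):
    """Matrix of random alphanumeric codes (illustrative sketch only).

    A fresh matrix would be issued to each authorized user, e.g. on a
    floppy disk or plug-in card, and replaced frequently.
    """
    alphabet = string.ascii_uppercase + string.digits
    return [["".join(secrets.choice(alphabet) for _ in range(code_len))
             for _ in range(cols)] for _ in range(rows)]

def respond(matrix, challenge):
    """Countersign = the code stored at the challenged coordinates."""
    r, c = challenge
    return matrix[r][c]

def verify(matrix, challenge, answer):
    # constant-time comparison to avoid leaking prefix matches
    return secrets.compare_digest(respond(matrix, challenge), answer)
```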
Uncovering the spatial structure of mobility networks
NASA Astrophysics Data System (ADS)
Louail, Thomas; Lenormand, Maxime; Picornell, Miguel; García Cantú, Oliva; Herranz, Ricardo; Frias-Martinez, Enrique; Ramasco, José J.; Barthelemy, Marc
2015-01-01
The extraction of a clear and simple footprint of the structure of large, weighted and directed networks is a general problem that has relevance for many applications. An important example is seen in origin-destination matrices, which contain the complete information on commuting flows but are difficult to analyze and compare. We propose here a versatile method, which extracts a coarse-grained signature of mobility networks in the form of a 2 × 2 matrix that separates the flows into four categories. We apply this method to origin-destination matrices extracted from mobile phone data recorded in 31 Spanish cities. We show that these cities essentially differ by their proportion of two types of flows: integrated (between residential and employment hotspots) and random flows, whose importance increases with city size. Finally, the method allows the determination of categories of networks and, in the mobility case, the classification of cities according to their commuting structure.
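The coarse-graining step can be sketched as follows: given an origin-destination matrix and precomputed sets of residential and employment hotspots (hotspot detection itself is not reproduced here), sum the flows in each of the four origin/destination categories. The function and argument names are ours, and the category layout follows the paper's integrated/random distinction only loosely.

```python
import numpy as np

def flow_signature(od, res_hotspots, emp_hotspots):
    """Coarse-grain an origin-destination matrix into a 2 x 2 signature.

    Rows/columns of `od` index zones; flows are split by whether the
    origin is a residential hotspot and whether the destination is an
    employment hotspot.  Entry [0, 0] is the 'integrated' fraction
    (hotspot to hotspot); the other entries cover the remaining flows.
    """
    n = od.shape[0]
    r = np.zeros(n, dtype=bool); r[list(res_hotspots)] = True
    e = np.zeros(n, dtype=bool); e[list(emp_hotspots)] = True
    sig = np.array([[od[np.ix_(r, e)].sum(),  od[np.ix_(r, ~e)].sum()],
                    [od[np.ix_(~r, e)].sum(), od[np.ix_(~r, ~e)].sum()]])
    return sig / od.sum()   # fractions of the total commuting flow
```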
Simple techniques for improving deep neural network outcomes on commodity hardware
NASA Astrophysics Data System (ADS)
Colina, Nicholas Christopher A.; Perez, Carlos E.; Paraan, Francis N. C.
2017-08-01
We benchmark improvements in the performance of deep neural networks (DNN) on the MNIST dataset upon implementing two simple modifications to the algorithm that have little computational overhead. First is GPU parallelization on a commodity graphics card, and second is initializing the DNN with random orthogonal weight matrices prior to optimization. Eigenspectra analysis of the weight matrices reveals that the initially orthogonal matrices remain nearly orthogonal after training. The probability distributions from which these orthogonal matrices are drawn are also shown to significantly affect the performance of these deep neural networks.
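Orthogonal initialization of a weight matrix takes a few lines with a QR decomposition. This is the standard construction (the paper additionally compares different distributions from which the pre-QR matrix is drawn, which this sketch does not).

```python
import numpy as np

def orthogonal_init(shape, rng):
    """Random weight matrix with orthonormal columns (or rows).

    QR of a Gaussian matrix; correcting with the signs of R's diagonal
    makes the distribution Haar-uniform over the orthogonal group.
    """
    rows, cols = shape
    g = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(g)
    q = q * np.sign(np.diag(r))   # fix the sign ambiguity -> Haar measure
    return q[:rows, :cols] if rows >= cols else q.T
```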
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2017-07-01
This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters is obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.
SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.
Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen
2012-07-23
We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.
Atomistic modeling of thermomechanical properties of SWNT/Epoxy nanocomposites
NASA Astrophysics Data System (ADS)
Fasanella, Nicholas; Sundararaghavan, Veera
2015-09-01
Molecular dynamics simulations are performed to compute thermomechanical properties of cured epoxy resins reinforced with pristine and covalently functionalized carbon nanotubes. A DGEBA-DDS epoxy network was built using the ‘dendrimer’ growth approach, where 75% of available epoxy sites were cross-linked. The epoxy model is verified through comparisons to experiments, and simulations are performed on a nanotube-reinforced cross-linked epoxy matrix using the CVFF force field in LAMMPS. Full stiffness matrices and linear coefficient of thermal expansion vectors are obtained for the nanocomposite. Large increases in stiffness and large decreases in thermal expansion were seen along the direction of the nanotube for both nanocomposite systems when compared to neat epoxy. The direction transverse to the nanotube saw a 40% increase in stiffness due to covalent functionalization over neat epoxy at 1 K, whereas the pristine nanotube system only saw a 7% increase, due to van der Waals effects. The functionalized SWNT/epoxy nanocomposite showed an additional 42% decrease in thermal expansion along the nanotube direction when compared to the pristine SWNT/epoxy nanocomposite. The stiffness matrices are rotated over every possible orientation to simulate the effects of an isotropic system of randomly oriented nanotubes in the epoxy. The randomly oriented, covalently functionalized SWNT/epoxy nanocomposites showed substantial improvements over the plain epoxy in terms of higher stiffness (200% increase) and lower thermal expansion (32% reduction). Through MD simulations, we develop means to build simulation cells, perform annealing to reach correct densities, compute thermomechanical properties and compare with experiments.
ERIC Educational Resources Information Center
Prevost, A. Toby; Mason, Dan; Griffin, Simon; Kinmonth, Ann-Louise; Sutton, Stephen; Spiegelhalter, David
2007-01-01
Practical meta-analysis of correlation matrices generally ignores covariances (and hence correlations) between correlation estimates. The authors consider various methods for allowing for covariances, including generalized least squares, maximum marginal likelihood, and Bayesian approaches, illustrated using a 6-dimensional response in a series of…
Structure-Function Network Mapping and Its Assessment via Persistent Homology
2017-01-01
Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514x2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
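The comparison at the heart of the method, minus the statistical test against the Gaussian ensemble (which we do not reproduce), can be sketched with a delay embedding: a noiseless sine wave lies exactly in a two-dimensional subspace of the embedding space, so only two covariance eigenvalues are significant. Helper names and parameter values are ours.

```python
import numpy as np

def embed(x, dim, delay=1):
    """Delay-embed a scalar time series into R^dim."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

def covariance_eigenvalues(x, dim, delay=1):
    """Eigenvalues (descending) of the covariance of the embedded signal."""
    E = embed(x, dim, delay)
    E = E - E.mean(axis=0)
    C = E.T @ E / len(E)
    return np.sort(np.linalg.eigvalsh(C))[::-1]

# A sine wave spans only {sin t, cos t} in embedding space, so the
# covariance has exactly two significant eigenvalues regardless of dim.
t = np.linspace(0, 40 * np.pi, 4000)
lam = covariance_eigenvalues(np.sin(t), dim=6, delay=25)
```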
Emergence of a spectral gap in a class of random matrices associated with split graphs
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Zia, R. K. P.
2018-01-01
Motivated by the intriguing behavior displayed in a dynamic network that models a population of extreme introverts and extroverts (XIE), we consider the spectral properties of ensembles of random split graph adjacency matrices. We discover that, in general, a gap emerges in the bulk spectrum between -1 and 0 that contains a single eigenvalue. An analytic expression for the bulk distribution is derived and verified with numerical analysis. We also examine their relation to chiral ensembles, which are associated with bipartite graphs.
Properties of networks with partially structured and partially random connectivity
NASA Astrophysics Data System (ADS)
Ahmadian, Yashar; Fumarola, Francesco; Miller, Kenneth D.
2015-01-01
Networks studied in many disciplines, including neuroscience and mathematical biology, have connectivity that may be stochastic about some underlying mean connectivity represented by a non-normal matrix. Furthermore, the stochasticity may not be independent and identically distributed (iid) across elements of the connectivity matrix. More generally, the problem of understanding the behavior of stochastic matrices with nontrivial mean structure and correlations arises in many settings. We address this by characterizing large random N × N matrices of the form A = M + LJR, where M, L, and R are arbitrary deterministic matrices and J is a random matrix of zero-mean iid elements. M can be non-normal, and L and R allow correlations that have separable dependence on row and column indices. We first provide a general formula for the eigenvalue density of A. For A non-normal, the eigenvalues do not suffice to specify the dynamics induced by A, so we also provide general formulas for the transient evolution of the magnitude of activity and the frequency power spectrum in an N-dimensional linear dynamical system with a coupling matrix given by A. These quantities can also be thought of as characterizing the stability and the magnitude of the linear response of a nonlinear network to small perturbations about a fixed point. We derive these formulas and work them out analytically for some examples of M, L, and R motivated by neurobiological models. We also argue that the persistence as N → ∞ of a finite number of randomly distributed outlying eigenvalues outside the support of the eigenvalue density of A, as previously observed, arises in regions of the complex plane Ω where there are nonzero singular values of L^{-1}(z1 - M)R^{-1} (for z ∈ Ω) that vanish as N → ∞.
When such singular values do not exist and L and R are equal to the identity, there is a correspondence in the normalized Frobenius norm (but not in the operator norm) between the support of the spectrum of A for J of norm σ and the σ-pseudospectrum of M.
Communication Optimal Parallel Multiplication of Sparse Random Matrices
2013-02-21
Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix ... where A and B are n × n ER(d) matrices: Definition 2.1: An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n.
RANDOM MATRIX DIAGONALIZATION--A COMPUTER PROGRAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchel, K.; Greibach, R.J.; Porter, C.E.
A computer program is described which generates random matrices, diagonalizes them, and sorts appropriately the resulting eigenvalues and eigenvector components. FAP and FORTRAN listings for the IBM 7090 computer are included. (auth)
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM] are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for QCAC-CS and -SRBM encoders with reduced area and total power consumption.
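A generic d-sparse random binary measurement matrix, in the spirit of the sparse random binary construction above, can be sketched as follows (the QCAC construction is deterministic and structured, and is not reproduced here; all names and sizes are illustrative). The point of the sparsity is the encoder cost: each input sample enters only d accumulations instead of ~m/2 for a dense Bernoulli matrix.

```python
import numpy as np

def sparse_binary_matrix(m, n, d, rng):
    """m x n binary measurement matrix with exactly d ones per column.

    A generic d-sparse random binary construction for CS encoding;
    encoding costs d additions per input sample.
    """
    A = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1
    return A

rng = np.random.default_rng(1)
A = sparse_binary_matrix(64, 256, d=4, rng=rng)     # 4x compression
x = np.zeros(256)
x[[3, 77, 200]] = [1.5, -2.0, 0.7]                  # sparse signal frame
y = A.astype(float) @ x                              # compressed measurements
```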
Spectral analysis of finite-time correlation matrices near equilibrium phase transitions
NASA Astrophysics Data System (ADS)
Vinayak; Prosen, T.; Buča, B.; Seligman, T. H.
2014-10-01
We study spectral densities for systems on lattices which, at a phase transition, display power-law spatial correlations. Constructing the spatial correlation matrix, we prove that its eigenvalue density shows a power law that can be derived from the spatial correlations. In practice, time series are short in the sense that they are either not stationary over long time intervals or not available over long time intervals. Also, we usually do not have time series available for all variables. We perform numerical simulations on a two-dimensional Ising model with the usual Metropolis algorithm as the time evolution. Using all spins on a grid with periodic boundary conditions, we find a power law that is, for large grids, compatible with the analytic result. We still find a power law even if we choose a fairly small subset of grid points at random, although the exponents of the power laws are smaller under such circumstances. For very short time series leading to singular correlation matrices, we use a recently developed technique to lift the degeneracy at zero in the spectrum and find a significant signature of critical behavior even in this case, as compared to high-temperature results, which tend to those of random matrix models.
Optical image encryption using triplet of functions
NASA Astrophysics Data System (ADS)
Yatish; Fatima, Areeba; Nishchal, Naveen Kumar
2018-03-01
We propose an image encryption scheme that brings into play a technique using a triplet of functions to manipulate complex-valued functions. Optical cryptosystems using this method are an easier approach toward the ciphertext generation that avoids the use of holographic setup to record phase. The features of this method were shown in the context of double random phase encoding and phase-truncated Fourier transform-based cryptosystems using gyrator transform. In the first step, the complex function is split into two matrices. These matrices are separated, so they contain the real and imaginary parts. In the next step, these two matrices and a random distribution function are acted upon by one of the functions in the triplet. During decryption, the other two functions in the triplet help us retrieve the complex-valued function. The simulation results demonstrate the effectiveness of the proposed idea. To check the robustness of the proposed scheme, attack analyses were carried out.
Fluctuations of Wigner-type random matrices associated with symmetric spaces of class DIII and CI
NASA Astrophysics Data System (ADS)
Stolz, Michael
2018-02-01
Wigner-type randomizations of the tangent spaces of classical symmetric spaces can be thought of as ordinary Wigner matrices on which additional symmetries have been imposed. In particular, they fall within the scope of a framework, due to Schenker and Schulz-Baldes, for the study of fluctuations of Wigner matrices with additional dependencies among their entries. In this contribution, we complement the results of these authors by explicit calculations of the asymptotic covariances for symmetry classes DIII and CI and thus obtain explicit CLTs for these classes. On the technical level, the present work is an exercise in controlling the cumulative effect of systematically occurring sign factors in an involved sum of products by setting up a suitable combinatorial model for the summands. This aspect may be of independent interest. Research supported by Deutsche Forschungsgemeinschaft (DFG) via SFB 878.
A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.
Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei
2015-11-01
A random submatrix method (RSM) is proposed to calculate the low-rank decomposition U_{m×r}V_{n×r}^T (r < m, n) of a matrix Y ∈ R^{m×n} (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr²ρ^r) or O(n³ρ^{3r}) floating-point operations (flops) are required, comparing favorably with the O(mnr + r²(m+n)) flops required by state-of-the-art algorithms. RSM also has the advantage of a small memory requirement, as only max(n², mr+nr) real values need to be stored. Under the assumption that known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k×nρ^k or mρ^l×l, where k or l is usually taken to be r+1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value turns into the space associated with any of the r largest singular values is smaller. Based on this theorem, the nρ^k − k null vectors or the l − r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. Experimental results on random synthetic matrices with sizes such as 131 072 × 1 024 and on real data sets such as dinosaur indicate that RSM is 4.30 to 197.95 times faster than the state-of-the-art algorithms while achieving precision that matches or approaches the best.
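The key observation behind RSM can be illustrated on a noise-free, fully observed toy matrix (our hypothetical example, not the paper's experiment): for a rank-r matrix Y = UV^T, a submatrix built from k = r+1 rows has right null vectors that are also null vectors of the corresponding columns of V^T, so constraints on V can be collected submatrix by submatrix.

```python
import numpy as np

# Toy demonstration (full data, no noise): null vectors of an
# (r+1) x c submatrix of a rank-r matrix annihilate the matching
# columns of V^T.
rng = np.random.default_rng(1)
m, n, r = 6, 8, 2
U, V = rng.standard_normal((m, r)), rng.standard_normal((n, r))
Y = U @ V.T                                       # rank-r ground truth

rows = rng.choice(m, size=r + 1, replace=False)   # k = r+1 rows
cols = rng.choice(n, size=5, replace=False)       # a column block
S = Y[np.ix_(rows, cols)]                         # (r+1) x 5, rank <= r

# Right null space of S via SVD: the trailing right singular vectors.
_, sv, Vt = np.linalg.svd(S)
null_vecs = Vt[r:]                                # 5 - r = 3 null vectors

# Each null vector is also a null vector of V^T restricted to cols.
residual = np.linalg.norm(V.T[:, cols] @ null_vecs.T)
```

Since S v = U_rows (V_cols^T v), and U_rows generically has full column rank, S v = 0 forces V_cols^T v = 0, which is what the residual check confirms.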
NASA Astrophysics Data System (ADS)
Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe
2017-06-01
Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues {λ_1, …, λ_N}. We study the distribution of truncated linear statistics of the form L̃ = ∑_{i=1}^p f(λ_i) with p < N.
The asymptotic spectra of banded Toeplitz and quasi-Toeplitz matrices
NASA Technical Reports Server (NTRS)
Beam, Richard M.; Warming, Robert F.
1991-01-01
Toeplitz matrices occur in many mathematical as well as scientific and engineering investigations. This paper considers the spectra of banded Toeplitz and quasi-Toeplitz matrices, with emphasis on non-normal matrices of arbitrarily large order and relatively small bandwidth. These are the type of matrices that appear in the investigation of stability and convergence of difference approximations to partial differential equations. Quasi-Toeplitz matrices result from non-Dirichlet boundary conditions in the difference approximations. The eigenvalue problem for a banded Toeplitz or quasi-Toeplitz matrix of large order is, in general, analytically intractable and (for non-normal matrices) numerically unreliable. An asymptotic (matrix order approaches infinity) approach partitions the eigenvalue analysis of a quasi-Toeplitz matrix into two parts, namely the analysis for the boundary-condition-independent spectrum and the analysis for the boundary-condition-dependent spectrum. The boundary-condition-independent spectrum is the same as the pure Toeplitz matrix spectrum. Algorithms for computing both parts of the spectrum are presented. Examples are used to demonstrate the utility of the algorithms, to present some interesting spectra, and to point out some of the numerical difficulties encountered when conventional matrix eigenvalue routines are employed for non-normal matrices of large order. The analysis for the Toeplitz spectrum also leads to a diagonal similarity transformation that improves conventional numerical eigenvalue computations. Finally, the algorithm for the asymptotic spectrum is extended to the Toeplitz generalized eigenvalue problem, which occurs, for example, in the stability of Padé-type difference approximations to differential equations.
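For the simplest banded case, the finite spectrum is available in closed form and its approach to the asymptotic limit is easy to check numerically. This small sketch (not from the paper) uses the classical formula for a tridiagonal Toeplitz matrix with diagonals (c, a, b): λ_k = a + 2√(bc)·cos(kπ/(N+1)).

```python
import numpy as np

# Illustrative check: eigenvalues of a symmetric tridiagonal Toeplitz
# matrix agree with the closed-form expression
#   lambda_k = a + 2*sqrt(b*c)*cos(k*pi/(N+1)),  k = 1..N.
N, a, b, c = 50, 2.0, 1.0, 1.0        # symmetric case keeps eigenvalues real
T = (np.diag(np.full(N, a))
     + np.diag(np.full(N - 1, b), 1)
     + np.diag(np.full(N - 1, c), -1))

numeric = np.sort(np.linalg.eigvalsh(T))
k = np.arange(1, N + 1)
analytic = np.sort(a + 2.0 * np.sqrt(b * c) * np.cos(k * np.pi / (N + 1)))
err = np.max(np.abs(numeric - analytic))
```

As N grows, these eigenvalues fill out the interval [a − 2√(bc), a + 2√(bc)], the image of the symbol on the unit circle, which is the asymptotic picture the paper generalizes to wider bands and quasi-Toeplitz boundary conditions.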
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion based on a limited number of pairs of random synthetic models and their inverse solutions. The entire procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion technique used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
Universal shocks in the Wishart random-matrix ensemble.
Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr
2013-05-01
We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.
Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.
2014-01-01
The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
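The Maxwell-Garnett mixing rule mentioned at the end of the abstract has a standard closed form for spherical inclusions, which is easy to evaluate. The following is a sketch with hypothetical permittivity values, not the paper's superposition T-matrix computation.

```python
import numpy as np

# Maxwell-Garnett effective permittivity for spherical inclusions
# (permittivity eps_i, volume fraction f) in a host medium (eps_m).
def maxwell_garnett(eps_m, eps_i, f):
    """Standard Maxwell-Garnett mixing rule."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

eps_host, eps_incl = 1.77, 2.53       # hypothetical host and inclusion values
eps_eff = maxwell_garnett(eps_host, eps_incl, 0.1)
m_eff = np.sqrt(eps_eff)              # corresponding effective refractive index
```

The rule interpolates between the host (f = 0) and inclusion (f = 1) permittivities; the abstract's point is that for small inclusions at large number density, this phenomenological value closely matches the rigorously computed effective refractive index.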
Chang, Jinyuan; Zhou, Wen; Zhou, Wen-Xin; Wang, Lan
2017-03-01
Comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. We propose a computationally fast procedure for testing the equality of two large covariance matrices when the dimensions of the covariance matrices are much larger than the sample sizes. A distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices. Hence, the test is robust with respect to various complex dependence structures that frequently arise in genomics. We prove that the proposed procedure is asymptotically valid under weak moment conditions. As an interesting application, we derive a new gene clustering algorithm which shares the same nice property of avoiding restrictive structural assumptions for high-dimensional genomics data. Using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights on the way gene clustering patterns differ between the two groups. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2016, The International Biometric Society.
Hurwitz numbers and products of random matrices
NASA Astrophysics Data System (ADS)
Orlov, A. Yu.
2017-09-01
We study multimatrix models, which may be viewed as integrals of products of tau functions depending on the eigenvalues of products of random matrices. We consider tau functions of the two-component Kadomtsev-Petviashvili (KP) hierarchy (semi-infinite relativistic Toda lattice) and of the B-type KP (BKP) hierarchy introduced by Kac and van de Leur. Such integrals are sometimes tau functions themselves. We consider models that generate Hurwitz numbers H_{E,F}, where E is the Euler characteristic of the base surface and F is the number of branch points. We show that in the case where the integrands contain the product of n > 2 matrices, the integral generates Hurwitz numbers with E ≤ 2 and F ≤ n+2. The numbers E and F both depend on n and on the order of the factors in the matrix product. The Euler characteristic E can be either an even or an odd number, i.e., it can match both orientable and nonorientable (Klein) base surfaces, depending on the presence of the tau function of the BKP hierarchy in the integrand. We study two cases: products of complex matrices and products of unitary matrices.
On the analogues of Szegő's theorem for ergodic operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirsch, W; Pastur, L A
2015-01-31
Szegő's theorem on the asymptotic behaviour of the determinants of large Toeplitz matrices is generalized to the class of ergodic operators. The generalization is formulated in terms of a triple consisting of an ergodic operator and two functions, the symbol and the test function. It is shown that in the case of the one-dimensional discrete Schrödinger operator with random ergodic or quasiperiodic potential and various choices of the symbol and the test function this generalization leads to asymptotic formulae which have no analogues in the situation of Toeplitz operators. Bibliography: 22 titles.
A new phase of disordered phonons modelled by random matrices
NASA Astrophysics Data System (ADS)
Schmittner, Sebastian; Zirnbauer, Martin
2015-03-01
Starting from the clean harmonic crystal and not invoking two-level systems, we propose a model for phonons in a disordered solid. In this model the strength of mass and spring constant disorder can be increased separately. Both types of disorder are modelled by random matrices that couple the degrees of freedom locally. Treated in coherent potential approximation (CPA), the speed of sound decreases with increasing disorder until it reaches zero at finite disorder strength. There, a critical transition to a strong disorder phase occurs. In this novel phase, we find the density of states at zero energy in three dimensions to be finite, leading to a linear temperature dependence of the heat capacity, as observed experimentally for vitreous systems. For any disorder strength, our model is stable, i.e. masses and spring constants are positive, and there are no runaway dynamics. This is ensured by using appropriate probability distributions, inspired by Wishart ensembles, for the random matrices. The CPA self-consistency equations are derived in a very accessible way using planar diagrams. The talk focuses on the model and the results. The first author acknowledges financial support by the Deutsche Telekom Stiftung.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adachi, Satoshi; Toda, Mikito; Kubotani, Hiroto
The fixed-trace ensemble of random complex matrices is the fundamental model that excellently describes the entanglement in the quantum states realized in a coupled system by its strongly chaotic dynamical evolution [see H. Kubotani, S. Adachi, M. Toda, Phys. Rev. Lett. 100 (2008) 240501]. The fixed-trace ensemble fully takes into account the conservation of probability for quantum states. The present paper derives for the first time the exact analytical formula for the one-body distribution function of singular values of random complex matrices in the fixed-trace ensemble. The distribution function of singular values (i.e. Schmidt eigenvalues) of a quantum state is important since it describes characteristics of the entanglement in the state. The derivation of the exact analytical formula utilizes two recent achievements in mathematics, both from the 1990s. The first is the Kaneko theory, which extends the famous Selberg integral by inserting a hypergeometric-type weight factor into the integrand to obtain an analytical formula for the extended integral. The second is the Petkovsek-Wilf-Zeilberger theory, which calculates definite hypergeometric sums in closed form.
Noisy covariance matrices and portfolio optimization II
NASA Astrophysics Data System (ADS)
Pafka, Szilárd; Kondor, Imre
2003-03-01
Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r = n/T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below) the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio.
In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the effect of noise on the measured risk may become very small.
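The role of r = n/T can be seen in a minimal simulation (our toy, not the paper's portfolio models): for uncorrelated unit-variance assets the true covariance is the identity, yet the sample covariance eigenvalues smear over the Marchenko-Pastur interval [(1−√r)², (1+√r)²], so the spectral spread grows with r.

```python
import numpy as np

# Toy version of the noise effect: sample covariance of T observations
# of n independent unit-variance assets. The true covariance is I, so
# any eigenvalue spread is pure estimation noise.
rng = np.random.default_rng(42)

def spectral_spread(n, T):
    X = rng.standard_normal((T, n))
    eigs = np.linalg.eigvalsh(X.T @ X / T)
    return eigs.max() - eigs.min()    # would be 0 for the true covariance

spread_small_r = spectral_spread(50, 1000)   # r = 0.05
spread_large_r = spectral_spread(50, 80)     # r = 0.625
```

With r = 0.05 the spread is modest, while with r ≈ 0.6 it is several times larger, mirroring the paper's finding that noise becomes pronounced for large r and acceptable for small r.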
Taylor, Sandra L; Ruhaak, L Renee; Kelly, Karen; Weiss, Robert H; Kim, Kyoungmi
2017-03-01
With expanded access to, and decreased costs of, mass spectrometry, investigators are collecting and analyzing multiple biological matrices from the same subject such as serum, plasma, tissue and urine to enhance biomarker discoveries, understanding of disease processes and identification of therapeutic targets. Commonly, each biological matrix is analyzed separately, but multivariate methods such as MANOVAs that combine information from multiple biological matrices are potentially more powerful. However, mass spectrometric data typically contain large amounts of missing values, and imputation is often used to create complete data sets for analysis. The effects of imputation on multiple biological matrix analyses have not been studied. We investigated the effects of seven imputation methods (half minimum substitution, mean substitution, k-nearest neighbors, local least squares regression, Bayesian principal components analysis, singular value decomposition and random forest), on the within-subject correlation of compounds between biological matrices and its consequences on MANOVA results. Through analysis of three real omics data sets and simulation studies, we found the amount of missing data and imputation method to substantially change the between-matrix correlation structure. The magnitude of the correlations was generally reduced in imputed data sets, and this effect increased with the amount of missing data. Significant results from MANOVA testing also were substantially affected. In particular, the number of false positives increased with the level of missing data for all imputation methods. No one imputation method was universally the best, but the simple substitution methods (Half Minimum and Mean) consistently performed poorly. © The Author 2016. Published by Oxford University Press.
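The half-minimum substitution that the study finds to perform poorly is simple to state: each missing value in a column (typically a low-abundance, left-censored measurement) is replaced by half of that column's observed minimum. A minimal sketch on hypothetical data, not one of the paper's omics sets:

```python
import numpy as np

# Half-minimum imputation sketch: censor the lowest 20% of each column
# (mimicking values below the detection limit), then fill each missing
# entry with half the column's observed minimum.
rng = np.random.default_rng(7)
X = np.exp(rng.standard_normal((100, 5)))        # positive "abundances"
X[X < np.quantile(X, 0.2, axis=0)] = np.nan      # left-censor low values

X_imp = X.copy()
for j in range(X.shape[1]):
    col = X_imp[:, j]                            # view into X_imp
    col[np.isnan(col)] = 0.5 * np.nanmin(X[:, j])

n_missing = int(np.isnan(X).sum())
```

Because every censored value in a column collapses to a single constant, the within-subject variation those values carried is destroyed, which is one intuition for why this method distorts between-matrix correlations more than model-based imputation.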
The performance of the Congruence Among Distance Matrices (CADM) test in phylogenetic analysis
2011-01-01
Background CADM is a statistical test used to estimate the level of Congruence Among Distance Matrices. It has been shown in previous studies to have a correct rate of type I error and good power when applied to dissimilarity matrices and to ultrametric distance matrices. Contrary to most other tests of incongruence used in phylogenetic analysis, the null hypothesis of the CADM test assumes complete incongruence of the phylogenetic trees instead of congruence. In this study, we performed computer simulations to assess the type I error rate and power of the test. It was applied to additive distance matrices representing phylogenies and to genetic distance matrices obtained from nucleotide sequences of different lengths that were simulated on randomly generated trees of varying sizes, and under different evolutionary conditions. Results Our results showed that the test has an accurate type I error rate and good power. As expected, power increased with the number of objects (i.e., taxa), the number of partially or completely congruent matrices and the level of congruence among distance matrices. Conclusions Based on our results, we suggest that CADM is an excellent candidate to test for congruence and, when present, to estimate its level in phylogenomic studies where numerous genes are analysed simultaneously. PMID:21388552
Products of random matrices from fixed trace and induced Ginibre ensembles
NASA Astrophysics Data System (ADS)
Akemann, Gernot; Cikovic, Milan
2018-05-01
We investigate the microcanonical version of the complex induced Ginibre ensemble, by introducing a fixed trace constraint for its second moment. Like for the canonical Ginibre ensemble, its complex eigenvalues can be interpreted as a two-dimensional Coulomb gas, which are now subject to a constraint and a modified, collective confining potential. Despite the lack of determinantal structure in this fixed trace ensemble, we compute all its density correlation functions at finite matrix size and compare to a fixed trace ensemble of normal matrices, representing a different Coulomb gas. Our main tool of investigation is the Laplace transform, that maps back the fixed trace to the induced Ginibre ensemble. Products of random matrices have been used to study the Lyapunov and stability exponents for chaotic dynamical systems, where the latter are based on the complex eigenvalues of the product matrix. Because little is known about the universality of the eigenvalue distribution of such product matrices, we then study the product of m induced Ginibre matrices with a fixed trace constraint—which are clearly non-Gaussian—and M ‑ m such Ginibre matrices without constraint. Using an m-fold inverse Laplace transform, we obtain a concise result for the spectral density of such a mixed product matrix at finite matrix size, for arbitrary fixed m and M. Very recently local and global universality was proven by the authors and their coworker for a more general, single elliptic fixed trace ensemble in the bulk of the spectrum. Here, we argue that the spectral density of mixed products is in the same universality class as the product of M independent induced Ginibre ensembles.
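Sampling eigenvalues of a product of independent induced Ginibre matrices (here without the fixed-trace constraint, and with zero induced charge, so plain Ginibre) is a few lines of numpy; this is an illustrative sketch, not the authors' computation. For a single Ginibre matrix normalized by 1/√N, the circular law confines the spectrum to roughly the unit disk.

```python
import numpy as np

# Eigenvalues of a product of m independent complex Ginibre matrices,
# each normalized so its own spectrum fills (roughly) the unit disk.
rng = np.random.default_rng(3)
N, m = 200, 3

def ginibre(N):
    return (rng.standard_normal((N, N))
            + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

P = np.linalg.multi_dot([ginibre(N) for _ in range(m)])
eigs = np.linalg.eigvals(P)
max_mod = np.abs(eigs).max()
```

For products of m such matrices the limiting radial density of |z| changes with m (it concentrates toward the origin), which is the kind of spectral density whose universality class the paper pins down for the mixed fixed-trace/unconstrained product.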
Learning in the Machine: Random Backpropagation and the Deep Learning Channel.
Baldi, Pierre; Sadowski, Peter; Lu, Zhiqin
2018-07-01
Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, where the transpose of the forward matrices is replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we first connect it to the notions of local learning and learning channels. Through this connection, we derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP (ARBP), sparse RBP, and their combinations (e.g. ASRBP) and analyze their computational complexity. We then study their behavior through simulations using the MNIST and CIFAR-10 benchmark datasets. These simulations show that most of these variants work robustly, almost as well as backpropagation, and that multiplication by the derivatives of the activation functions is important. As a follow-up, we also study the low end of the number of bits required to communicate error information over the learning channel. We then provide partial intuitive explanations for some of the remarkable properties of RBP and its variations. Finally, we prove several mathematical results, including the convergence to fixed points of linear chains of arbitrary length, the convergence to fixed points of linear autoencoders with decorrelated data, the long-term existence of solutions for linear systems with a single hidden layer and convergence in special cases, and the convergence to fixed points of non-linear chains, when the derivative of the activation functions is included.
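The core substitution is small: in the hidden-layer update, a fixed random matrix B replaces the transpose of the forward matrix. A minimal sketch on a linear two-layer network with a linear teacher (dimensions, learning rate, and data are our illustrative choices, not the paper's setup):

```python
import numpy as np

# Random backpropagation (RBP) sketch on a linear two-layer network.
# The only change from backprop is the W1 update: a fixed random B
# replaces W2.T when propagating the output error backward.
rng = np.random.default_rng(0)
nx, nh, ny, T, lr = 4, 5, 3, 64, 0.01

X = rng.standard_normal((nx, T))
A = rng.standard_normal((ny, nx))
Y = A @ X                                 # linear teacher targets

W1 = 0.1 * rng.standard_normal((nh, nx))
W2 = 0.1 * rng.standard_normal((ny, nh))
B = rng.standard_normal((nh, ny))         # fixed random feedback matrix

losses = []
for _ in range(500):
    H = W1 @ X
    E = W2 @ H - Y                        # output error
    losses.append(0.5 * float(np.mean(E ** 2)))
    W2 -= lr * (E @ H.T) / T              # usual delta rule at the top
    W1 -= lr * (B @ E @ X.T) / T          # RBP: B in place of W2.T
```

Despite the feedback weights carrying no information about W2, the loss decreases, consistent with the convergence results for linear chains proved in the paper.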
A new measure based on degree distribution that links information theory and network graph analysis
2012-01-01
Background Detailed connection maps of human and nonhuman brains are being generated with new technologies, and graph metrics have been instrumental in understanding the general organizational features of these structures. Neural networks appear to have small world properties: they have clustered regions, while maintaining integrative features such as short average pathlengths. Results We captured the structural characteristics of clustered networks with short average pathlengths through our own variable, System Difference (SD), which is computationally simple and calculable for larger graph systems. SD is a Jaccardian measure generated by averaging all of the differences in the connection patterns between any two nodes of a system. We calculated SD over large random samples of matrices and found that high SD matrices have a low average pathlength and a larger number of clustered structures. SD is a measure of degree distribution with high SD matrices maximizing entropic properties. Phi (Φ), an information theory metric that assesses a system’s capacity to integrate information, correlated well with SD - with SD explaining over 90% of the variance in systems above 11 nodes (tested for 4 to 13 nodes). However, newer versions of Φ do not correlate well with the SD metric. Conclusions The new network measure, SD, provides a link between high entropic structures and degree distributions as related to small world properties. PMID:22726594
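One plausible reading of System Difference, for illustration, is the average over all node pairs of a normalized (Jaccard-style) difference between the two nodes' connection patterns in the adjacency matrix. The function below is our hedged reconstruction of that idea, not the authors' exact definition.

```python
import numpy as np
from itertools import combinations

# SD sketch: for each pair of nodes, compare their rows of the adjacency
# matrix via (symmetric difference) / (union), then average over pairs.
def system_difference(adj):
    n = adj.shape[0]
    diffs = []
    for i, j in combinations(range(n), 2):
        union = np.logical_or(adj[i], adj[j]).sum()
        diff = np.logical_xor(adj[i], adj[j]).sum()
        diffs.append(diff / union if union else 0.0)
    return float(np.mean(diffs))

ring = np.array([[0, 1, 0, 1],            # 4-node ring graph
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
sd_ring = system_difference(ring)
```

Nodes with identical neighborhoods contribute 0, nodes with disjoint neighborhoods contribute 1, so a high average indicates heterogeneous connection patterns, the property the abstract ties to entropy and degree distribution.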
Random matrices and the New York City subway system
NASA Astrophysics Data System (ADS)
Jagannath, Aukosh; Trogdon, Thomas
2017-09-01
We analyze subway arrival times in the New York City subway system. We find regimes where the gaps between trains are well modeled by (unitarily invariant) random matrix statistics and Poisson statistics. The departure from random matrix statistics is captured by the value of the Coulomb potential along the subway route. This departure becomes more pronounced as trains make more stops.
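The two reference laws the comparison rests on, for gaps normalized to unit mean, are the GUE Wigner surmise p(s) = (32/π²)s²·exp(−4s²/π) for unitarily invariant random matrix statistics and the exponential law e^{−s} for Poisson statistics. A quick numerical sanity check (ours, not the paper's data analysis):

```python
import numpy as np

# Both reference gap distributions should integrate to 1 and have mean 1.
s = np.linspace(0.0, 10.0, 200001)
ds = s[1] - s[0]

wigner_gue = (32 / np.pi**2) * s**2 * np.exp(-4 * s**2 / np.pi)
poisson = np.exp(-s)

norm_w = wigner_gue.sum() * ds
mean_w = (s * wigner_gue).sum() * ds
norm_p = poisson.sum() * ds
```

The qualitative difference is level repulsion: the Wigner surmise vanishes quadratically at s = 0 (trains avoid bunching on well-run stretches), while the Poisson law peaks there (independent, uncoordinated arrivals).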
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrester, Peter J., E-mail: p.forrester@ms.unimelb.edu.au; Thompson, Colin J.
The Golden-Thompson inequality, Tr(e^{A+B}) ≤ Tr(e^A e^B) for A, B Hermitian matrices, appeared in independent works by Golden and Thompson published in 1965. Both of these were motivated by considerations in statistical mechanics. In recent years the Golden-Thompson inequality has found applications to random matrix theory. In this article, we detail some historical aspects relating to Thompson's work, giving in particular a hitherto unpublished proof due to Dyson, and correspondence with Pólya. We show too how the 2 × 2 case relates to hyperbolic geometry, and how the original inequality holds true with the trace operation replaced by any unitarily invariant norm. In relation to the random matrix applications, we review its use in the derivation of concentration-type lemmas for sums of random matrices due to Ahlswede-Winter, and Oliveira, generalizing various classical results.
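The inequality is easy to spot-check numerically. Since A, B, and A + B are all Hermitian, their matrix exponentials can be computed via eigendecomposition without any external library (an illustrative check, not part of the article):

```python
import numpy as np

# Numerical spot-check of Golden-Thompson: Tr(e^{A+B}) <= Tr(e^A e^B).
rng = np.random.default_rng(5)

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

def expm_h(H):
    """Matrix exponential of a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

A, B = random_hermitian(6), random_hermitian(6)
lhs = np.trace(expm_h(A + B)).real
rhs = (np.trace(expm_h(A) @ expm_h(B))).real
gap = rhs - lhs                     # >= 0 by Golden-Thompson
```

Equality holds exactly when A and B commute; for generic random Hermitian matrices the gap is strictly positive.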
Pitchers, W. R.; Brooks, R.; Jennions, M. D.; Tregenza, T.; Dworkin, I.; Hunt, J.
2013-01-01
Phenotypic integration and plasticity are central to our understanding of how complex phenotypic traits evolve. Evolutionary change in complex quantitative traits can be predicted using the multivariate breeders’ equation, but such predictions are only accurate if the matrices involved are stable over evolutionary time. Recent work, however, suggests that these matrices are temporally plastic, spatially variable and themselves evolvable. The data available on phenotypic variance-covariance matrix (P) stability is sparse, and largely focused on morphological traits. Here we compared P for the structure of the complex sexual advertisement call of six divergent allopatric populations of the Australian black field cricket, Teleogryllus commodus. We measured a subset of calls from wild-caught crickets from each of the populations and then a second subset after rearing crickets under common-garden conditions for three generations. In a second experiment, crickets from each population were reared in the laboratory on high- and low-nutrient diets and their calls recorded. In both experiments, we estimated P for call traits and used multiple methods to compare them statistically (Flury hierarchy, geometric subspace comparisons and random skewers). Despite considerable variation in means and variances of individual call traits, the structure of P was largely conserved among populations, across generations and between our rearing diets. Our finding that P remains largely stable, among populations and between environmental conditions, suggests that selection has preserved the structure of call traits in order that they can function as an integrated unit. PMID:23530814
A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables
ERIC Educational Resources Information Center
Vernizzi, Graziano; Nakai, Miki
2015-01-01
It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and statistical significance testing. Threshold bounds for perturbations due to finite precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrate the usefulness of these bounds and compare them with other previously known approaches.
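The thresholding problem can be sketched in a few lines (our toy, not the paper's confidence-region construction): singular values of a noisy low-rank matrix split into a signal group and a noise group, and a bound scaled to the noise level, here σ(√m + √n) with a small safety factor, recovers the effective rank.

```python
import numpy as np

# Effective-rank determination sketch: count singular values above a
# noise-calibrated threshold. sigma*(sqrt(m)+sqrt(n)) is a standard
# random-matrix bound on the largest singular value of an m x n
# i.i.d. noise matrix; the factor 1.2 is an illustrative safety margin.
rng = np.random.default_rng(2)
m, n, r, sigma = 60, 40, 3, 0.01

signal = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
noisy = signal + sigma * rng.standard_normal((m, n))

sv = np.linalg.svd(noisy, compute_uv=False)
threshold = 1.2 * sigma * (np.sqrt(m) + np.sqrt(n))
effective_rank = int((sv > threshold).sum())
```

The interesting regime, which the paper's statistical tests address, is when signal singular values approach the noise floor and a fixed safety factor no longer separates the two groups cleanly.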
Correlations of RMT characteristic polynomials and integrability: Hermitean matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osipov, Vladimir Al., E-mail: Vladimir.Osipov@uni-due.d; Kanzieper, Eugene, E-mail: Eugene.Kanzieper@hit.ac.i; Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100
Integrable theory is formulated for correlation functions of characteristic polynomials associated with invariant non-Gaussian ensembles of Hermitean random matrices. By embedding the correlation functions of interest into a more general theory of τ functions, we (i) identify a zoo of hierarchical relations satisfied by τ functions in an abstract infinite-dimensional space and (ii) present a technology to translate these relations into hierarchically structured nonlinear differential equations describing the correlation functions of characteristic polynomials in the physical, spectral space. Implications of this formalism for fermionic, bosonic, and supersymmetric variations of zero-dimensional replica field theories are discussed at length. A particular emphasis is placed on the phenomenon of fermionic-bosonic factorisation of random-matrix-theory correlation functions.
A Note on Parameters of Random Substitutions by γ-Diagonal Matrices
NASA Astrophysics Data System (ADS)
Kang, Ju-Sung
Random substitution is a very useful and practical method for privacy-preserving schemes. In this paper we obtain the exact relationship between the estimation errors and the three parameters used in random substitutions, namely the privacy assurance metric γ, the total number n of data records, and the size N of the transition matrix. We also present some simulations illustrating the theoretical result.
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Indeed, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE, one of the major variants of multivariate public-key cryptosystems, against the Gröbner basis attack. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in an illustrative manner, and not for practical use in enhancing the security of a given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix, and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even for a simple problem, whereas the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated by the finite element method.
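Hager's method, on which the paper's estimator builds, needs only operator-vector products, so the preconditioned matrix never has to be formed explicitly. A dense-matrix sketch of the idea (the diagonal scaling preconditioner and the test matrix below are illustrative assumptions, not the paper's large-scale setting):

```python
import numpy as np

def hager_norm1(matvec, rmatvec, n, itmax=10):
    # Hager's estimator for the 1-norm of an n x n linear operator,
    # given only products with the operator and its transpose.
    # The returned value is always a lower bound on the true 1-norm.
    x = np.full(n, 1.0 / n)
    est = 0.0
    for _ in range(itmax):
        y = matvec(x)
        est = np.sum(np.abs(y))
        z = rmatvec(np.sign(y))
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:
            break
        x = np.zeros(n)
        x[j] = 1.0
    return est

# Condition number of the diagonally preconditioned matrix M = D^{-1} A,
# estimated without ever forming M: products with M use a matvec with A
# followed by the diagonal solve, products with M^{-1} use solves with A.
rng = np.random.default_rng(3)
n = 30
A = rng.standard_normal((n, n)) + 30.0 * np.eye(n)   # well-conditioned test matrix
d = np.diag(A).copy()
norm_M = hager_norm1(lambda v: (A @ v) / d,
                     lambda v: A.T @ (v / d), n)
norm_Minv = hager_norm1(lambda v: np.linalg.solve(A, d * v),
                        lambda v: d * np.linalg.solve(A.T, v), n)
cond_est = norm_M * norm_Minv
```

Because each factor is a lower bound on the corresponding norm, the product never exceeds the true 1-norm condition number, and in practice it is usually close to it.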
Not all that glitters is RMT in the forecasting of risk of portfolios in the Brazilian stock market
NASA Astrophysics Data System (ADS)
Sandoval, Leonidas; Bortoluzzo, Adriana Bruscato; Venezuela, Maria Kelly
2014-09-01
Using stocks of the Brazilian stock exchange (BM&F-Bovespa), we build portfolios of stocks based on Markowitz's theory and test the predicted and realized risks. This is done using the correlation matrices between stocks, and also using Random Matrix Theory in order to clean such correlation matrices from noise. We also calculate correlation matrices using a regression model in order to remove the effect of common market movements and their cleaned versions using Random Matrix Theory. This is done for years of both low and high volatility of the Brazilian stock market, from 2004 to 2012. The results show that the use of regression to subtract the market effect on returns greatly increases the accuracy of the prediction of risk, and that, although the cleaning of the correlation matrix often leads to portfolios that better predict risks, in periods of high volatility of the market this procedure may fail to do so. The results may be used in the assessment of the true risks when one builds a portfolio of stocks during periods of crisis.
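The RMT "cleaning" step referred to above is often implemented as eigenvalue clipping at the Marchenko-Pastur edge; a minimal numpy sketch (the trace-preserving replacement of noise eigenvalues by their mean is one common convention, assumed here, and not necessarily the paper's exact procedure):

```python
import numpy as np

def clip_correlation(returns):
    # RMT cleaning by eigenvalue clipping: eigenvalues of the sample
    # correlation matrix below the Marchenko-Pastur upper edge are
    # treated as noise and replaced by their mean (trace-preserving).
    T, N = returns.shape                      # T observations, N assets
    C = np.corrcoef(returns, rowvar=False)
    lam_max = (1.0 + np.sqrt(N / T)) ** 2     # MP edge for q = N/T
    w, V = np.linalg.eigh(C)
    noise = w < lam_max
    w_clean = w.copy()
    w_clean[noise] = w[noise].mean()          # keeps tr(C) = N
    C_clean = V @ np.diag(w_clean) @ V.T
    d = np.sqrt(np.diag(C_clean))             # restore unit diagonal
    return C_clean / np.outer(d, d)

rng = np.random.default_rng(1)
cleaned = clip_correlation(rng.standard_normal((500, 50)))
```

The cleaned matrix remains a valid correlation matrix (symmetric, positive semi-definite, unit diagonal) and can be fed directly into a Markowitz optimizer.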
Goonesekere, Nalin Cw
2009-01-01
The large numbers of protein sequences generated by whole genome sequencing projects require rapid and accurate methods of annotation. The detection of homology through computational sequence analysis is a powerful tool in determining the complex evolutionary and functional relationships that exist between proteins. Homology search algorithms employ amino acid substitution matrices to detect similarity between protein sequences. The substitution matrices in common use today are constructed using sequences aligned without reference to protein structure. Here we present amino acid substitution matrices constructed from the alignment of a large number of protein domain structures from the structural classification of proteins (SCOP) database. We show that when incorporated into the homology search algorithms BLAST and PSI-BLAST, the structure-based substitution matrices enhance the efficacy of detecting remote homologs.
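The log-odds construction behind such substitution matrices can be sketched in a few lines; the symmetric-count convention and the scale factor of 2 (half-bit units) below are illustrative assumptions, not the paper's exact SCOP-based procedure.

```python
import numpy as np

def log_odds_matrix(pair_counts, scale=2.0):
    # pair_counts: symmetric matrix of aligned residue-pair counts.
    # s(a,b) = round(scale * log2(q_ab / (p_a * p_b))): observed pair
    # frequency versus the frequency expected by chance alone.
    q = pair_counts / pair_counts.sum()
    p = q.sum(axis=1)                     # marginal residue frequencies
    with np.errstate(divide="ignore"):
        s = scale * np.log2(q / np.outer(p, p))
    return np.round(s).astype(int)

# Toy two-letter alphabet: identities dominate the aligned pairs,
# so matches score positive and mismatches score negative.
scores = log_odds_matrix(np.array([[90.0, 10.0], [10.0, 90.0]]))
```

Structure-based matrices differ only in where `pair_counts` comes from: pairs are tallied from structural alignments rather than sequence alignments.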
Novel sustained-release dosage forms of proteins using polyglycerol esters of fatty acids.
Yamagata, Y; Iga, K; Ogawa, Y
2000-02-03
In order to develop a novel delivery system for proteins based on polyglycerol esters of fatty acids (PGEFs), we studied a model system using interferon-alpha (IFN-alpha) as the test protein. A cylindrical matrix was prepared by a heat extrusion technique using a lyophilized powder of the protein and 11 different types of synthetic PGEFs, which varied in degree of glycerol polymerization (di- and tetra-), chain length of fatty acids (myristate, palmitate and stearate) and degree of fatty acid esterification (mono-, di- and tri-). In an in-vitro release study using an enzyme-linked immunosorbent assay (ELISA) as the detection method, the matrices prepared from a monoglyceride (used for comparison) and from diglycerol esters exhibited a biphasic release pattern with a large initial burst followed by slow release. In contrast, the matrices prepared from tetraglycerol esters showed a steady rate of release without a large initial burst. In an in vivo release study, initial bursts of IFN-alpha release were also dramatically reduced when the matrices were prepared from the tetraglycerol esters of palmitate and stearate, and the mean residence time (MRT) of IFN-alpha was prolonged, whereas the matrices prepared from monoglyceride and from diglycerol esters showed large initial bursts of IFN-alpha release. Since the release rates from the matrices prepared from the tetraglycerol esters of palmitate and stearate were governed by Jander's equation modified for a cylindrical matrix, the release from those matrices was concluded to be a diffusion-controlled process. The bioavailability of IFN-alpha after implantation of the matrix formulation prepared using all types of PGEFs, except for tetraglycerol triesters, was almost equivalent to that after injection of IFN-alpha solution; consequently, IFN-alpha in these matrices appears to remain stable during the release period.
Asymmetric correlation matrices: an analysis of financial data
NASA Astrophysics Data System (ADS)
Livan, G.; Rebecchi, L.
2012-06-01
We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and lend themselves to extending the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between the two markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non-trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.
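The plug-in procedure can be sketched directly: threshold the sample correlation matrix at a level of order sqrt(log p / n), then evaluate the functional on the thresholded estimate. The constant 2.0 below is an illustrative tuning choice, not the paper's.

```python
import numpy as np

def thresholded_frobenius(X, tau=None):
    # Plug-in estimate of the Frobenius norm of a sparse correlation
    # matrix: sample correlations below a threshold of order
    # sqrt(log p / n) are zeroed before evaluating the functional.
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    if tau is None:
        tau = 2.0 * np.sqrt(np.log(p) / n)   # illustrative constant
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)
    np.fill_diagonal(R_thr, 1.0)
    return np.linalg.norm(R_thr, "fro")

# For independent variables the thresholded estimate is (nearly) the
# identity, so the estimated norm is close to sqrt(p).
rng = np.random.default_rng(0)
val = thresholded_frobenius(rng.standard_normal((2000, 10)))
```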
Weighted network analysis of high-frequency cross-correlation measures
NASA Astrophysics Data System (ADS)
Iori, Giulia; Precup, Ovidiu V.
2007-03-01
In this paper we implement a Fourier method to estimate high-frequency correlation matrices from small data sets. The Fourier estimates are shown to be considerably less noisy than the standard Pearson correlation measures and thus capable of detecting subtle changes in correlation matrices with just a month of data. The evolution of correlation at different time scales is analyzed from the full correlation matrix and its minimum spanning tree representation. The analysis is performed by implementing measures from the theory of random weighted networks.
The wasteland of random supergravities
NASA Astrophysics Data System (ADS)
Marsh, David; McAllister, Liam; Wrase, Timm
2012-03-01
We show that in a general N = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(-cN^p), with c, p constants. For generic critical points we find p ≈ 1.5, while for approximately supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
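The Wigner-plus-two-Wisharts Hessian model can be sampled directly; a small numpy sketch (the normalizations below are illustrative conventions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 300
# Wigner (GOE-like) matrix, spectrum supported on roughly [-2, 2]
G = rng.standard_normal((N, N))
wigner = (G + G.T) / np.sqrt(2.0 * N)
# Two Wishart matrices, positive semi-definite by construction
A = rng.standard_normal((N, N)) / np.sqrt(N)
B = rng.standard_normal((N, N)) / np.sqrt(N)
hessian = wigner + A @ A.T + B @ B.T
eigs = np.linalg.eigvalsh(hessian)
frac_negative = float(np.mean(eigs < 0.0))
```

Sampling many such Hessians gives an empirical handle on how rarely all eigenvalues come out positive, the quantity whose N-scaling the paper derives analytically via the Tracy-Widom law.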
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summary: Program title: TRQS. Catalogue identifier: AEKA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 7924. No. of bytes in distributed program, including test data, etc.: 88 651. Distribution format: tar.gz. Programming language: Mathematica, C. Computer: requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a recent version of Mathematica. Operating system: any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: case dependent. Classification: 4.15. Nature of problem: generation of random density matrices. Solution method: use of a physical quantum random number generator. Running time: generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
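The Hilbert-Schmidt-style generation of random density matrices that such packages implement reduces to a few lines; in this sketch an ordinary pseudorandom generator stands in for the Quantis hardware source (an assumption for illustration only).

```python
import numpy as np

def random_density_matrix(d, rng):
    # rho = G G† / tr(G G†) with G a complex Ginibre matrix:
    # Hermitian, positive semi-definite, unit trace by construction.
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rho = random_density_matrix(4, np.random.default_rng(7))
```

Swapping the randomness source for true quantum random numbers only changes where the Gaussian samples come from; the algebra is identical.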
Complexity Characteristics of Currency Networks
NASA Astrophysics Data System (ADS)
Gorski, A. Z.; Drozdz, S.; Kwapien, J.; Oswiecimka, P.
2006-11-01
A large set of daily FOREX time series is analyzed. The corresponding correlation matrices (CM) are constructed for USD, EUR and PLN used as the base currencies. The triangle rule is interpreted as a constraint reducing the number of independent returns. The CM spectrum is computed and compared with the cases of shuffled currencies and a fictitious random currency taken as a base currency. The Minimal Spanning Tree (MST) graphs are calculated and clustering effects for strong currencies are found. It is shown that for MSTs the node rank has power-like, scale-free behavior. Finally, the scaling exponents are evaluated and found in the range analogous to those identified recently for various complex networks.
Fidelity decay in interacting two-level boson systems: Freezing and revivals
NASA Astrophysics Data System (ADS)
Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.
2011-05-01
We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, considering the reference or unperturbed Hamiltonian as the one-body terms and the diagonal part of the k-body embedded ensemble of random matrices, and the perturbation as the residual off-diagonal part of the interaction. We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory to second order in the perturbation strength and demonstrate that it displays the freeze of the fidelity. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time tH. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze of fidelity is an integer fraction of tH, thus relating the period of the revivals with the range of the interaction k of the perturbing terms. Numerical calculations confirm the analytical results.
Chaos and random matrices in supersymmetric SYK
NASA Astrophysics Data System (ADS)
Hunter-Jones, Nicholas; Liu, Junyu
2018-05-01
We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.
Applications of multiple-constraint matrix updates to the optimal control of large structures
NASA Technical Reports Server (NTRS)
Smith, S. W.; Walcott, B. L.
1992-01-01
Low-authority control or vibration suppression in large, flexible space structures can be formulated as a linear feedback control problem requiring computation of displacement and velocity feedback gain matrices. To ensure stability in the uncontrolled modes, these gain matrices must be symmetric and positive definite. In this paper, efficient computation of symmetric, positive-definite feedback gain matrices is accomplished through the use of multiple-constraint matrix update techniques originally developed for structural identification applications. Two systems were used to illustrate the application: a simple spring-mass system and a planar truss. From these demonstrations, use of this multiple-constraint technique is seen to provide a straightforward approach for computing the low-authority gains.
Use of job-exposure matrices to estimate occupational exposure to pesticides: A review.
Carles, Camille; Bouvier, Ghislaine; Lebailly, Pierre; Baldi, Isabelle
2017-03-01
The health effects of pesticides have been extensively studied in epidemiology, mainly in agricultural populations. However, pesticide exposure assessment remains a key methodological issue for epidemiological studies. Besides self-reported information, expert assessment or metrology, job-exposure matrices still appear to be an interesting tool. We reviewed all existing matrices assessing occupational exposure to pesticides in epidemiological studies and described the exposure parameters they included. We identified two types of matrices, (i) generic ones that are generally used in case-control studies and document broad categories of pesticides in a large range of jobs, and (ii) specific matrices, developed for use in agricultural cohorts, that generally provide exposure metrics at the active ingredient level. The various applications of these matrices in epidemiological studies have proven that they are valuable tools to assess pesticide exposure. Specific matrices are particularly promising for use in agricultural cohorts. However, results obtained with matrices have rarely been compared with those obtained with other tools. In addition, the external validity of the given estimates has not been adequately discussed. Yet, matrices would help in reducing misclassification and in quantifying cumulated exposures, to improve knowledge about the chronic health effects of pesticides.
Thirty Years of Nonparametric Item Response Theory.
ERIC Educational Resources Information Center
Molenaar, Ivo W.
2001-01-01
Discusses relationships between a mathematical measurement model and its real-world applications. Makes a distinction between large-scale data matrices commonly found in educational measurement and smaller matrices found in attitude and personality measurement. Also evaluates nonparametric methods for estimating item response functions and…
Deflation as a method of variance reduction for estimating the trace of a matrix inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
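The bare Hutchinson estimator and the effect of exact deflation can be sketched densely for the Hermitian case (projected Rademacher probes plus an exact trace over the deflated block); the paper itself deflates singular triplets of large sparse non-Hermitian lattice operators, so this is only the core idea:

```python
import numpy as np

def hutchinson_trace_inv(A, n_samples, rng, V=None):
    # Monte Carlo (Hutchinson) estimate of tr(A^{-1}) for Hermitian A.
    # With deflation, the contribution of the eigenvector block V is
    # computed exactly and the Rademacher probes are projected onto the
    # orthogonal complement, removing the large near-null contributions
    # from the stochastic part.
    n = A.shape[0]
    exact = 0.0
    P = np.eye(n)
    if V is not None:
        exact = np.trace(V.T @ np.linalg.solve(A, V))
        P = P - V @ V.T
    acc = 0.0
    for _ in range(n_samples):
        z = P @ rng.choice([-1.0, 1.0], size=n)
        acc += z @ np.linalg.solve(A, z)
    return exact + acc / n_samples

rng = np.random.default_rng(5)
B = rng.standard_normal((20, 20))
A = B @ B.T + 20.0 * np.eye(20)            # symmetric positive definite
w, Q = np.linalg.eigh(A)
est = hutchinson_trace_inv(A, 50, rng, V=Q[:, :4])  # deflate 4 lowest modes
```

Deflating the full eigenbasis reduces the stochastic part to zero and recovers the exact trace, which makes a convenient sanity check.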
NASA Astrophysics Data System (ADS)
Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan
2013-09-01
In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson’s correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we mainly investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results on the cross-correlations in the US stock market, which differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful to risk management and optimal portfolio selection, especially to the diversity of the asset portfolio. It would be interesting and meaningful future work to find the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient, because it does not obey the Marčenko-Pastur distribution.
Rizvi, Mohd Suhail; Pal, Anupam
2014-09-01
Fibrous matrices are widely used as scaffolds for the regeneration of load-bearing tissues due to their structural and mechanical similarities with the fibrous components of the extracellular matrix. These scaffolds not only provide the appropriate microenvironment for the residing cells but also act as a medium for the transmission of the mechanical stimuli, essential for tissue regeneration, from the macroscopic scale of the scaffolds to the microscopic scale of the cells. The requirement of mechanical loading for tissue regeneration requires the fibrous scaffolds to be able to sustain complex three-dimensional mechanical loading conditions. In order to gain insight into the mechanical behavior of fibrous matrices under large amounts of elongation as well as shear, a statistical model has been formulated to study the macroscopic mechanical behavior of the electrospun fibrous matrix and the transmission of mechanical stimuli from scaffolds to cells via the constituting fibers. The study establishes the load-deformation relationships for the fibrous matrices for different structural parameters. It also quantifies the changes in the fiber arrangement and the tension generated in the fibers with the deformation of the matrix. The model reveals that the tension generated in the fibers on matrix deformation is not homogeneous, and hence cells located in different regions of the fibrous scaffold might experience different mechanical stimuli. The mechanical response of fibrous matrices was also found to be dependent on the aspect ratio of the matrix. Therefore, the model establishes a structure-mechanics interdependence of the fibrous matrices under large deformation, which can be utilized in identifying the appropriate structure and external mechanical loading conditions for the regeneration of load-bearing tissues. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multiscale Modeling of Thermal Conductivity of Polymer/Carbon Nanocomposites
NASA Technical Reports Server (NTRS)
Clancy, Thomas C.; Frankland, Sarah-Jane V.; Hinkley, Jeffrey A.; Gates, Thomas S.
2010-01-01
Molecular dynamics simulation was used to estimate the interfacial thermal (Kapitza) resistance between nanoparticles and amorphous and crystalline polymer matrices. Bulk thermal conductivities of the nanocomposites were then estimated using an established effective medium approach. To study functionalization, oligomeric ethylene-vinyl alcohol copolymers were chemically bonded to a single-wall carbon nanotube. The results, in a poly(ethylene-vinyl acetate) matrix, are similar to those obtained previously for grafted linear hydrocarbon chains. To study the effect of noncovalent functionalization, two types of polyethylene matrices, aligned (extended-chain crystalline) and amorphous (random coils), were modeled. Both matrices produced the same interfacial thermal resistance values. Finally, functionalization of edges and faces of plate-like graphite nanoparticles was found to be only modestly effective in reducing the interfacial thermal resistance and improving the composite thermal conductivity.
Lyapunov exponents for one-dimensional aperiodic photonic bandgap structures
NASA Astrophysics Data System (ADS)
Kissel, Glen J.
2011-10-01
Existing in the "gray area" between perfectly periodic and purely randomized photonic bandgap structures are the so-called aperiodic structures whose layers are chosen according to some deterministic rule. We consider here a one-dimensional photonic bandgap structure, a quarter-wave stack, with the layer thickness of one of the bilayers subject to being either thin or thick according to five deterministic sequence rules and binary random selection. To produce these aperiodic structures we examine the following sequences: Fibonacci, Thue-Morse, period doubling, Rudin-Shapiro, as well as the triadic Cantor sequence. We model these structures numerically with a long chain (approximately 5,000,000) of transfer matrices, and then use the reliable algorithm of Wolf to calculate the (upper) Lyapunov exponent for the long product of matrices. The Lyapunov exponent is the statistically well-behaved variable used to characterize the Anderson localization effect (exponential confinement) when the layers are randomized, so its calculation allows us to more precisely compare the purely randomized structure with its aperiodic counterparts. It is found that the aperiodic photonic systems show much fine structure in their Lyapunov exponents as a function of frequency, and, in a number of cases, the exponents are quite obviously fractal.
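The transfer-matrix Lyapunov-exponent calculation described in this abstract can be sketched as follows. This is illustrative only: the wavenumbers, layer thicknesses, and the simple vector-renormalization form of Wolf's algorithm are assumptions, not values taken from the paper.

```python
import numpy as np

def lyapunov_exponent(matrices):
    """Upper Lyapunov exponent of a product of 2x2 transfer matrices,
    estimated by propagating a vector and renormalizing at each step
    (the same idea as Wolf's algorithm)."""
    v = np.array([1.0, 0.0])
    log_sum = 0.0
    for M in matrices:
        v = M @ v
        norm = np.linalg.norm(v)
        log_sum += np.log(norm)
        v /= norm
    return log_sum / len(matrices)

def layer_matrix(k, d):
    """Transfer matrix of a homogeneous layer for u'' + k^2 u = 0,
    acting on the vector (u, u')."""
    return np.array([[np.cos(k * d), np.sin(k * d) / k],
                     [-k * np.sin(k * d), np.cos(k * d)]])

# Binary random bilayer stack: the thickness of the first layer in each
# cell is thin or thick at random (all parameters are illustrative).
rng = np.random.default_rng(1)
cells = [layer_matrix(2.0, rng.choice([0.8, 1.2])) @ layer_matrix(1.0, 0.5)
         for _ in range(100_000)]
gamma = lyapunov_exponent(cells)
print(gamma)  # positive: exponential (Anderson-like) localization
```

Sweeping the wavenumber and swapping the random thickness choice for a Fibonacci or Thue-Morse rule would reproduce the kind of frequency-dependent comparison the abstract describes.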
A Higher Order Iterative Method for Computing the Drazin Inverse
Soleymani, F.; Stanimirović, Predrag S.
2013-01-01
A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper. PMID:24222747
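The abstract does not state the paper's specific higher-order iteration; the classical Newton-Schulz scheme below illustrates the family of matrix-inverse iterations it builds on. A minimal sketch, with an assumed standard starting guess and tolerance.

```python
import numpy as np

def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
    """Classical Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k),
    quadratically convergent when ||I - A X_0|| < 1."""
    n = A.shape[0]
    # A standard convergent starting guess: X0 = A^T / (||A||_1 ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(max_iter):
        R = I - A @ X
        if np.linalg.norm(R) < tol:
            break
        X = X @ (I + R)  # algebraically equal to X (2I - A X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_schulz_inverse(A)
print(np.allclose(X, np.linalg.inv(A)))  # True
```

Higher-order variants keep more terms of the residual series, e.g. X(I + R + R @ R), trading more multiplications per step for a higher convergence order.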
Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model
NASA Astrophysics Data System (ADS)
Margarint, Vlad
2018-06-01
We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x - y| is less than the band width W, and zero otherwise. We strengthen previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
The algebraic theory of latent projectors in lambda matrices
NASA Technical Reports Server (NTRS)
Denman, E. D.; Leyva-Ramos, J.; Jeon, G. J.
1981-01-01
Multivariable systems such as a finite-element model of vibrating structures, control systems, and large-scale systems are often formulated in terms of differential equations which give rise to lambda matrices. The present investigation is concerned with the formulation of the algebraic theory of lambda matrices and the relationship of latent roots, latent vectors, and latent projectors to the eigenvalues, eigenvectors, and eigenprojectors of the companion form. The chain rule for latent projectors and eigenprojectors for repeated latent roots or eigenvalues is given.
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
A time-series approach to dynamical systems from classical and quantum worlds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fossion, Ruben
2014-01-08
This contribution discusses some recent applications of time-series analysis in Random Matrix Theory (RMT), and applications of RMT in the statistical analysis of eigenspectra of correlation matrices of multivariate time series.
Dynamical Localization for Unitary Anderson Models
NASA Astrophysics Data System (ADS)
Hamza, Eman; Joye, Alain; Stolz, Günter
2009-11-01
This paper establishes dynamical localization properties of certain families of unitary random operators on the d-dimensional lattice in various regimes. These operators are generalizations of one-dimensional physical models of quantum transport and draw their name from the analogy with the discrete Anderson model of solid state physics. They consist in a product of a deterministic unitary operator and a random unitary operator. The deterministic operator has a band structure, is absolutely continuous and plays the role of the discrete Laplacian. The random operator is diagonal with elements given by i.i.d. random phases distributed according to some absolutely continuous measure and plays the role of the random potential. In dimension one, these operators belong to the family of CMV-matrices in the theory of orthogonal polynomials on the unit circle. We implement the method of Aizenman-Molchanov to prove exponential decay of the fractional moments of the Green function for the unitary Anderson model in the following three regimes: In any dimension, throughout the spectrum at large disorder and near the band edges at arbitrary disorder and, in dimension one, throughout the spectrum at arbitrary disorder. We also prove that exponential decay of fractional moments of the Green function implies dynamical localization, which in turn implies spectral localization. These results complete the analogy with the self-adjoint case where dynamical localization is known to be true in the same three regimes.
Fast Kalman Filter for Random Walk Forecast model
NASA Astrophysics Data System (ADS)
Saibaba, A.; Kitanidis, P. K.
2013-12-01
Kalman filtering is a fundamental tool in statistical time series analysis to understand the dynamics of large systems for which limited, noisy observations are available. However, standard implementations of the Kalman filter are prohibitive because they require O(N^2) in memory and O(N^3) in computational cost, where N is the dimension of the state variable. In this work, we focus our attention on the random walk forecast model, which assumes the state transition matrix to be the identity matrix. This model is frequently adopted when the data is acquired at a timescale that is faster than the dynamics of the state variables and there is considerable uncertainty as to the physics governing the state evolution. We derive an efficient representation for the a priori and a posteriori estimate covariance matrices as a weighted sum of two contributions - the process noise covariance matrix and a low rank term which contains eigenvectors from a generalized eigenvalue problem (GEP), which combines information from the noise covariance matrix and the data. We describe an efficient algorithm to update the weights of the above terms and the computation of eigenmodes of the GEP. The resulting algorithm for the Kalman filter with random walk forecast model scales as O(N) or O(N log N), both in memory and computational cost. This opens up the possibility of real-time adaptive experimental design and optimal control in systems of much larger dimension than was previously feasible. For a small number of measurements (~300-400), this procedure can be made numerically exact. However, as the number of measurements increases, for several choices of measurement operators and noise covariance matrices, the spectrum of the GEP decays rapidly and we are justified in only retaining the dominant eigenmodes. We discuss tradeoffs between accuracy and computational cost.
The resulting algorithms are applied to an example application from ray-based travel time tomography.
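The O(N^3) baseline that the paper's low-rank representation accelerates is the standard Kalman filter with identity state transition. A minimal sketch follows; the dimensions, noise levels, and measurement operator are made up for illustration and are not from the paper.

```python
import numpy as np

def kalman_step_random_walk(x, P, y, H, Q, R):
    """One predict/update cycle of the standard Kalman filter under the
    random walk forecast model (state transition matrix = identity)."""
    P_prior = P + Q                       # predict: x unchanged, P grows by Q
    S = H @ P_prior @ H.T + R             # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_post = x + K @ (y - H @ x)
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post

# Estimate a fixed 2-component state from noisy scalar measurements.
rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
H = np.array([[1.0, 0.5]])
truth = np.array([1.0, -1.0])
for _ in range(200):
    y = H @ truth + rng.normal(0.0, np.sqrt(0.1), size=1)
    x, P = kalman_step_random_walk(x, P, y, H, Q, R)
print(H @ x)  # tracks the observed combination H @ truth = [0.5]
```

The paper's contribution replaces the dense P updates above with a process-noise term plus a low-rank correction, which this sketch deliberately does not attempt to reproduce.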
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
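A generic randomized interpolative decomposition of a matrix, the building block to which the abstract reduces tensor ID, might look as follows. This is a sketch under assumed details: the sketch size, the greedy pivoting rule, and the least-squares coefficient fit are not taken from the paper's CTD-ID algorithm.

```python
import numpy as np

def interpolative_decomposition(A, k, oversample=8, seed=0):
    """Randomized column ID: choose k skeleton columns and a coefficient
    matrix T with A ~= A[:, cols] @ T. Columns are selected on a small
    random sketch of A."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((k + oversample, A.shape[0])) @ A  # row sketch
    # Greedy column-pivoted Gram-Schmidt on the sketch picks skeleton columns.
    W = Y.copy()
    cols = []
    for _ in range(k):
        j = int(np.argmax(np.sum(W * W, axis=0)))
        cols.append(j)
        q = W[:, j] / np.linalg.norm(W[:, j])
        W -= np.outer(q, q @ W)  # deflate the chosen direction
    # Express every column of A in the skeleton basis by least squares.
    T, *_ = np.linalg.lstsq(A[:, cols], A, rcond=None)
    return cols, T

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
cols, T = interpolative_decomposition(A, 3)
err = np.linalg.norm(A - A[:, cols] @ T) / np.linalg.norm(A)
print(err)  # near machine precision for an exactly rank-3 matrix
```

In the tensor setting the columns of A are replaced by the terms of the CTD, so the selected indices pick out a near-optimal subset of terms.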
Pineda-Vadillo, Carlos; Nau, Françoise; Guerin-Dubiard, Catherin; Jardin, Julien; Lechevalier, Valérie; Sanz-Buenhombre, Marisa; Guadarrama, Alberto; Tóth, Tamás; Csavajda, Éva; Hingyi, Hajnalka; Karakaya, Sibel; Sibakov, Juhani; Capozzi, Francesco; Bordoni, Alessandra; Dupont, Didier
2017-01-01
The aim of the present study was to understand to what extent the inclusion of anthocyanins into dairy and egg matrices could affect their stability after processing and their release and solubility during digestion. For this purpose, individual and total anthocyanin content of four different enriched matrices, namely custard dessert, milkshake, pancake and omelette, was determined after their manufacturing and during in vitro digestion. Results showed that anthocyanin recovery after processing largely varied among matrices, mainly due to the treatments applied and the interactions developed with other food components. In terms of digestion, the present study showed that the inclusion of anthocyanins into food matrices could be an effective way to protect them against intestinal degradation, and also the incorporation of anthocyanins into matrices with different compositions and structures could represent an interesting and effective method to control the delivery of anthocyanins within the different compartments of the digestive tract. Copyright © 2016 Elsevier Ltd. All rights reserved.
Siren, J; Ovaskainen, O; Merilä, J
2017-10-01
The genetic variance-covariance matrix (G) is a quantity of central importance in evolutionary biology due to its influence on the rate and direction of multivariate evolution. However, the predictive power of empirically estimated G-matrices is limited for two reasons. First, phenotypes are high-dimensional, whereas traditional statistical methods are tuned to estimate and analyse low-dimensional matrices. Second, the stability of G to environmental effects and over time remains poorly understood. Using Bayesian sparse factor analysis (BSFG) designed to estimate high-dimensional G-matrices, we analysed levels of variation and covariation in 10,527 expressed genes in a large (n = 563) half-sib breeding design of three-spined sticklebacks subject to two temperature treatments. We found significant differences in the structure of G between the treatments: heritabilities and evolvabilities were higher in the warm than in the low-temperature treatment, suggesting greater and faster opportunity to evolve in warm (stressful) conditions. Furthermore, comparison of G and its phenotypic equivalent P revealed the latter is a poor substitute for the former. Most strikingly, the results suggest that the expected impact of G on evolvability-as well as the similarity among G-matrices-may depend strongly on the number of traits included in the analyses. In our results, the inclusion of only a few traits in the analyses leads to underestimation of the differences between the G-matrices and their predicted impacts on evolution. While the results highlight the challenges involved in estimating G, they also illustrate that by enabling the estimation of large G-matrices, the BSFG method can improve predicted evolutionary responses to selection. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Olekhno, N. A.; Beltukov, Y. M.
2018-05-01
Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show good agreement with the results of numerical simulations in a wide range of metal filling fractions 0
Kyrpychova, Liubov; Carr, Richard A; Martinek, Petr; Vanecek, Tomas; Perret, Raul; Chottová-Dvořáková, Magdalena; Zamecnik, Michal; Hadravsky, Ladislav; Michal, Michal; Kazakov, Dmitry V
2017-06-01
Basal cell carcinoma (BCC) with matrical differentiation is a fairly rare neoplasm, with about 30 cases documented mainly as isolated case reports. We studied a series of this neoplasm, including cases with an atypical matrical component, a hitherto unreported feature. Lesions coded as BCC with matrical differentiation were reviewed; 22 cases were included. Immunohistochemical studies were performed using antibodies against BerEp4, β-catenin, and epithelial membrane antigen (EMA). Molecular genetic studies using Ion AmpliSeq Cancer Hotspot Panel v2 by massively parallel sequencing on Ion Torrent PGM were performed in 2 cases with an atypical matrical component (1 was previously subjected to microdissection to sample the matrical and BCC areas separately). There were 13 male and 9 female patients, ranging in age from 41 to 89 years. Microscopically, all lesions manifested at least 2 components, a BCC area (follicular germinative differentiation) and areas with matrical differentiation. A BCC component dominated in 14 cases, whereas a matrical component dominated in 4 cases. Matrical differentiation was recognized as matrical/supramatrical cells (n=21), shadow cells (n=21), bright red trichohyaline granules (n=18), and blue-gray corneocytes (n=18). In 2 cases, matrical areas manifested cytologic atypia, and a third case exhibited an infiltrative growth pattern, with the tumor metastasizing to a lymph node. BerEP4 labeled the follicular germinative cells, whereas it was markedly reduced or negative in matrical areas. The reverse pattern was seen with β-catenin. EMA was negative in BCC areas but stained a proportion of matrical/supramatrical cells. Genetic studies revealed mutations of the following genes: CTNNB1, KIT, CDKN2A, TP53, SMAD4, ERBB4, and PTCH1, with some differences between the matrical and BCC components. It is concluded that matrical differentiation in BCC in most cases occurs as multiple foci. Rare neoplasms manifest atypia in the matrical areas. 
Immunohistochemical analysis for BerEP4, EMA, and β-catenin can be helpful in limited biopsy specimens. From a molecular biological perspective, BCC and matrical components appear to share some of the gene mutations but differ in others; this observation must be validated in a large series.
NASA Astrophysics Data System (ADS)
Kravvaritis, Christos; Mitrouli, Marilena
2009-02-01
This paper studies the possibility to calculate efficiently compounds of real matrices which have a special form or structure. The usefulness of such an effort lies in the fact that the computation of compound matrices, which is generally noneffective due to its high complexity, is encountered in several applications. A new approach for computing the Singular Value Decompositions (SVD's) of the compounds of a matrix is proposed by establishing the equality (up to a permutation) between the compounds of the SVD of a matrix and the SVD's of the compounds of the matrix. The superiority of the new idea over the standard method is demonstrated. Similar approaches with some limitations can be adopted for other matrix factorizations, too. Furthermore, formulas for the n - 1 compounds of Hadamard matrices are derived, which dodge the strenuous computations of the respective numerous large determinants. Finally, a combinatorial counting technique for finding the compounds of diagonal matrices is illustrated.
Three-dimensional polarization algebra.
Sheppard, Colin J R; Castello, Marco; Diaspro, Alberto
2016-10-01
If light is focused or collected with a high numerical aperture lens, as may occur in imaging and optical encryption applications, polarization should be considered in three dimensions (3D). The matrix algebra of polarization behavior in 3D is discussed. It is useful to convert between the Mueller matrix and two different Hermitian matrices, representing an optical material or system, which are in the literature. Explicit transformation matrices for converting the column vector form of these different matrices are extended to the 3D case, where they are large (81×81) but can be generated using simple rules. It is found that there is some advantage in using a generalization of the Chandrasekhar phase matrix treatment, rather than that based on Gell-Mann matrices, as the resultant matrices are of simpler form and reduce to the two-dimensional case more easily. Explicit expressions are given for 3D complex field components in terms of Chandrasekhar-Stokes parameters.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
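The generalized Gaussian elimination on partitioned matrices that the abstract describes can be illustrated by a recursive 2x2 block inversion via the Schur complement. This is illustrative only: the actual SOLVE program works on out-of-core blocks to save memory, which this in-core sketch omits.

```python
import numpy as np

def partitioned_inverse(A):
    """Recursive 2x2 block inversion of a symmetric positive definite
    matrix using the Schur complement."""
    n = A.shape[0]
    if n == 1:
        return np.array([[1.0 / A[0, 0]]])
    m = n // 2
    A11, A12 = A[:m, :m], A[:m, m:]
    A21, A22 = A[m:, :m], A[m:, m:]
    A11_inv = partitioned_inverse(A11)
    S = A22 - A21 @ A11_inv @ A12  # Schur complement, also SPD
    S_inv = partitioned_inverse(S)
    B12 = -A11_inv @ A12 @ S_inv
    B11 = A11_inv + A11_inv @ A12 @ S_inv @ A21 @ A11_inv
    return np.block([[B11, B12], [B12.T, S_inv]])

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)  # symmetric positive definite test matrix
print(np.allclose(partitioned_inverse(A), np.linalg.inv(A)))  # True
```

Because each recursion level only ever needs the blocks currently being combined, a paged variant can invert matrices far larger than available core, which is the point of the partitioned approach.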
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta
2017-06-01
In this paper, we propose a new technique for double image encryption in the Fresnel domain using wavelet transform (WT), gyrator transform (GT) and spiral phase masks (SPMs). The two input images are first phase encoded and each of them is then multiplied with SPMs and Fresnel propagated with distances d1 and d2, respectively. The single-level discrete WT is applied to the Fresnel propagated complex images to decompose each into sub-band matrices, i.e., LL, HL, LH and HH. Further, the sub-band matrices of the two complex images are interchanged after modulation with random phase masks (RPMs) and subjected to inverse discrete WT. The resulting images are then both added and subtracted to get intermediate images which are further Fresnel propagated with distances d3 and d4, respectively. These outputs are finally gyrator transformed with the same angle α to get the encrypted images. The proposed technique provides enhanced security in terms of a large set of security keys. The sensitivity of security keys such as the SPM parameters, the GT angle α, and the Fresnel propagation distances is investigated. The robustness of the proposed technique against noise and occlusion attacks is also analysed. The numerical simulation results are shown in support of the validity and effectiveness of the proposed technique.
A transfer matrix approach to vibration localization in mistuned blade assemblies
NASA Technical Reports Server (NTRS)
Ottarson, Gisli; Pierre, Christophe
1993-01-01
A study of mode localization in mistuned bladed disks is performed using transfer matrices. The transfer matrix approach yields the free response of a general, mono-coupled, perfectly cyclic assembly in closed form. A mistuned structure is represented by random transfer matrices, and the expansion of these matrices in terms of the small mistuning parameter leads to the definition of a measure of sensitivity to mistuning. An approximation of the localization factor, the spatially averaged rate of exponential attenuation per blade-disk sector, is obtained through perturbation techniques in the limits of high and low sensitivity. The methodology is applied to a common model of a bladed disk and the results verified by Monte Carlo simulations. The easily calculated sensitivity measure may prove to be a valuable design tool due to its system-independent quantification of mistuning effects such as mode localization.
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
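A minimal sketch of the estimator family discussed in this abstract, assuming NaN-coded missingness: a generalized sample covariance averaged over pairwise-complete observations, followed by hard thresholding for the sparse case. The paper's exact estimator and tuning may differ.

```python
import numpy as np

def pairwise_covariance(X):
    """Generalized sample covariance for data with NaN-coded missing
    entries: each (j, k) entry is averaged over the samples where both
    coordinates are observed (valid under missing-completely-at-random)."""
    obs = ~np.isnan(X)
    means = np.where(obs, X, 0.0).sum(axis=0) / obs.sum(axis=0)
    Xc = np.where(obs, X - means, 0.0)
    counts = obs.T.astype(float) @ obs.astype(float)  # pairwise sample sizes
    return (Xc.T @ Xc) / np.maximum(counts, 1.0)

def hard_threshold(S, lam):
    """Sparse covariance estimation by hard thresholding (diagonal kept)."""
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))       # true covariance = identity
X[rng.random(X.shape) < 0.2] = np.nan   # 20% missing completely at random
S = hard_threshold(pairwise_covariance(X), 0.15)
print(np.round(S, 2))
```

The theory in the paper quantifies how the effective pairwise sample sizes, rather than the nominal n, drive the minimax rate under the spectral norm.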
A random matrix approach to credit risk.
Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
Lelental, Natalia; Brandner, Sebastian; Kofanova, Olga; Blennow, Kaj; Zetterberg, Henrik; Andreasson, Ulf; Engelborghs, Sebastiaan; Mroczko, Barbara; Gabryelewicz, Tomasz; Teunissen, Charlotte; Mollenhauer, Brit; Parnetti, Lucilla; Chiasserini, Davide; Molinuevo, Jose Luis; Perret-Liaudet, Armand; Verbeek, Marcel M; Andreasen, Niels; Brosseron, Frederic; Bahl, Justyna M C; Herukka, Sanna-Kaisa; Hausner, Lucrezia; Frölich, Lutz; Labonte, Anne; Poirier, Judes; Miller, Anne-Marie; Zilka, Norbert; Kovacech, Branislav; Urbani, Andrea; Suardi, Silvia; Oliveira, Catarina; Baldeiras, Ines; Dubois, Bruno; Rot, Uros; Lehmann, Sylvain; Skinningsrud, Anders; Betsou, Fay; Wiltfang, Jens; Gkatzima, Olymbia; Winblad, Bengt; Buchfelder, Michael; Kornhuber, Johannes; Lewczuk, Piotr
2016-03-01
Assay-vendor independent quality control (QC) samples for neurochemical dementia diagnostics (NDD) biomarkers are so far commercially unavailable. This requires that NDD laboratories prepare their own QC samples, for example by pooling leftover cerebrospinal fluid (CSF) samples. To prepare and test alternative matrices for QC samples that could facilitate intra- and inter-laboratory QC of the NDD biomarkers. Three matrices were validated in this study: (A) human pooled CSF, (B) Aβ peptides spiked into human prediluted plasma, and (C) Aβ peptides spiked into solution of bovine serum albumin in phosphate-buffered saline. All matrices were tested also after supplementation with an antibacterial agent (sodium azide). We analyzed short- and long-term stability of the biomarkers with ELISA and chemiluminescence (Fujirebio Europe, MSD, IBL International), and performed an inter-laboratory variability study. NDD biomarkers turned out to be stable in almost all samples stored at the tested conditions for up to 14 days as well as in samples stored deep-frozen (at -80°C) for up to one year. Sodium azide did not influence biomarker stability. Inter-center variability of the samples sent at room temperature (pooled CSF, freeze-dried CSF, and four artificial matrices) was comparable to the results obtained on deep-frozen samples in other large-scale projects. Our results suggest that it is possible to replace self-made, CSF-based QC samples with large-scale volumes of QC materials prepared with artificial peptides and matrices. This would greatly facilitate intra- and inter-laboratory QC schedules for NDD measurements.
Tek, Cenk; Palmese, Laura B; Krystal, Andrew D; Srihari, Vinod H; DeGeorge, Pamela C; Reutenauer, Erin L; Guloksuz, Sinan
2014-12-01
Insomnia is frequent in schizophrenia and may contribute to cognitive impairment as well as overuse of weight inducing sedative antipsychotics. We investigated the effects of eszopiclone on sleep and cognition for patients with schizophrenia-related insomnia in a double-blind placebo controlled study, followed by a two-week, single-blind placebo phase. Thirty-nine clinically stable outpatients with schizophrenia or schizoaffective disorder and insomnia were randomized to either 3 mg eszopiclone (n=20) or placebo (n=19). Primary outcome measure was change in Insomnia Severity Index (ISI) over 8 weeks. Secondary outcome measure was change in MATRICS Consensus Cognitive Battery (MATRICS). Sleep diaries, psychiatric symptoms, and quality of life were also monitored. ISI significantly improved more in eszopiclone (mean=-10.7, 95% CI=-13.2; -8.2) than in placebo (mean=-6.9, 95% CI=-9.5; -4.3) with a between-group difference of -3.8 (95% CI=-7.5; -0.2). MATRICS score change did not differ between groups. On further analysis there was a significant improvement in the working memory test, letter-number span component of MATRICS (mean=9.8±9.2, z=-2.00, p=0.045) only for subjects with schizophrenia on eszopiclone. There were improvements in sleep diary items in both groups with no between-group differences. Psychiatric symptoms remained stable. Discontinuation rates were similar. Sleep remained improved during the single-blind placebo phase after eszopiclone was stopped, but the working memory improvement in patients with schizophrenia was not durable. Eszopiclone stands as a safe and effective alternative for the treatment of insomnia in patients with schizophrenia. Its effects on cognition require further study. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Moraleda, Joaquín; Segurado, Javier; LLorca, Javier
2009-09-01
The in-plane finite deformation of incompressible fiber-reinforced elastomers was studied using computational micromechanics. Composite microstructure was made up of a random and homogeneous dispersion of aligned rigid fibers within a hyperelastic matrix. Different matrices (Neo-Hookean and Gent), fibers (monodisperse or polydisperse, circular or elliptical section) and reinforcement volume fractions (10-40%) were analyzed through the finite element simulation of a representative volume element of the microstructure. A successive remeshing strategy was employed when necessary to reach the large deformation regime in which the evolution of the microstructure influences the effective properties. The simulations provided for the first time "quasi-exact" results of the in-plane finite deformation for this class of composites, which were used to assess the accuracy of the available homogenization estimates for incompressible hyperelastic composites.
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The mathematics of the technique is presented in addition to the results of computer simulations conducted to demonstrate the prediction of the response of the system and the random forcing function initially introduced to excite the system.
The factor structure of the Alcohol Use Disorders Identification Test (AUDIT).
Doyle, Suzanne R; Donovan, Dennis M; Kivlahan, Daniel R
2007-05-01
Past research assessing the factor structure of the Alcohol Use Disorders Identification Test (AUDIT) with various exploratory and confirmatory factor analytic techniques has identified one-, two-, and three-factor solutions. Because different factor analytic procedures may result in dissimilar findings, we examined the factor structure of the AUDIT using the same factor analytic technique on two new large clinical samples and on archival data from six samples studied in previous reports. Responses to the AUDIT were obtained from participants who met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), criteria for alcohol dependence in two large randomized clinical trials: the COMBINE (Combining Medications and Behavioral Interventions) Study (N = 1,337; 69% men) and Project MATCH (Matching Alcoholism Treatments to Client Heterogeneity; N = 1,711; 76% men). Supplementary analyses involved six correlation matrices of AUDIT data obtained from five previously published articles. Confirmatory factor analyses based on one-, two-, and three-factor models were conducted on the eight correlation matrices to assess the factor structure of the AUDIT. Across samples, analyses supported a correlated, two-factor solution representing alcohol consumption and alcohol-related consequences. The three-factor solution fit the data equally well, but two factors (alcohol dependence and harmful alcohol use) were highly correlated. The one-factor solution did not provide a good fit to the data. These findings support a two-factor solution for the AUDIT (alcohol consumption and alcohol-related consequences). The results contradict the original three-factor design of the AUDIT and the prevalent use of the AUDIT as a one-factor screening instrument with a single cutoff score.
Average fidelity between random quantum states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5
2005-03-01
We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.
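The quantity averaged above can be checked numerically. The sketch below is an editorial illustration, not the paper's derivation: it draws density matrices from the Hilbert-Schmidt measure via complex Ginibre matrices and evaluates the Uhlmann fidelity F(ρ,σ) = (Tr √(√ρ σ √ρ))².

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def random_density_matrix(n, rng):
    """Hilbert-Schmidt random density matrix: rho = G G† / Tr(G G†),
    with G a complex Ginibre matrix."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    m = g @ g.conj().T
    return m / np.trace(m).real

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

rho, sigma = random_density_matrix(4, rng), random_density_matrix(4, rng)
f_same = fidelity(rho, rho)     # ~1 for identical states
f_rand = fidelity(rho, sigma)   # strictly between 0 and 1
```

Averaging `f_rand` over many draws would estimate the mean fidelity studied in the abstract for the Hilbert-Schmidt measure.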
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Yurkin, Maxim A.
2017-01-01
Although the model of randomly oriented nonspherical particles has been used in a great variety of applications of far-field electromagnetic scattering, it has never been defined in strict mathematical terms. In this Letter we use the formalism of Euler rigid-body rotations to clarify the concept of statistically random particle orientations and derive its immediate corollaries in the form of most general mathematical properties of the orientation-averaged extinction and scattering matrices. Our results serve to provide a rigorous mathematical foundation for numerous publications in which the notion of randomly oriented particles and its light-scattering implications have been considered intuitively obvious.
Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel
2018-02-27
Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices, and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by most genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package, lme4qtl, as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl.
Structures and textures of the Murchison and Mighei carbonaceous chondrite matrices
NASA Technical Reports Server (NTRS)
Mackinnon, I. D. R.
1980-01-01
High-resolution transmission electron microscopy has confirmed earlier observations that the character of the Murchison and Mighei fine-grained matrices is complex in mineralogy and texture. Layer structure minerals occur as planar laths, rounded grains, or subhedral grains, and range in size from less than 100 Å to about 1 micrometer. Serpentine-type and brucite-type structures predominate in the CM matrices. The occurrence of Povlen-type chrysotile and a vein of disordered mixed-layer and brucite-type material cutting a large lizardite-type grain suggests that at least some of the matrix materials were formed by alteration of preexisting material.
Efficient similarity-based data clustering by optimal object to cluster reallocation.
Rossignol, Mathias; Lagrange, Mathieu; Cont, Arshia
2018-01-01
We present an iterative flat hard clustering algorithm designed to operate on arbitrary similarity matrices, with the only constraint that these matrices be symmetric. Although functionally very close to kernel k-means, our proposal maximizes average intra-class similarity instead of minimizing squared distance, in order to remain closer to the semantics of similarities. We show that this approach permits relaxing some conditions on usable affinity matrices, such as positive semidefiniteness, as well as opening possibilities for the computational optimization required for large datasets. Systematic evaluation on a variety of data sets shows that, compared with kernel k-means and spectral clustering methods, the proposed approach gives equivalent or better performance while running much faster. Most notably, it significantly reduces memory access, which makes it a good choice for large data collections. Material enabling the reproducibility of the results is made available online.
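The reallocation idea can be sketched in a few lines. The code below is a simplified illustration under stated assumptions (anchor-based initialization, synchronous reassignment), not the authors' released implementation: each object moves to the cluster with the highest average similarity to it.

```python
import numpy as np

def similarity_clustering(S, k, n_iter=50):
    """Reallocation-style hard clustering on a symmetric similarity
    matrix S: each object is reassigned to the cluster with the highest
    average similarity to it (a sketch, not the paper's exact algorithm)."""
    n = S.shape[0]
    # heuristic initialization: k anchor objects spread over the index range
    anchors = np.linspace(0, n - 1, k).astype(int)
    labels = np.argmax(S[:, anchors], axis=1)
    for _ in range(n_iter):
        new = labels.copy()
        for i in range(n):
            # average similarity of object i to each current cluster
            scores = [S[i, labels == c].mean() if np.any(labels == c) else -np.inf
                      for c in range(k)]
            new[i] = int(np.argmax(scores))
        if np.array_equal(new, labels):
            break  # converged: no object wants to move
        labels = new
    return labels

# two planted blocks with high within-block similarity
S = np.full((10, 10), 0.1)
S[:5, :5] = 0.9
S[5:, 5:] = 0.9
np.fill_diagonal(S, 1.0)
labels = similarity_clustering(S, k=2)
```

Note that, unlike spectral clustering, nothing here requires S to be positive semidefinite, which mirrors the relaxation claimed in the abstract.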
On the number of Bose-selected modes in driven-dissipative ideal Bose gases
NASA Astrophysics Data System (ADS)
Schnell, Alexander; Ketzmerick, Roland; Eckardt, André
2018-03-01
In an ideal Bose gas that is driven into a steady state far from thermal equilibrium, a generalized form of Bose condensation can occur. Namely, the single-particle states unambiguously separate into two groups: the group of Bose-selected states, whose occupations increase linearly with the total particle number, and the group of all other states, whose occupations saturate [Phys. Rev. Lett. 111, 240405 (2013), 10.1103/PhysRevLett.111.240405]. However, so far very little is known about how the number of Bose-selected states depends on the properties of the system and its coupling to the environment. The answer to this question is crucial, since systems hosting a single, a few, or an extensive number of Bose-selected states will show rather different behavior. While in the former two scenarios each selected mode acquires a macroscopic occupation, corresponding to (fragmented) Bose condensation, the latter case rather bears resemblance to a high-temperature state of matter. In this paper, we systematically investigate the number of Bose-selected states, considering different classes of the rate matrices that characterize the driven-dissipative ideal Bose gases in the limit of weak system-bath coupling. These include rate matrices with a continuum limit, rate matrices of chaotic driven systems, random rate matrices, and rate matrices resulting from thermal baths that couple to a few observables only.
ERIC Educational Resources Information Center
Balboni, Giulia; Naglieri, Jack A.; Cubelli, Roberto
2010-01-01
The concurrent and predictive validities of the Naglieri Nonverbal Ability Test (NNAT) and Raven's Colored Progressive Matrices (CPM) were investigated in a large group of Italian third- and fifth-grade students with different sociocultural levels evaluated at the beginning and end of the school year. CPM and NNAT scores were related to math and…
Towards rigorous analysis of the Levitov-Mirlin-Evers recursion
NASA Astrophysics Data System (ADS)
Fyodorov, Y. V.; Kupiainen, A.; Webb, C.
2016-12-01
This paper aims to develop a rigorous asymptotic analysis of an approximate renormalization group recursion for the inverse participation ratios P_q of critical power-law random band matrices. The recursion goes back to the work by Mirlin and Evers (2000 Phys. Rev. B 62 7920) and earlier works by Levitov (1990 Phys. Rev. Lett. 64 547; 1999 Ann. Phys. 8 697-706) and aims to describe the ensuing multifractality of the eigenvectors of such matrices. We point out both similarities and dissimilarities between the LME recursion and those appearing in the theory of multiplicative cascades and branching random walks, and show that the methods developed in those fields can be adapted to the present case. In particular, the LME recursion is shown to exhibit a phase transition, which we expect is a freezing transition, where the role of temperature is played by the exponent q. However, the LME recursion has features that make its rigorous analysis considerably harder, and we point out several open problems for further study.
Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling
NASA Astrophysics Data System (ADS)
Čunderlík, Róbert; Vipiana, Francesca
2017-04-01
Boundary integral approaches applied for gravity field modelling have recently been developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of the H-matrices is to approximate the entire system matrix by splitting it into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows memory requirements to be reduced significantly while improving efficiency. The poster presents our preliminary results of implementing the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.
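The compression principle behind H-matrices can be illustrated directly: a matrix block coupling two well-separated point clusters through a smooth kernel has rapidly decaying singular values, so a truncated factorized form stores it accurately at a fraction of the cost. The kernel and geometry below are illustrative choices, not those of the gravity-field solvers in the abstract.

```python
import numpy as np

# Two well-separated 1-D point clusters interacting through a smooth
# kernel; the resulting block is numerically low rank, which is exactly
# what H-matrices exploit for off-diagonal (admissible) blocks.
x = np.linspace(0.0, 1.0, 200)       # source cluster
y = np.linspace(10.0, 11.0, 200)     # well-separated target cluster
block = 1.0 / np.abs(x[:, None] - y[None, :])

u, s, vt = np.linalg.svd(block, full_matrices=False)
rank = int(np.sum(s > s[0] * 1e-10))            # numerical rank
approx = (u[:, :rank] * s[:rank]) @ vt[:rank]   # factorized (low-rank) form

rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
storage_full = block.size                       # dense storage: M*N entries
storage_lr = rank * sum(block.shape)            # factorized: rank*(M+N) entries
```

For this well-separated pair the numerical rank is roughly a dozen, so the factorized form needs orders of magnitude fewer entries than the dense block at near machine-precision accuracy.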
Explicit Lower and Upper Bounds on the Entangled Value of Multiplayer XOR Games
NASA Astrophysics Data System (ADS)
Briët, Jop; Vidick, Thomas
2013-07-01
The study of quantum-mechanical violations of Bell inequalities is motivated by the investigation, and the eventual demonstration, of the nonlocal properties of entanglement. In recent years, Bell inequalities have found a fruitful re-formulation using the language of multiplayer games originating from computer science. This paper studies the nonlocal properties of entanglement in the context of the simplest such games, called XOR games. When there are two players, it is well known that the maximum bias (the advantage over random play) of players using entanglement can be at most a constant times greater than that of classical players. Recently, Pérez-García et al. (Commun. Math. Phys. 279:455, 2008) showed that no such bound holds when there are three or more players: the use of entanglement can provide an unbounded advantage, and scale with the number of questions in the game. Their proof relies on non-trivial results from operator space theory, and gives a non-explicit existence proof, leading to a game with a very large number of questions and only loose control over the local dimension of the players' shared entanglement. We give a new, simple and explicit (though still probabilistic) construction of a family of three-player XOR games which achieve a large quantum-classical gap (QC-gap). This QC-gap is exponentially larger than the one given by Pérez-García et al. in terms of the size of the game, achieving a QC-gap of order √N with N² questions per player. In terms of the dimension of the entangled state required, we achieve the same (optimal) QC-gap of √N for a state of local dimension N per player. Moreover, the optimal entangled strategy is very simple, involving observables defined by tensor products of the Pauli matrices. Additionally, we give the first upper bound on the maximal QC-gap in terms of the number of questions per player, showing that our construction is only quadratically off in that respect.
Our results rely on probabilistic estimates on the norm of random matrices and higher-order tensors which may be of independent interest.
Graph theory approach to the eigenvalue problem of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.; Bainum, P. M.
1981-01-01
Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. Graphic interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
Donoho, David L; Gavish, Matan; Montanari, Andrea
2013-05-21
Let X₀ be an unknown M-by-N matrix. In matrix recovery, one takes n < MN linear measurements y₁,…,y_n of X₀, where y_i = Tr(A_iᵀ X₀) and each A_i is an M-by-N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Tr(A_iᵀ X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n,M,N) = n/(MN), rank fraction ρ = rank(X₀)/min{M,N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ;β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M, N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M-by-N matrix X₀ is to be estimated based on direct noisy measurements Y = X₀ + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y − X||²_F/2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ;β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ;β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M-by-N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N-by-N matrices, of various ranks.
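The denoising scheme in this abstract has a closed-form solution: soft-thresholding the singular values of Y. The sketch below is a minimal illustration with hand-picked sizes and λ (not the paper's optimally tuned value), showing that the thresholded estimate beats the raw noisy observation on a low-rank signal.

```python
import numpy as np

def svt_denoise(Y, lam):
    """Closed-form minimizer of ||Y - X||_F^2 / 2 + lam * ||X||_* :
    soft-threshold the singular values of Y (singular value thresholding)."""
    u, s, vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)
    return (u * s_thr) @ vt

rng = np.random.default_rng(1)
M, N, r = 60, 40, 3
X0 = rng.normal(size=(M, r)) @ rng.normal(size=(r, N))   # rank-3 signal
Y = X0 + 0.1 * rng.normal(size=(M, N))                   # noisy observation

# lam chosen above the noise spectral norm, well below the signal's
# singular values, so noise directions are zeroed and signal kept
Xhat = svt_denoise(Y, lam=2.0)
err_noisy = np.linalg.norm(Y - X0)
err_denoised = np.linalg.norm(Xhat - X0)
```

Because λ exceeds the largest noise singular value, the estimate recovers the correct rank exactly, which is the mechanism behind the minimax curve M(ρ;β) discussed above.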
A scattering model for forested area
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1988-01-01
A forested area is modeled as a volume of randomly oriented and distributed disc-shaped or needle-shaped leaves shading a distribution of branches modeled as randomly oriented finite-length, dielectric cylinders above an irregular soil surface. Since the radii of branches have a wide range of sizes, the model only requires the length of a branch to be large compared with its radius, which may be any size relative to the incident wavelength. In addition, the model also assumes the thickness of a disc-shaped leaf or the radius of a needle-shaped leaf is much smaller than the electromagnetic wavelength. The scattering phase matrices for disc, needle, and cylinder are developed in terms of the scattering amplitudes of the corresponding fields, which are computed by the forward scattering theorem. These quantities, along with the Kirchhoff scattering model for a randomly rough surface, are used in the standard radiative transfer formulation to compute the backscattering coefficient. Numerical illustrations for the backscattering coefficient are given as a function of the shading factor, incidence angle, leaf orientation distribution, branch orientation distribution, and the number density of leaves. Also illustrated are the properties of the extinction coefficient as a function of leaf and branch orientation distributions. Comparisons are made with measured backscattering coefficients from forested areas reported in the literature.
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
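The overdispersion of sample eigenvalues is easy to reproduce numerically: even when the true covariance is the identity, sample eigenvalues spread over the Marchenko-Pastur bulk, with the largest near the upper edge where Tracy-Widom fluctuations live. The dimensions below are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 200                      # samples, traits (illustrative sizes)
X = rng.normal(size=(n, p))           # true covariance = identity: all
                                      # population eigenvalues equal 1
S = X.T @ X / n                       # sample covariance
eig = np.linalg.eigvalsh(S)

# Marchenko-Pastur bulk edges for identity covariance: (1 ± sqrt(p/n))^2.
# Sample eigenvalues fill this interval even though every true
# eigenvalue is exactly 1 -- pure sampling-error overdispersion.
gamma = p / n
lower, upper = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
```

The largest sample eigenvalue sits near `upper`, and it is the fluctuation of that edge eigenvalue that the Tracy-Widom distribution describes, which is why TW scaling is the right yardstick for judging whether a leading genetic eigenvalue exceeds sampling error.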
Effect of Polydispersity on Diffusion in Random Obstacle Matrices
NASA Astrophysics Data System (ADS)
Cho, Hyun Woo; Kwon, Gyemin; Sung, Bong June; Yethiraj, Arun
2012-10-01
The dynamics of tracers in disordered matrices is of interest in a number of diverse areas of physics such as the biophysics of crowding in cells and cell membranes, and the diffusion of fluids in porous media. To a good approximation the matrices can be modeled as a collection of spatially frozen particles. In this Letter, we consider the effect of polydispersity (in size) of the matrix particles on the dynamics of tracers. We study a two dimensional system of hard disks diffusing in a sea of hard disk obstacles, for different values of the polydispersity of the matrix. We find that for a given average size and area fraction, the diffusion of tracers is very sensitive to the polydispersity. We calculate the pore percolation threshold using Apollonius diagrams. The diffusion constant, D, follows a scaling relation D ∼ (φ_c − φ_m)^(μ−β) for all values of the polydispersity, where φ_m is the area fraction and φ_c is the value of φ_m at the percolation threshold.
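Scaling relations of this form are typically extracted by a log-log fit of D against the distance to the threshold. The check below uses synthetic data with illustrative values for the threshold and exponent (not the paper's estimates) to show that the fit recovers the exponent.

```python
import numpy as np

# Synthetic data obeying D = (phi_c - phi_m)^t near the percolation
# threshold; phi_c and t_true are illustrative, not the paper's values.
phi_c, t_true = 0.8, 2.1
phi_m = np.linspace(0.5, 0.78, 30)
D = (phi_c - phi_m) ** t_true

# log D = t * log(phi_c - phi_m), so the log-log slope is the exponent
slope, intercept = np.polyfit(np.log(phi_c - phi_m), np.log(D), 1)
```

In practice φ_c is not known in advance (here it comes from the Apollonius-diagram construction mentioned above), so the fit is only as good as the threshold estimate.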
Random harmonic analysis program, L221 (TEV156). Volume 1: Engineering and usage
NASA Technical Reports Server (NTRS)
Miller, R. D.; Graham, M. L.
1979-01-01
A digital computer program capable of calculating steady state solutions for linear second order differential equations due to sinusoidal forcing functions is described. The field of application of the program, the analysis of airplane response and loads due to continuous random air turbulence, is discussed. Optional capabilities including frequency dependent input matrices, feedback damping, gradual gust penetration, multiple excitation forcing functions, and a static elastic solution are described. Program usage and a description of the analysis used are presented.
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 × 2 complex non-Hermitian random matrices.
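The level repulsion underlying such identities can be demonstrated numerically: spacings of 2 × 2 real symmetric (GOE-like) matrices follow the Wigner surmise P(s) = (π/2) s exp(−πs²/4), while Poisson spacings P(s) = exp(−s) show no repulsion. A minimal illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 200_000

# 2x2 GOE-like matrices [[a, c], [c, b]] with off-diagonal variance 1/2;
# the eigenvalue spacing is sqrt((a - b)^2 + 4 c^2), and after
# normalization to unit mean it follows the Wigner surmise.
a = rng.normal(size=m)
b = rng.normal(size=m)
c = rng.normal(size=m, scale=1 / np.sqrt(2))
s_goe = np.sqrt((a - b) ** 2 + 4 * c ** 2)
s_goe /= s_goe.mean()

# Poisson (uncorrelated) spacings: P(s) = exp(-s), no level repulsion.
s_poi = rng.exponential(size=m)

frac_goe = np.mean(s_goe < 0.1)   # strongly suppressed by level repulsion
frac_poi = np.mean(s_poi < 0.1)   # close to 1 - exp(-0.1), about 0.095
```

The fraction of very small GOE spacings is an order of magnitude below the Poisson value, which is the vanishing of P(s) at s = 0 in the Wigner surmise.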
Broken Ergodicity in MHD Turbulence in a Spherical Domain
NASA Technical Reports Server (NTRS)
Shebalin, John V.; Wang, Yifan
2011-01-01
Broken ergodicity (BE) occurs in Fourier method numerical simulations of ideal, homogeneous, incompressible magnetohydrodynamic (MHD) turbulence. Although naive statistical theory predicts that Fourier coefficients of fluid velocity and magnetic field are zero-mean random variables, numerical simulations clearly show that low-wave-number coefficients have non-zero mean values that can be very large compared to the associated standard deviation. In other words, large-scale coherent structure (i.e., broken ergodicity) in homogeneous MHD turbulence can spontaneously grow out of random initial conditions. Eigenanalysis of the modal covariance matrices in the probability density functions of ideal statistical theory leads to a theoretical explanation of observed BE in homogeneous MHD turbulence. Since dissipation is minimal at the largest scales, BE is also relevant for resistive magnetofluids, as evidenced in numerical simulations. Here, we move beyond model magnetofluids confined by periodic boxes to examine BE in rotating magnetofluids in spherical domains using spherical harmonic expansions along with suitable boundary conditions. We present theoretical results for 3-D and 2-D spherical models and also present computational results from dynamical simulations of 2-D MHD turbulence on a rotating spherical surface. MHD turbulence on a 2-D sphere is affected by Coriolis forces, while MHD turbulence on a 2-D plane is not, so that 2-D spherical models are a useful (and simpler) intermediate stage on the path to understanding the much more complex 3-D spherical case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galavis, P; Friedman, K; Chandarana, H
Purpose: Radiomics involves the extraction of texture features from different imaging modalities with the purpose of developing models to predict patient treatment outcomes. The purpose of this study is to investigate texture feature reproducibility across [18F]FDG PET/CT and [18F]FDG PET/MR imaging in patients with primary malignancies. Methods: Twenty-five prospective patients with solid tumors underwent a clinical [18F]FDG PET/CT scan followed by [18F]FDG PET/MR scans. In all patients the lesions were identified using nuclear medicine reports. The images were co-registered and segmented using an in-house auto-segmentation method. Fifty features, based on the intensity histogram and second- and high-order matrices, were extracted from the segmented regions of both image data sets. A one-way random-effects ANOVA model of the intra-class correlation coefficient (ICC) was used to establish texture feature correlations between both data sets. Results: The fifty features were classified based on their ICC values, which were found in the range from 0.1 to 0.86, into three categories: high, intermediate, and low. Ten features extracted from second- and high-order matrices showed large ICC ≥ 0.70. Seventeen features presented intermediate 0.5 ≤ ICC ≤ 0.65, and the remaining twenty-three presented low ICC ≤ 0.45. Conclusion: Features with large ICC values could be reliable candidates for quantification, as they lead to similar results from both imaging modalities. Features with small ICC indicate a lack of correlation. Therefore, the use of these features as a quantitative measure will lead to different assessments of the same lesion depending on the imaging modality from which they are extracted. This study shows the importance of further investigation and standardization of features across multiple imaging modalities.
What Randomized Benchmarking Actually Measures
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...
2017-09-28
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
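The decay described above, survival probability falling off as A·pᵐ + B with circuit length m, is what an RB experiment fits to extract r. The sketch below uses synthetic data with illustrative parameter values (not the paper's) and the standard single-qubit conversion r = (d−1)(1−p)/d with d = 2.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

# Synthetic RB-style data: survival probability A * p^m + B plus small
# sampling noise. All parameter values here are illustrative.
p_true, A_true, B_true = 0.98, 0.5, 0.5
m = np.arange(1, 201, 5)
survival = A_true * p_true ** m + B_true + rng.normal(scale=0.002, size=m.size)

def model(m, A, p, B):
    """Single-exponential RB decay curve."""
    return A * p ** m + B

(A_fit, p_fit, B_fit), _ = curve_fit(model, m, survival, p0=[0.5, 0.95, 0.5])
r = (1 - p_fit) / 2   # single-qubit error rate, r = (d-1)(1-p)/d with d = 2
```

The abstract's point is precisely that this cleanly measurable r need not equal the gauge-dependent average gate infidelity usually quoted alongside it.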
Fidelity decay of the two-level bosonic embedded ensembles of random matrices
NASA Astrophysics Data System (ADS)
Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.
2010-12-01
We study the fidelity decay of the k-body embedded ensembles of random matrices for bosons distributed over two single-particle states. Fidelity is defined in terms of a reference Hamiltonian, which is a purely diagonal matrix consisting of a fixed one-body term and includes the diagonal of the perturbing k-body embedded ensemble matrix, and the perturbed Hamiltonian which includes the residual off-diagonal elements of the k-body interaction. This choice mimics the typical mean-field basis used in many calculations. We study separately the cases k = 2 and 3. We compute the ensemble-averaged fidelity decay as well as the fidelity of typical members with respect to an initial random state. Average fidelity displays a revival at the Heisenberg time, t = tH = 1, and a freeze in the fidelity decay, during which periodic revivals of period tH are observed. We obtain the relevant scaling properties with respect to the number of bosons and the strength of the perturbation. For certain members of the ensemble, we find that the period of the revivals during the freeze of fidelity occurs at fractional times of tH. These fractional periodic revivals are related to the dominance of specific k-body terms in the perturbation.
NASA Astrophysics Data System (ADS)
Matsuda, Koichi; Nishiura, Hiroyuki
2006-01-01
A phenomenological approach for the universal mass matrix model with a broken flavor 2↔3 symmetry is explored by introducing the 2↔3 antisymmetric parts of mass matrices for quarks and charged leptons. We present explicit texture components of the mass matrices, which are consistent with all the neutrino oscillation experiments and quark mixing data. The mass matrices have a common structure for quarks and leptons, while the large lepton mixings and the small quark mixings are derived with no fine-tuning due to the difference of the phase factors. The model predicts a value 2.4×10⁻³ for the lepton mixing matrix element square |U₁₃|², and also ⟨mν⟩ = (0.89-1.4)×10⁻⁴ eV for the averaged neutrino mass which appears in the neutrinoless double beta decay.
NASA Technical Reports Server (NTRS)
Freund, Roland
1988-01-01
Conjugate gradient type methods are considered for the solution of large linear systems Ax = b with complex coefficient matrices of the type A = T + i(sigma)I, where T is Hermitian and sigma a real scalar. Three different conjugate gradient type approaches, with iterates defined by a minimal residual property, a Galerkin type condition, and a Euclidean error minimization, respectively, are investigated. In particular, numerically stable implementations based on the ideas behind Paige and Saunders' SYMMLQ and MINRES for real symmetric matrices are proposed. Error bounds for all three methods are derived. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning. Results on the optimal choice of the polynomial preconditioner are given. Also, some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation are reported.
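The shift structure exploited above can be illustrated outside Krylov methods: because A = T + iσI shares eigenvectors with T, a single Hermitian eigendecomposition serves every shift σ. A minimal numerical sketch (small dense T as a stand-in, not the large sparse systems the methods target; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small dense Hermitian T (stand-in for a discretized Helmholtz operator).
n = 50
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = (M + M.conj().T) / 2
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# A = T + 1j*sigma*I shares eigenvectors with T, so one Hermitian
# eigendecomposition T = Q diag(lam) Q^H serves every shift sigma.
lam, Q = np.linalg.eigh(T)
bt = Q.conj().T @ b

def solve_shifted(sigma):
    """Solve (T + 1j*sigma*I) x = b using the precomputed eigenpairs."""
    return Q @ (bt / (lam + 1j * sigma))

residuals = []
for sigma in (0.5, 2.0, 10.0):
    x = solve_shifted(sigma)
    A = T + 1j * sigma * np.eye(n)
    residuals.append(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
max_residual = float(max(residuals))
```

For dense matrices the one-off O(n³) factorization amortizes over all shifts; the paper's Krylov approach achieves the analogous reuse for large sparse T.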
Biclustering sparse binary genomic data.
van Uitert, Miranda; Meuleman, Wouter; Wessels, Lodewyk
2008-12-01
Genomic datasets often consist of large, binary, sparse data matrices. In such a dataset, one is often interested in finding contiguous blocks that (mostly) contain ones. This is a biclustering problem, and while many algorithms have been proposed to deal with gene expression data, only two algorithms have been proposed that specifically deal with binary matrices. None of the gene expression biclustering algorithms can handle the large number of zeros in sparse binary matrices. The two proposed binary algorithms failed to produce meaningful results. In this article, we present a new algorithm that is able to extract biclusters from sparse, binary datasets. A powerful feature is that biclusters with different numbers of rows and columns can be detected, varying from many rows to few columns and few rows to many columns. It allows the user to guide the search towards biclusters of specific dimensions. When applying our algorithm to an input matrix derived from TRANSFAC, we find transcription factors with distinctly dissimilar binding motifs, but a clear set of common targets that are significantly enriched for GO categories.
An efficient solver for large structured eigenvalue problems in relativistic quantum chemistry
NASA Astrophysics Data System (ADS)
Shiozaki, Toru
2017-01-01
We report an efficient program for computing the eigenvalues and symmetry-adapted eigenvectors of very large quaternionic (or Hermitian skew-Hamiltonian) matrices, using which structure-preserving diagonalisation of matrices of dimension N > 10,000 is now routine on a single computer node. Such matrices appear frequently in relativistic quantum chemistry owing to time-reversal symmetry. The implementation is based on a blocked version of the Paige-Van Loan algorithm, which allows us to use the Level 3 BLAS subroutines for most of the computations. Taking advantage of the symmetry, the program is faster by up to a factor of 2 than state-of-the-art implementations of complex Hermitian diagonalisation; diagonalising a 12,800 × 12,800 matrix took 42.8 (9.5) and 85.6 (12.6) minutes with 1 CPU core (16 CPU cores) using our symmetry-adapted solver and Intel Math Kernel Library's ZHEEV, which is not structure-preserving, respectively. The source code is publicly available under the FreeBSD licence.
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Considering the adjacency matrices of n-vertex graphs and the related graph Laplacians, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
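The basic objects entering these matrix sums are easy to check numerically: tr(A³) counts closed 3-walks on the graph, and its G(n, p) average has the exact combinatorial value n(n-1)(n-2)p³. A sketch with illustrative parameters (a sanity check, not the cumulant expansion itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def er_adjacency(n, p, rng):
    """Symmetric 0/1 adjacency matrix of G(n, p), no self-loops."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper + upper.T).astype(float)

n, p, n_samples = 60, 0.1, 200
# tr(A^3) counts closed 3-walks; its G(n, p) average is exactly
# n(n-1)(n-2) p^3 (ordered triples of distinct vertices, all edges present).
vals = [np.trace(np.linalg.matrix_power(er_adjacency(n, p, rng), 3))
        for _ in range(n_samples)]
empirical = float(np.mean(vals))
expected = n * (n - 1) * (n - 2) * p ** 3
```

For longer walks the expectation picks up contributions from non-simple walks, which is where the diagrammatic bookkeeping studied in the paper becomes necessary.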
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1978-01-01
The paper describes the split-Cholesky strategy for banded matrices arising from the large systems of equations in certain fluid mechanics problems. The basic idea is that for a banded matrix the computation can be carried out in pieces, with only a small portion of the matrix residing in core. Mesh considerations are discussed by demonstrating the manner in which the assembly of finite element equations proceeds for linear trial functions on a triangular mesh. The FORTRAN code which implements the out-of-core decomposition strategy for banded symmetric positive definite matrices (mass matrices) of a coupled initial value problem is given.
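The banded idea can be tried in-core with SciPy's banded Cholesky routines, which likewise never form the full matrix. A sketch for a symmetric positive definite band matrix (a modern stand-in, not the paper's out-of-core FORTRAN code; the matrix values are illustrative):

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

n, u = 200, 2   # system size and number of superdiagonals (half-bandwidth)

# SPD band matrix in LAPACK upper-banded storage: row u+i-j holds a[i, j],
# so the last row is the main diagonal (values chosen diagonally dominant).
ab = np.zeros((u + 1, n))
ab[2, :] = 4.0      # main diagonal
ab[1, 1:] = -1.0    # first superdiagonal
ab[0, 2:] = -0.5    # second superdiagonal

c = cholesky_banded(ab)              # factor without forming the full matrix
b = np.ones(n)
x = cho_solve_banded((c, False), b)  # False: upper-banded factor

# Verify against the equivalent dense system.
A = np.zeros((n, n))
i = np.arange(n)
A[i, i] = 4.0
A[i[:-1], i[:-1] + 1] = A[i[:-1] + 1, i[:-1]] = -1.0
A[i[:-2], i[:-2] + 2] = A[i[:-2] + 2, i[:-2]] = -0.5
residual = float(np.linalg.norm(A @ x - b))
```

The banded storage keeps only u+1 rows of length n, the same economy that lets the split-Cholesky strategy process a band matrix in pieces with a small core-resident window.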
NASA Technical Reports Server (NTRS)
Salama, Farid; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
Recent studies of the spectroscopy of large (up to approx. 50 carbon atoms) neutral and ionized polycyclic aromatic hydrocarbons (PAHs) and fullerenes isolated in inert gas matrices will be presented. The advantages and the limitations of matrix isolation spectroscopy for the study of the molecular spectroscopy of interstellar dust analogs will be discussed. The laboratory data will be compared to the astronomical spectra (the interstellar extinction, the diffuse interstellar bands). Finally, the spectra of PAH ions isolated in neon/argon matrices will be compared to the spectra obtained for PAH ions seeded in a supersonic expansion. The astrophysical implications and future perspectives will be discussed.
TiO₂-Based Photocatalytic Geopolymers for Nitric Oxide Degradation.
Strini, Alberto; Roviello, Giuseppina; Ricciotti, Laura; Ferone, Claudio; Messina, Francesco; Schiavi, Luca; Corsaro, Davide; Cioffi, Raffaele
2016-06-24
This study presents an experimental overview for the development of photocatalytic materials based on geopolymer binders as catalyst support matrices. Particularly, geopolymer matrices obtained from different solid precursors (fly ash and metakaolin), composite systems (siloxane-hybrid, foamed hybrid), and curing temperatures (room temperature and 60 °C) were investigated for the same photocatalyst content (i.e., 3% TiO₂ by weight of paste). The geopolymer matrices were previously designed for different applications, ranging from insulating (foam) to structural materials. The photocatalytic activity was evaluated as NO degradation in air, and the results were compared with an ordinary Portland cement reference. The studied matrices demonstrated highly variable photocatalytic performance depending on both matrix constituents and the curing temperature, with promising activity revealed by the geopolymers based on fly ash and metakaolin. Furthermore, microstructural features and titania dispersion in the matrices were assessed by scanning electron microscopy (SEM) and energy dispersive X-ray (EDS) analyses. Particularly, EDS analyses of sample sections indicated segregation effects of titania in the surface layer, with consequent enhancement or depletion of the catalyst concentration in the active sample region, suggesting non-negligible transport phenomena during the curing process. The described results demonstrated that geopolymer binders can be interesting catalyst support matrices for the development of photocatalytic materials and indicated a large potential for the exploitation of their peculiar features.
Delahaie, B; Charmantier, A; Chantepie, S; Garant, D; Porlier, M; Teplitsky, C
2017-08-01
The genetic variance-covariance matrix (G-matrix) summarizes the genetic architecture of multiple traits. It has a central role in the understanding of phenotypic divergence and the quantification of the evolutionary potential of populations. Laboratory experiments have shown that G-matrices can vary rapidly under divergent selective pressures. However, because of the demanding nature of G-matrix estimation and comparison in wild populations, the extent of its spatial variability remains largely unknown. In this study, we investigate spatial variation in G-matrices for morphological and life-history traits using long-term data sets from one continental and three island populations of blue tit (Cyanistes caeruleus) that have experienced contrasting population history and selective environment. We found no evidence for differences in G-matrices among populations. Interestingly, the phenotypic variance-covariance matrices (P) were divergent across populations, suggesting that using P as a substitute for G may be inadequate. These analyses also provide the first evidence in wild populations for additive genetic variation in the incubation period (that is, the period between last egg laid and hatching) in all four populations. Altogether, our results suggest that G-matrices may be stable across populations inhabiting contrasted environments, therefore challenging the results of previous simulation studies and laboratory experiments.
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
To address the high encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes and the insufficient minimum distance that degrades their error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes are free of length-4 cycles, so they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
Zhou, Yanling; Li, Guannan; Li, Dan; Cui, Hongmei; Ning, Yuping
2018-05-01
The long-term effects of dose reduction of atypical antipsychotics on cognitive function and symptomatology in stable patients with schizophrenia remain unclear. We sought to determine the change in cognitive function and symptomatology after reducing risperidone or olanzapine dosage in stable schizophrenic patients. Seventy-five stabilized schizophrenic patients prescribed risperidone (≥4 mg/day) or olanzapine (≥10 mg/day) were randomly divided into a dose-reduction group (n=37) and a maintenance group (n=38). For the dose-reduction group, the dose of antipsychotics was reduced by 50%; for the maintenance group, the dose remained unchanged throughout the whole study. The Positive and Negative Syndrome Scale, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects, and Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) Consensus Cognitive Battery were measured at baseline, 12, 28, and 52 weeks. Linear mixed models were performed to compare the Positive and Negative Syndrome Scale, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects and MATRICS Consensus Cognitive Battery scores between groups. The linear mixed model showed significant time by group interactions on the Positive and Negative Syndrome Scale negative symptoms, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects, speed of processing, attention/vigilance, working memory and total score of MATRICS Consensus Cognitive Battery (all p<0.05). Post hoc analyses showed significant improvement in Positive and Negative Syndrome Scale negative subscale, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects, speed of processing, working memory and total score of MATRICS Consensus Cognitive Battery for the dose reduction group compared with those for the maintenance group (all p<0.05).
This study indicated that a risperidone or olanzapine dose reduction of 50% may not lead to more severe symptomatology but can improve speed of processing, working memory and negative symptoms in patients with stabilized schizophrenia.
Random sampling and validation of covariance matrices of resonance parameters
NASA Astrophysics Data System (ADS)
Plevnik, Lucijan; Zerovnik, Gašper
2017-09-01
Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which takes from a nuclear data library a file of a chosen isotope in ENDF-6 format and produces an arbitrary number of new ENDF-6 files containing random samples of the resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
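The core consistency repair, clipping negative eigenvalues to restore positive semi-definiteness before factoring and sampling, can be sketched as follows (hypothetical covariance values; the ENDF-6 handling and the paper's exact methods are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

mean = np.array([1.0, 2.0, 0.5])
# A slightly inconsistent "covariance" matrix (hypothetical values): its
# smallest eigenvalue is negative, as is sometimes found in evaluated files.
cov = np.array([[1.0, 1.0, 0.0],
                [1.0, 1.0, 1.0],
                [0.0, 1.0, 1.0]])

# Repair: clip negative eigenvalues to restore positive semi-definiteness.
w, v = np.linalg.eigh(cov)
w_clipped = np.clip(w, 0.0, None)
cov_psd = (v * w_clipped) @ v.T
min_eig = float(np.linalg.eigvalsh(cov_psd).min())

# Sample correlated parameters with a factor L satisfying cov_psd = L @ L.T.
L = v * np.sqrt(w_clipped)
z = rng.standard_normal((200_000, 3))
samples = mean + z @ L.T

emp_cov = np.cov(samples, rowvar=False)
max_err = float(np.abs(emp_cov - cov_psd).max())
```

The empirical covariance of the samples reproduces the repaired matrix, confirming that the sampling is statistically consistent with the (clipped) covariance data.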
Sawicki, Lisa A; Choe, Leila H; Wiley, Katherine L; Lee, Kelvin H; Kloxin, April M
2018-03-12
Cells interact with and remodel their microenvironment, degrading large extracellular matrix (ECM) proteins (e.g., fibronectin, collagens) and secreting new ECM proteins and small soluble factors (e.g., growth factors, cytokines). Synthetic mimics of the ECM have been developed as controlled cell culture platforms for use in both fundamental and applied studies. However, how cells broadly remodel these initially well-defined matrices remains poorly understood and difficult to probe. In this work, we have established methods for widely examining both large and small proteins that are secreted by cells within synthetic matrices. Specifically, human mesenchymal stem cells (hMSCs), a model primary cell type, were cultured within well-defined poly(ethylene glycol) (PEG)-peptide hydrogels, and these cell-matrix constructs were decellularized and degraded for subsequent isolation and analysis of deposited proteins. Shotgun proteomics using liquid chromatography and mass spectrometry identified a variety of proteins, including the large ECM proteins fibronectin and collagen VI. Immunostaining and confocal imaging confirmed these results and provided visualization of protein organization within the synthetic matrices. Additionally, culture medium was collected from the encapsulated hMSCs, and a Luminex assay was performed to identify secreted soluble factors, including vascular endothelial growth factor (VEGF), endothelial growth factor (EGF), basic fibroblast growth factor (FGF-2), interleukin 8 (IL-8), and tumor necrosis factor alpha (TNF-α). Together, these methods provide a unique approach for studying dynamic reciprocity between cells and synthetic microenvironments and have the potential to provide new biological insights into cell responses during three-dimensional (3D) controlled cell culture.
Short-Term Memory in Orthogonal Neural Networks
NASA Astrophysics Data System (ADS)
White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim
2004-04-01
We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
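The distributed shift register case is easy to illustrate: a cyclic permutation matrix is orthogonal, and driving it with an input sequence stores the last N inputs exactly in the instantaneous state. A toy sketch (an illustration of the setup, not the paper's capacity calculation):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32

# Shift-register connectivity: a cyclic permutation matrix, which is
# orthogonal, so the network state norm is preserved exactly.
W = np.roll(np.eye(n), 1, axis=0)
v = np.zeros(n)
v[0] = 1.0                      # input feeds the first unit
is_orthogonal = bool(np.allclose(W @ W.T, np.eye(n)))

# Drive the network with a random input sequence of length n.
s = rng.standard_normal(n)
x = np.zeros(n)
for t in range(n):
    x = W @ x + v * s[t]        # discrete-time linear recurrent dynamics

# The instantaneous state holds the last n inputs: unit k stores s[n-1-k].
err = float(np.abs(x - s[::-1]).max())
```

For random orthogonal W the stored history is spread across units rather than held one-per-unit, which is where the scaling analysis of the paper comes in.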
Accuracy of the Parallel Analysis Procedure with Polychoric Correlations
ERIC Educational Resources Information Center
Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah
2009-01-01
The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…
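Horn's parallel analysis itself is short to sketch: observed eigenvalues are retained while they exceed a percentile of eigenvalues from random data of the same shape. A minimal version on synthetic continuous data (the dichotomous/ordinal case studied here would replace the correlation step with polychoric correlations; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def parallel_analysis(data, n_iter=200, quantile=95):
    """Retain components whose observed correlation-matrix eigenvalues
    exceed the chosen percentile of eigenvalues from random data."""
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    null = np.empty((n_iter, p))
    for i in range(n_iter):
        noise = rng.standard_normal((n, p))
        null[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.percentile(null, quantile, axis=0)
    return int(np.sum(obs > threshold))

# Synthetic continuous data with two underlying factors (hypothetical).
n, p = 500, 10
factors = rng.standard_normal((n, 2))
loadings = rng.standard_normal((2, p))
data = factors @ loadings + 0.5 * rng.standard_normal((n, p))

n_factors = parallel_analysis(data)
```

With two strong planted factors the procedure recovers two components; the study's question is how this behaves when Pearson correlations are replaced by polychoric ones.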
Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M
2014-01-20
Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.
Measurement Matrix Design for Phase Retrieval Based on Mutual Information
NASA Astrophysics Data System (ADS)
Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.
2018-01-01
In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
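The inherent phase ambiguity mentioned above is easy to exhibit: magnitude-only observations are invariant under a global phase rotation of the SOI, so the signal is recoverable at best up to that rotation. A sketch with a random Gaussian measurement matrix (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(8)

n, m = 16, 64   # SOI dimension and number of measurements (illustrative)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

y = np.abs(A @ x)               # noiseless magnitude-only observations

# Any global phase rotation of the SOI yields identical observations,
# so x is recoverable at best up to such a rotation.
x_rotated = np.exp(1j * 0.7) * x
ambiguity_gap = float(np.abs(np.abs(A @ x_rotated) - y).max())
```

The paper's contribution is to replace the random A with a deterministic, information-maximizing design while living with this same ambiguity.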
Raney Distributions and Random Matrix Theory
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Liu, Dang-Zheng
2015-03-01
Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
Non-continuum, anisotropic nanomechanics of random and aligned electrospun nanofiber matrices
NASA Astrophysics Data System (ADS)
Chery, Daphney; Han, Biao; Mauck, Robert; Shenoy, Vivek; Han, Lin
Polymer nanofiber assemblies are widely used in cell culture and tissue engineering, yet their nanomechanical characteristics have received little attention. In this study, to understand their nanoscale structure-mechanics relations, nanofibers of polycaprolactone (PCL) and poly(vinyl alcohol) (PVA) were fabricated via electrospinning and tested via AFM-nanoindentation with a microspherical tip (R ~10 μm) in PBS. For the hydrophobic, less-swollen PCL, a novel, non-continuum linear F-D dependence was observed, instead of the typical Hertzian F ∝ D^(3/2) behavior expected for continuum materials. This linear trend likely results from the tensile stretching of a few individual nanofibers as they are indented in the normal plane. In contrast, for the hydrophilic, highly swollen PVA, the observed typical Hertzian response indicates the dominance of localized deformation within each nanofiber, which had swollen to become a hydrogel. Furthermore, for both matrices, aligned fibers showed significantly higher stiffness than random fibers. These results provide a fundamental basis for the nanomechanics of biomaterials for specialized applications in cell phenotype and tissue repair.
NASA Technical Reports Server (NTRS)
Oline, L.; Medaglia, J.
1972-01-01
The dynamic finite element method was used to investigate elastic stress waves in a plate. Strain-displacement and stress-strain relations are discussed along with the stiffness and mass matrices. The results of studying a point load and distributed loads over small, intermediate, and large radii are reported. The derivation of finite element matrices, and the derivation of lumped and consistent matrices for one-dimensional problems with Laplace transform solutions, are included. The computer program JMMSPALL is also included.
Amino Acid Properties Conserved in Molecular Evolution
Rudnicki, Witold R.; Mroczek, Teresa; Cudek, Paweł
2014-01-01
That amino acid properties are responsible for the way protein molecules evolve is natural and is also reasonably well supported both by the structure of the genetic code and, to a large extent, by the experimental measures of amino acid similarity. Nevertheless, there remains a significant gap between observed similarity matrices and their reconstructions from amino acid properties. Therefore, we introduce a simple theoretical model of amino acid similarity matrices, which allows splitting the matrix into two parts – one that depends only on the mutabilities of amino acids and another that depends on the pairwise similarities between them. Then the new synthetic amino acid properties are derived from the pairwise similarities and used to reconstruct similarity matrices covering a wide range of information entropies. Our model allows us to explain up to 94% of the variability in the BLOSUM family of amino acid similarity matrices in terms of amino acid properties. The new properties derived from amino acid similarity matrices correlate highly with properties known to be important for molecular evolution, such as hydrophobicity, size, shape and charge of amino acids. This result closes the gap in our understanding of the influence of amino acids on evolution at the molecular level. The methods were applied to the single family of similarity matrices often used in general sequence homology searches, but the approach is general and can also be used for more specific matrices. The new synthetic properties can be used in analyses of protein sequences in various biological applications. PMID:24967708
Maximizing synchronizability of duplex networks
NASA Astrophysics Data System (ADS)
Wei, Xiang; Emenheiser, Jeffrey; Wu, Xiaoqun; Lu, Jun-an; D'Souza, Raissa M.
2018-01-01
We study the synchronizability of duplex networks formed by two randomly generated network layers with different patterns of interlayer node connections. According to the master stability function, we use the smallest nonzero eigenvalue and the eigenratio between the largest and the second smallest eigenvalues of supra-Laplacian matrices to characterize synchronizability on various duplexes. We find that the interlayer linking weight and linking fraction have a profound impact on the synchronizability of duplex networks. Increasing the interlayer coupling weight is found to cause either decreasing or constant synchronizability for different classes of network dynamics. In addition, negative node degree correlation across interlayer links outperforms positive degree correlation when most interlayer links are present; the reverse is true when only a few interlayer links are present. The numerical results and understanding based on these representative duplex networks are illustrative and instructive for building insights into maximizing the synchronizability of more realistic multiplex networks.
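The two spectral quantities used here are straightforward to compute for a small duplex with full one-to-one interlayer links of weight d. A sketch (illustrative Erdős-Rényi layers; d and the sizes are arbitrary choices, not the paper's configurations):

```python
import numpy as np

rng = np.random.default_rng(5)

def er_laplacian(n, p, rng):
    """Graph Laplacian of an Erdos-Renyi G(n, p) layer."""
    A = np.triu(rng.random((n, n)) < p, k=1)
    A = (A + A.T).astype(float)
    return np.diag(A.sum(axis=1)) - A

n, p, d = 30, 0.3, 1.0          # layer size, edge probability, coupling weight
L1 = er_laplacian(n, p, rng)
L2 = er_laplacian(n, p, rng)

# Supra-Laplacian of a duplex with full one-to-one interlayer links.
I = np.eye(n)
supra = np.block([[L1 + d * I, -d * I],
                  [-d * I,     L2 + d * I]])

eigs = np.sort(np.linalg.eigvalsh(supra))
lambda_2 = float(eigs[1])               # smallest nonzero eigenvalue
eigratio = float(eigs[-1] / eigs[1])    # lambda_max / lambda_2
```

Sweeping d and the linking fraction and tracking lambda_2 and the eigenratio reproduces the kind of synchronizability comparison the paper carries out.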
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
Adjoints and Low-rank Covariance Representation
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.
2000-01-01
Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
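A low-rank representation built from the leading eigenpairs captures such a covariance well precisely when the spectrum of the transformation decays quickly. A sketch with a synthetic forcing-response covariance (the decaying spectrum is an assumption made for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n, rank = 100, 5

# Error covariance P = B B^T as the response to unit-covariance forcing,
# with B built to have a rapidly decaying spectrum (assumed for illustration).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = U * (2.0 ** -np.arange(n))   # scale columns by decaying singular values
P = B @ B.T

# Low-rank representation from the leading eigenpairs of P.
w, v = np.linalg.eigh(P)
w, v = w[::-1], v[:, ::-1]       # descending order
P_low = (v[:, :rank] * w[:rank]) @ v[:, :rank].T

rel_err = float(np.linalg.norm(P - P_low) / np.linalg.norm(P))
```

Had the singular values of B been flat instead of decaying, the same rank-5 truncation would discard most of the variance, which is the conditioning issue the adjoint analysis makes precise.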
Sparse distributed memory and related models
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1992-01-01
Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
Non-equilibrium many-body dynamics following a quantum quench
NASA Astrophysics Data System (ADS)
Vyas, Manan
2017-12-01
We study analytically and numerically the non-equilibrium dynamics of an isolated interacting many-body quantum system following a random quench. We model the system Hamiltonian by Embedded Gaussian Orthogonal Ensemble (EGOE) of random matrices with one plus few-body interactions for fermions. EGOE are paradigmatic models to study the crossover from integrability to chaos in interacting many-body quantum systems. We obtain a generic formulation, based on spectral variances, for describing relaxation dynamics of survival probabilities as a function of rank of interactions. Our analytical results are in good agreement with numerics.
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.
1991-01-01
The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.
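For orientation, here is a bare-bones two-sided Lanczos iteration without the paper's look-ahead machinery (a sketch, not the proposed algorithm): it simply raises an error at the (near-)breakdowns that look-ahead is designed to skip over.

```python
import numpy as np

def two_sided_lanczos(A, v, w, m):
    """Basic nonsymmetric Lanczos: builds a tridiagonal T whose eigenvalues
    approximate those of A. Breaks down when the inner product s.r (nearly)
    vanishes, which is exactly the step a look-ahead variant handles."""
    n = A.shape[0]
    V, W = np.zeros((n, m)), np.zeros((n, m))
    alpha, beta, gamma = np.zeros(m), np.zeros(m), np.zeros(m)
    v = v / np.linalg.norm(v)
    w = w / (w @ v)                       # enforce biorthogonality w.v = 1
    for j in range(m):
        V[:, j], W[:, j] = v, w
        alpha[j] = w @ (A @ v)
        r = A @ v - alpha[j] * v - (gamma[j - 1] * V[:, j - 1] if j else 0)
        s = A.T @ w - alpha[j] * w - (beta[j - 1] * W[:, j - 1] if j else 0)
        if j + 1 == m:
            break
        delta = s @ r
        if abs(delta) < 1e-12:            # (near-)breakdown: look-ahead needed
            raise RuntimeError("serious breakdown")
        beta[j] = np.sqrt(abs(delta))
        gamma[j] = delta / beta[j]
        v, w = r / beta[j], s / gamma[j]
    return np.diag(alpha) + np.diag(beta[:m - 1], -1) + np.diag(gamma[:m - 1], 1)

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6))
T = two_sided_lanczos(A, rng.standard_normal(6), rng.standard_normal(6), 6)
# For m = n, the eigenvalues of T match those of A up to roundoff.
print(np.allclose(np.sort_complex(np.linalg.eigvals(T)),
                  np.sort_complex(np.linalg.eigvals(A)), atol=1e-4))
```

In practice only m ≪ n steps are taken and the extreme eigenvalues of T are used as Ritz approximations; the look-ahead version replaces the hard failure above with blocks of length greater than one.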
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
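The two-point-measurement work statistics described above can be sampled directly (a hedged sketch: the GUE normalization, inverse temperature, and sample count are arbitrary illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def gue(n):
    # Hermitian matrix with Gaussian-distributed entries (GUE up to scaling).
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

def work_samples(n=2, beta=1.0, n_samples=2000):
    # Sudden quench H0 -> H1 of an n-level system: measure E_i of H0 in a
    # thermal state, then E_f of H1 in the post-quench state; W = E_f - E_i.
    out = []
    for _ in range(n_samples):
        (e0, U0), (e1, U1) = np.linalg.eigh(gue(n)), np.linalg.eigh(gue(n))
        p0 = np.exp(-beta * e0)
        i = rng.choice(n, p=p0 / p0.sum())            # initial measurement
        pif = np.abs(U1.conj().T @ U0[:, i]) ** 2     # |<f|i>|^2 transition probs
        f = rng.choice(n, p=pif / pif.sum())          # final measurement
        out.append(e1[f] - e0[i])
    return np.array(out)

w = work_samples()
print(w.shape, bool(np.isfinite(w).all()))
```

Histogramming `w` gives a Monte Carlo estimate of the work pdf that the paper derives analytically for the two-level case.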
Evidence for Extended Aqueous Alteration in CR Carbonaceous Chondrites
NASA Technical Reports Server (NTRS)
Trigo-Rodriquez, J. M.; Moyano-Cambero, C. E.; Mestres, N.; Fraxedas, J.; Zolensky, M.; Nakamura, T.; Martins, Z.
2013-01-01
We are currently studying the chemical interrelationships between the main rock-forming components of carbonaceous chondrites (hereafter CC), e.g. silicate chondrules, refractory inclusions and metal grains, and the surrounding meteorite matrices. It is thought that the fine-grained materials that form CC matrices represent samples of relatively unprocessed protoplanetary disk materials [1-3]. In fact, modern non-destructive analytical techniques have shown that CC matrices host a large diversity of stellar grains from many distinguishable stellar sources [4]. Aqueous alteration has played a role in homogenizing the isotopic content that allows the identification of presolar grains [5]. On the other hand, detailed analytical techniques have found that the aqueously altered CR, CM and CI chondrite groups contain matrices in which the organic matter has experienced significant processing concomitant with the formation of clays and other minerals. In this sense, clays have been found to be directly associated with complex organics [6, 7]. CR chondrites are particularly relevant in this context, as this chondrite group contains abundant metal grains in the interstitial matrix and inside glassy silicate chondrules. This is important because CR chondrites are known for exhibiting a large complexity of organic compounds [8-10], and only metallic Fe is considered essential in the Fischer-Tropsch catalysis of organics [11-13]. Therefore, CR chondrites can be considered primitive materials capable of providing clues on the role played by aqueous alteration in the chemical evolution of their parent asteroids.
NASA Astrophysics Data System (ADS)
de Rooij, G. H.
2010-09-01
Soil water is confined behind the menisci of its water-air interface. Catchment-scale fluxes (groundwater recharge, evaporation, transpiration, precipitation, etc.) affect the matric potential, and thereby the interface curvature and the configuration of the phases. In turn, these affect the fluxes (except precipitation), creating feedbacks between pore-scale and catchment-scale processes. Tracking pore-scale processes beyond the Darcy scale is not feasible. Instead, for a simplified system based on the classical Darcy's Law and Laplace-Young Law we i) clarify how menisci transfer pressure from the atmosphere to the soil water, ii) examine large-scale phenomena arising from pore-scale processes, and iii) analyze the relationship between average meniscus curvature and average matric potential. In stagnant water, changing the gravitational potential or the curvature of the air-water interface changes the pressure throughout the water. Adding small amounts of water can thus profoundly affect water pressures in a much larger volume. The pressure-regulating effect of the interface curvature showcases the meniscus as a pressure port that transfers the atmospheric pressure to the water with an offset directly proportional to its curvature. This property causes an extremely rapid rise of phreatic levels in soils once the capillary fringe extends to the soil surface and the menisci flatten. For large bodies of subsurface water, the curvature and vertical position of any meniscus quantify the uniform hydraulic potential under hydrostatic equilibrium. During unit-gradient flow, the matric potential corresponding to the mean curvature of the menisci should provide a good approximation of the intrinsic phase average of the matric potential.
Tensor Dictionary Learning for Positive Definite Matrices.
Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2015-11-01
Sparse models have proven to be extremely successful in image processing and computer vision. However, a majority of the effort has been focused on sparse representation of vectors and low-rank models for general matrices. The success of sparse modeling, along with popularity of region covariances, has inspired the development of sparse coding approaches for these positive definite descriptors. While in earlier work, the dictionary was formed from all, or a random subset of, the training signals, it is clearly advantageous to learn a concise dictionary from the entire training set. In this paper, we propose a novel approach for dictionary learning over positive definite matrices. The dictionary is learned by alternating minimization between sparse coding and dictionary update stages, and different atom update methods are described. A discriminative version of the dictionary learning approach is also proposed, which simultaneously learns dictionaries for different classes in classification or clustering. Experimental results demonstrate the advantage of learning dictionaries from data both from reconstruction and classification viewpoints. Finally, a software library is presented comprising C++ binaries for all the positive definite sparse coding and dictionary learning approaches presented here.
Pathak, Meenakshi; Turner, Mark; Palmer, Cheryn; Coombes, Allan G A
2014-09-01
Microporous poly(ɛ-caprolactone) (PCL) matrices loaded with the antibacterial agent metronidazole were produced by rapidly cooling suspensions of drug powder in PCL solutions in acetone. Drug incorporation in the matrices increased from 2.0% to 10.6% w/w on raising the drug loading of the PCL solution from 5% to 20% w/w measured with respect to the PCL content. Drug loading efficiencies of 40-53% were obtained. Rapid 'burst release' of 35-55% of the metronidazole content was recorded over 24 h when matrices were immersed in simulated vaginal fluid (SVF), due to the presence of large amounts of drug on the matrix surface, as revealed by Raman microscopy. Gradual release of around 80% of the drug content occurred over the following 12 days. Metronidazole released from PCL matrices in SVF retained antimicrobial activity against Gardnerella vaginalis in vitro at levels up to 97% compared to the free drug. Basic modelling predicted that the concentrations of metronidazole released into vaginal fluid in vivo from a PCL matrix in the form of an intravaginal ring would exceed the minimum inhibitory concentration of metronidazole against G. vaginalis. These findings support further investigation of PCL matrices as intravaginal devices for controlled delivery of metronidazole in the treatment and prevention of bacterial vaginosis. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
TiO2-Based Photocatalytic Geopolymers for Nitric Oxide Degradation
Strini, Alberto; Roviello, Giuseppina; Ricciotti, Laura; Ferone, Claudio; Messina, Francesco; Schiavi, Luca; Corsaro, Davide; Cioffi, Raffaele
2016-01-01
This study presents an experimental overview for the development of photocatalytic materials based on geopolymer binders as catalyst support matrices. Particularly, geopolymer matrices obtained from different solid precursors (fly ash and metakaolin), composite systems (siloxane-hybrid, foamed hybrid), and curing temperatures (room temperature and 60 °C) were investigated for the same photocatalyst content (i.e., 3% TiO2 by weight of paste). The geopolymer matrices were previously designed for different applications, ranging from insulating (foam) to structural materials. The photocatalytic activity was evaluated as NO degradation in air, and the results were compared with an ordinary Portland cement reference. The studied matrices demonstrated highly variable photocatalytic performance depending on both matrix constituents and the curing temperature, with promising activity revealed by the geopolymers based on fly ash and metakaolin. Furthermore, microstructural features and titania dispersion in the matrices were assessed by scanning electron microscopy (SEM) and energy dispersive X-ray (EDS) analyses. Particularly, EDS analyses of sample sections indicated segregation effects of titania in the surface layer, with consequent enhancement or depletion of the catalyst concentration in the active sample region, suggesting non-negligible transport phenomena during the curing process. The described results demonstrated that geopolymer binders can be interesting catalyst support matrices for the development of photocatalytic materials and indicated a large potential for the exploitation of their peculiar features. PMID:28773634
Wolfe, Edward W; McGill, Michael T
2011-01-01
This article summarizes a simulation study of the performance of five item quality indicators (the weighted and unweighted versions of the mean square and standardized mean square fit indices, and the point-measure correlation) under relatively high and low amounts of missing data, with both random and conditional patterns of missingness, in testing contexts such as operational administrations of a computerized adaptive certification or licensure examination. The results suggest that the weighted fit indices, particularly the standardized mean square index, and the point-measure correlation provide the most consistent information between random and conditional missing data patterns, and that these indices perform more comparably for items near the passing score than for items with extreme difficulty values.
Fan, Linpeng; Cai, Zengxiao; Zhang, Kuihua; Han, Feng; Li, Jingliang; He, Chuanglong; Mo, Xiumei; Wang, Xungai; Wang, Hongsheng
2014-05-01
Silk fibroin (SF) from Bombyx mori has many established excellent properties and has found various applications in the biomedical field. However, some properties of SF still need improvement for practical use, and diverse SF-based composite biomaterials have accordingly been developed. Here we report the feasibility of fabricating pantothenic acid (vitamin B5, VB5)-reinforced SF nanofibrous matrices for biomedical applications through green electrospinning. Results demonstrated the successful loading of D-pantothenic acid hemicalcium salt (VB5-hs) into the resulting composite nanofibers. The introduction of VB5-hs did not alter the smooth ribbon-like morphology or the silk I structure of SF, but significantly decreased the mean width of the SF fibers. The SF conformation transformed from random coil into β-sheet when the composite nanofibrous matrices were exposed to 75% (v/v) ethanol vapor. Furthermore, the nanofibers retained good morphology after being soaked in an aqueous environment for five days. Interestingly, the as-prepared composite nanofibrous matrices supported a higher level of cell viability, especially over long culture periods, and significantly assisted skin cells to survive under oxidative stress compared with pure SF nanofibrous matrices. These findings provide a basis for further extending the application of SF in the biomedical field, especially in the personal skin-care field. Copyright © 2013 Elsevier B.V. All rights reserved.
Information Graph Flow: A Geometric Approximation of Quantum and Statistical Systems
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2018-05-01
Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions) we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (we call information graphs) with vertices representing subsystems (e.g., qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs to that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science where the task of graph sparsification is of a central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.
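The first step of the procedure, building a weighted information graph from the mutual information between subsystems, can be sketched for classical random variables (the histogram MI estimator and the toy coupled data below are assumptions for illustration, not the paper's construction):

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y, bins=8):
    # Plug-in (histogram) estimate of I(X;Y) in nats.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(6)
n_vars, n_samples = 5, 4000
data = rng.standard_normal((n_samples, n_vars))
data[:, 1] += data[:, 0]                  # couple subsystems 0 and 1

# Step one of the procedure: a weighted graph whose edge weights are the
# mutual information between pairs of subsystems.
W = np.zeros((n_vars, n_vars))
for i, j in combinations(range(n_vars), 2):
    W[i, j] = W[j, i] = mutual_information(data[:, i], data[:, j])

print(W[0, 1] > W[0, 2])
```

The resulting weighted adjacency matrix W is what the graph flow equations of the paper would then deform towards a locally low-dimensional lattice.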
NASA Astrophysics Data System (ADS)
Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2017-12-01
We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^{α/2}, where L denotes a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks' with admissible interval 0 < α ≤ 2. We deduce probability-generating functions (network Green's functions) for the fractional random walk. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for dimensions d > α (recurrent for d ≤ α) of the lattice. As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, …, and in the range 1 ≤ α < 2 for dimensions d ≥ 2. Finally, for α = 2, Polya's classical recurrence theorem is recovered, namely the walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain, for the transient regime 0 < α < 1, closed-form expressions for the fractional lattice Green's function matrix containing the escape and ever-passage probabilities. The ever-passage probabilities (fractional lattice Green's functions) in the transient regime fulfil Riesz potential power-law decay asymptotics for nodes far from the departure node. The non-locality of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties, with the emergence of Lévy flights on large (infinite) lattices.
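A small numerical sketch of the fractional-walk construction on a finite ring (illustrative only; ring size and α are arbitrary choices): form L^{α/2} by an eigendecomposition of the cycle Laplacian, then normalize its off-diagonal part into transition probabilities.

```python
import numpy as np

def fractional_walk(N=16, alpha=1.0):
    # Cycle-graph ("simple") Laplacian L = 2I - S - S^T on a ring of N nodes.
    L = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)
    lam, U = np.linalg.eigh(L)
    lam = np.clip(lam, 0.0, None)            # guard roundoff at the zero mode
    Lfrac = (U * lam ** (alpha / 2)) @ U.T   # matrix power L^(alpha/2)
    d = np.diag(Lfrac)                       # fractional degree
    P = -Lfrac / d[:, None]                  # off-diagonals of L^(alpha/2) are <= 0
    np.fill_diagonal(P, 0.0)
    return Lfrac, P

Lfrac, P = fractional_walk()
# Rows of P sum to one: a proper stochastic matrix whose long-range entries
# decay with the heavy-tailed power law responsible for Levy-flight behavior.
print(np.allclose(P.sum(axis=1), 1.0))
```

At α = 2 this reduces to the ordinary nearest-neighbor walk; for 0 < α < 2 every pair of nodes is connected, which is the non-diagonality discussed above.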
Mills, Jeffrey D; Ben-Nun, Michal; Rollin, Kyle; Bromley, Michael W J; Li, Jiabo; Hinde, Robert J; Winstead, Carl L; Sheehy, Jeffrey A; Boatz, Jerry A; Langhoff, Peter W
2016-08-25
Continuing attention has addressed incorporation of the electronically dynamical attributes of biomolecules into the largely static first-generation molecular-mechanical force fields commonly employed in molecular-dynamics simulations. We describe here a universal quantum-mechanical approach to calculations of the electronic energy surfaces of both small molecules and large aggregates on a common basis which can include such electronic attributes, and which also seems well-suited to adaptation in ab initio molecular-dynamics applications. In contrast to the more familiar orbital-product-based methodologies employed in traditional small-molecule computational quantum chemistry, the present approach is based on an "ex-post-facto" method in which Hamiltonian matrices are evaluated prior to wave function antisymmetrization, implemented here in the support of a Hilbert space of orthonormal products of many-electron atomic spectral eigenstates familiar from the van der Waals theory of long-range interactions. The general theory in its various forms incorporates the early semiempirical atoms- and diatomics-in-molecules approaches of Moffitt, Ellison, Tully, Kuntz, and others in a comprehensive mathematical setting, and generalizes the developments of Eisenschitz, London, Claverie, and others addressing electron permutation symmetry adaptation issues, completing these early attempts to treat van der Waals and chemical forces on a common basis. Exact expressions are obtained for molecular Hamiltonian matrices and for associated energy eigenvalues as sums of separate atomic and interaction-energy terms, similar in this respect to the forms of classical force fields. The latter representation is seen to also provide a long-missing general definition of the energies of individual atoms and of their interactions within molecules and matter free from subjective additional constraints.
A computer code suite is described for calculations of the many-electron atomic eigenspectra and the pairwise-atomic Hamiltonian matrices required for practical applications. These matrices can be retained as functions of scalar atomic-pair separations and employed in assembling aggregate Hamiltonian matrices, with Wigner rotation matrices providing analytical representations of their angular degrees of freedom. In this way, ab initio potential energy surfaces are obtained in the complete absence of the repeated evaluations and transformations of the one- and two-electron integrals at different molecular geometries required in most ab initio molecular calculations, with large Hamiltonian matrix assembly simplified and explicit diagonalizations avoided by employing partitioning and Brillouin-Wigner or Rayleigh-Schrödinger perturbation theory. Illustrative applications of the important components of the formalism, selected aspects of the scaling of the approach, and aspects of "on-the-fly" interfaces with Monte Carlo and molecular-dynamics methods are described in anticipation of subsequent applications to biomolecules and other large aggregates.
Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.
Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine
2015-03-15
Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step of subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open-source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability: the software is capable of normalizing Hi-C data of any size in reasonable time; (ii) memory efficiency: the sequential version can run on any single computer with very limited memory, no matter how little; (iii) speed: the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.
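For illustration, an ICE-style iterative balancing loop of the general kind such normalizers implement can be sketched as follows (this is not Hi-Corrector's code; the toy contact matrix and iteration count are assumptions):

```python
import numpy as np

def iterative_correction(M, n_iter=50):
    """ICE-style balancing: repeatedly rescale rows/columns of a symmetric
    contact matrix until all row sums are (near) equal; `bias` accumulates
    the per-bin correction factors."""
    M = np.asarray(M, dtype=float).copy()
    bias = np.ones(M.shape[0])
    for _ in range(n_iter):
        s = M.sum(axis=1)
        s = s / s[s > 0].mean()
        s[s == 0] = 1.0                  # leave empty bins untouched
        M = M / np.outer(s, s)
        bias *= s
    return M, bias

rng = np.random.default_rng(3)
raw = rng.poisson(5.0, size=(50, 50))
raw = raw + raw.T                        # symmetric toy contact matrix
bal, bias = iterative_correction(raw)
print(np.allclose(bal.sum(axis=1), bal.sum(axis=1)[0], rtol=1e-3))
```

Only row sums and a bias vector need to be kept per pass, which is why such schemes can be made memory-light and parallelized over row blocks, the engineering point of the package above.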
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.
1990-01-01
The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.
Inflation with a graceful exit in a random landscape
NASA Astrophysics Data System (ADS)
Pedro, F. G.; Westphal, A.
2017-03-01
We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.
SUNPLIN: Simulation with Uncertainty for Phylogenetic Investigations
2013-01-01
Background: Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. Results: In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. Conclusion: We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets. PMID:24229408
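The expanded-tree idea can be caricatured directly on a patristic distance matrix (a toy sketch, not the SUNPLIN algorithm: the fixed branch length and the sister-grafting rule are simplifying assumptions):

```python
import random

def expand(D, species, clade, new_sp, branch=1.0):
    """Graft `new_sp` as sister to a randomly chosen clade member and record
    its patristic distances to every other species. Repeating this over many
    random expansions yields an ensemble of distance matrices whose spread
    estimates the phylogenetic uncertainty."""
    anchor = random.choice(clade)
    for sp in species:
        d = 2 * branch if sp == anchor else D[(anchor, sp)] + branch
        D[(new_sp, sp)] = D[(sp, new_sp)] = d
    species.append(new_sp)

random.seed(4)
species = ["A", "B", "C"]                 # tree ((A,B),C) with unit branch lengths
D = {("A", "B"): 2.0, ("B", "A"): 2.0, ("A", "C"): 4.0, ("C", "A"): 4.0,
     ("B", "C"): 4.0, ("C", "B"): 4.0}
expand(D, species, clade=["A", "B"], new_sp="X")   # X belongs to the (A,B) clade
print(sorted((sp, D[("X", sp)]) for sp in ["A", "B", "C"]))
```

Whichever of A or B is drawn as anchor, X ends up at distance 5 from C but at distance 2 or 3 from A and B; it is exactly this anchor-to-anchor variability across repeated expansions that the simulation procedure quantifies.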
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations become large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
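The nearest-neighbor conditioning that yields sparse precision matrices can be sketched with a Vecchia-style factorization (an illustration under an assumed exponential covariance; this is not the authors' implementation):

```python
import numpy as np

def vecchia_precision_factors(coords, m=5, phi=1.0):
    """Vecchia-style sparse factorization underlying the NNGP idea: each
    location conditions on at most m previously ordered nearest neighbors,
    giving precision Q = (I - A)^T D^{-1} (I - A) with sparse lower-
    triangular A."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    C = np.exp(-phi * dist)                  # assumed exponential covariance
    A, d = np.zeros((n, n)), np.zeros(n)
    d[0] = C[0, 0]
    for i in range(1, n):
        nb = np.argsort(dist[i, :i])[:m]     # m nearest earlier locations
        b = np.linalg.solve(C[np.ix_(nb, nb)], C[nb, i])
        A[i, nb] = b
        d[i] = C[i, i] - C[i, nb] @ b        # conditional variance
    return A, d

rng = np.random.default_rng(5)
coords = rng.uniform(size=(40, 2))
A, d = vecchia_precision_factors(coords, m=10)
I = np.eye(len(d))
Q = (I - A).T @ np.diag(1 / d) @ (I - A)     # sparse-structured NNGP precision
# With m = n - 1 the construction reproduces the exact GP precision.
print(Q.shape, bool(np.all(d > 0)))
```

Each row of A has at most m nonzeros, so the density evaluation costs grow linearly in the number of locations, which is the source of the scalability claimed above.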
Crack Modelling for Radiography
NASA Astrophysics Data System (ADS)
Chady, T.; Napierała, L.
2010-02-01
In this paper, the possibility of creating three-dimensional crack models, both of a random type and based on real-life radiographic images, is discussed. A method for storing cracks in a number of two-dimensional matrices, as well as an algorithm for their reconstruction into three-dimensional objects, is presented. The possibility of using an iterative algorithm for matching simulated crack images to real-life radiographic images is also discussed.
Erukhimovich, I Ya; Kudryavtsev, Ya V
2003-08-01
An extended generalization of the dynamic random phase approximation (DRPA) for L-component polymer systems is presented. Unlike the original version of the DRPA, which relates the (LxL) matrices of the collective density-density time correlation functions and the corresponding susceptibilities of concentrated polymer systems to those of the tracer macromolecules and the so-called broken-links system (BLS), our generalized DRPA solves this problem for the (5xL) x (5xL) matrices of the coupled susceptibilities and time correlation functions of the component number, kinetic energy and flux densities. The technique is used to study the propagation of sound and the dynamic form factor in a disentangled (Rouse) monodisperse homopolymer melt. The calculated ultrasonic velocity and absorption coefficient reveal substantial frequency dispersion. The relaxation time tau is proportional to the degree of polymerization N, which is N times less than the Rouse time and evidences strong dynamic screening due to interchain interaction. We also discuss some peculiarities of Brillouin scattering in polymer melts. In addition, a new convenient expression for the dynamic structure function of the single Rouse chain in the (q,p) representation is found.
NASA Astrophysics Data System (ADS)
Mazilu, Traian
2017-08-01
This paper addresses the interaction between moving tandem wheels and an infinite periodically supported rail, and points out the basic characteristics of the steady-state interaction behaviour and of the interaction in the presence of random rail irregularity. The rail is modelled as an infinite Timoshenko beam resting on supports that discretely model the inertia of the sleepers and ballast as well as the viscoelastic features of the rail pads, the ballast and the subgrade. The method of Green's matrices of the track in a stationary reference frame was applied to conduct the time-domain analysis. This method makes it possible to consider the nonlinearities of the wheel/rail contact and the Doppler effect. The study highlights certain aspects of the influence of the wheel base on the wheel/rail contact forces, particularly at the parametric resonance caused by the coincidence between the wheel/rail natural frequency and the sleeper-passing frequency, and also when the rail surface exhibits random irregularity. It is shown that the wheel/rail dynamic behaviour is less intense when the wheel base equals an integer multiple of the sleeper bay.
NASA Astrophysics Data System (ADS)
Zhang, Luozhi; Zhou, Yuanyuan; Huo, Dongming; Li, Jinxi; Zhou, Xin
2018-09-01
A method is presented for multiple-image encryption using a combination of orthogonal encoding and compressive sensing based on double random phase encoding. The scheme is demonstrated theoretically and implemented by using orthogonal-basis matrices to build a modified measurement array that is projected onto the images. In this method, all the images can be compressed in parallel into a stochastic signal and diffused into a stationary white noise. Meanwhile, each single image can be reconstructed separately with a proper decryption key combination through block-wise rather than whole-image reconstruction, so that the data and decryption-time costs are greatly decreased; this may be promising both for multi-user multiplexing and for huge-image encryption/decryption. In addition, the security of the method is characterized in terms of key bit-length, and its parallelism is investigated as well. Simulations and discussions are also presented on the quality of decryption and on the correlation coefficient under a series of sampling rates, occlusion attacks, keys with various error rates, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
Absorption and scattering of light by nonspherical particles. [in atmosphere
NASA Technical Reports Server (NTRS)
Bohren, C. F.
1986-01-01
Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.
Large-Capacity Three-Party Quantum Digital Secret Sharing Using Three Particular Matrices Coding
NASA Astrophysics Data System (ADS)
Lai, Hong; Luo, Ming-Xing; Pieprzyk, Josef; Tao, Li; Liu, Zhi-Ming; Orgun, Mehmet A.
2016-11-01
In this paper, we develop a large-capacity quantum digital secret sharing (QDSS) scheme, combining Fibonacci- and Lucas-valued orbital angular momentum (OAM) entanglement with the recursive Fibonacci and Lucas matrices. To be exact, Alice prepares pairs of photons in the Fibonacci- and Lucas-valued OAM entangled states, and then allocates them to two participants, say, Bob and Charlie, to establish the secret key. Moreover, the available Fibonacci and Lucas values from the matching entangled states are used as the seed for generating the Fibonacci and Lucas matrices; this is possible because the entries of the Fibonacci and Lucas matrices are recursive. The secret key can only be obtained jointly by Bob and Charlie, who can then recover the secret. Its security is based on the facts that nonorthogonal states are indistinguishable and that, when Bob or Charlie detects a Fibonacci number, there is still a twofold uncertainty in Charlie's (Bob's) detected value. Supported by the Fundamental Research Funds for the Central Universities under Grant No. XDJK2016C043 and the Doctoral Program of Higher Education under Grant No. SWU115091, the National Natural Science Foundation of China under Grant No. 61303039, the Fundamental Research Funds for the Central Universities under Grant No. XDJK2015C153 and the Doctoral Program of Higher Education under Grant No. SWU114112, and the 1000-Plan of Chongqing by Southwest University under Grant No. SWU116007
Esteve-Altava, Borja; Rasskin-Gutman, Diego
2014-01-01
Craniofacial sutures and synchondroses form the boundaries among bones in the human skull, providing functional, developmental and evolutionary information. Bone articulations in the skull arise due to interactions between genetic regulatory mechanisms and epigenetic factors such as functional matrices (soft tissues and cranial cavities), which mediate bone growth. These matrices are largely acknowledged for their influence on shaping the bones of the skull; however, it is not fully understood to what extent functional matrices mediate the formation of bone articulations. Aiming to identify whether or not functional matrices are key developmental factors guiding the formation of bone articulations, we have built a network null model of the skull that simulates unconstrained bone growth. This null model predicts bone articulations that arise due to a process of bone growth that is uniform in rate, direction and timing. By comparing predicted articulations with the actual bone articulations of the human skull, we have identified which boundaries specifically need the presence of functional matrices for their formation. We show that functional matrices are necessary to connect facial bones, whereas an unconstrained bone growth is sufficient to connect non-facial bones. This finding challenges the role of the brain in the formation of boundaries between bones in the braincase without neglecting its effect on skull shape. Ultimately, our null model suggests where to look for modified developmental mechanisms promoting changes in bone growth patterns that could affect the development and evolution of the head skeleton. PMID:24975579
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former yields the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithms are verified. PMID:27508502
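The n-step prediction step described above, treating the requester's mobility as a stationary Markov chain and raising the normalized single-step matrix to the power n, can be sketched as follows (the three-region count matrix is purely illustrative, not data from the paper):

```python
import numpy as np

# Illustrative single-step transition counts among three anonymized regions,
# e.g. obtained from mined single-step sequential rules (hypothetical values).
counts = np.array([[8.0, 1.0, 1.0],
                   [2.0, 6.0, 2.0],
                   [1.0, 3.0, 6.0]])

# Normalize each row to obtain the single-step transition probability matrix.
P = counts / counts.sum(axis=1, keepdims=True)

def n_step(P, n):
    """n-step transition probabilities of a stationary Markov chain."""
    return np.linalg.matrix_power(P, n)

# Probability that a requester reaches region 2 from region 0 in exactly 3 steps.
p_02 = n_step(P, 3)[0, 2]
```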
Michael, I; Hapeshi, E; Osorio, V; Perez, S; Petrovic, M; Zapata, A; Malato, S; Barceló, D; Fatta-Kassinos, D
2012-07-15
The pilot-scale solar degradation of trimethoprim (TMP) in different water matrices (demineralized water: DW, simulated natural freshwater: SW; simulated wastewater: SWW; and real effluent: RE) was investigated in this study. DOC removal was lower in the case of SW compared to DW, which can be attributed to the presence of inorganic anions which may act as scavengers of the HO·. Furthermore, the presence of organic carbon and higher salt content in SWW and RE led to lower mineralization per dose of hydrogen peroxide compared to DW and SW. Toxicity assays in SWW and RE were also performed indicating that toxicity is attributed to the compounds present in RE and their by-products formed during solar Fenton treatment and not to the intermediates formed by the oxidation of TMP. A large number of compounds generated by the photocatalytic transformation of TMP were identified by UPLC-QToF/MS. The degradation pathway revealed differences among the four matrices; however hydroxylation, demethylation and cleavage reactions were observed in all matrices. To the best of our knowledge this is the first time that TMP degradation products have been identified by adopting a solar Fenton process at a pilot-scale set-up, using four different aqueous matrices. Copyright © 2012 Elsevier B.V. All rights reserved.
Li, Yingjie; Cao, Dan; Wei, Ling; Tang, Yingying; Wang, Jijun
2015-11-01
This paper evaluates the large-scale structure of functional brain networks using graph theoretical concepts and investigates the difference in brain functional networks between patients with depression and healthy controls while they were processing emotional stimuli. Electroencephalography (EEG) activities were recorded from 16 patients with depression and 14 healthy controls when they performed a spatial search task for facial expressions. Correlations between all possible pairs of 59 electrodes were determined by coherence, and the coherence matrices were calculated in delta, theta, alpha, beta, and gamma bands (low gamma: 30-50Hz and high gamma: 50-80Hz, respectively). Graph theoretical analysis was applied to these matrices by using two indexes: the clustering coefficient and the characteristic path length. The global EEG coherence of patients with depression was significantly higher than that of healthy controls in both gamma bands, especially in the high gamma band. The global coherence in both gamma bands from healthy controls appeared higher in negative conditions than in positive conditions. All the brain networks were found to hold a regular and ordered topology during emotion processing. However, the brain network of patients with depression appeared randomized compared with the normal one. The abnormal network topology of patients with depression was detected in both the prefrontal and occipital regions. The negative bias from healthy controls occurred in both gamma bands during emotion processing, while it disappeared in patients with depression. The proposed work studied abnormally increased connectivity of brain functional networks in patients with depression. By combing the clustering coefficient and the characteristic path length, we found that the brain networks of patients with depression and healthy controls had regular networks during emotion processing. Yet the brain networks of the depressed group presented randomization trends. 
Moreover, negative bias was detected in the healthy controls during emotion processing, while it was not detected in patients with depression, which might be related to the types of negative stimuli used in this study. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Subjective randomness as statistical inference.
Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B
2018-06-01
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
Dissimilarities of reduced density matrices and eigenstate thermalization hypothesis
NASA Astrophysics Data System (ADS)
He, Song; Lin, Feng-Li; Zhang, Jia-ju
2017-12-01
We calculate various quantities that characterize the dissimilarity of reduced density matrices for a short interval of length ℓ in a two-dimensional (2D) large central charge conformal field theory (CFT). These quantities include the Rényi entropy, entanglement entropy, relative entropy, Jensen-Shannon divergence, as well as the Schatten 2-norm and 4-norm. We adopt the method of operator product expansion of twist operators, and calculate the short interval expansion of these quantities up to order ℓ^9 for the contributions from the vacuum conformal family. The formal forms of these dissimilarity measures and the derived Fisher information metric from contributions of general operators are also given. As an application of the results, we use these dissimilarity measures to compare the excited and thermal states, and examine the eigenstate thermalization hypothesis (ETH) by showing how they behave in the high temperature limit. This would help to understand how ETH in 2D CFT can be defined more precisely. We discuss the possibility that all the dissimilarity measures considered here vanish when comparing the reduced density matrices of an excited state and a generalized Gibbs ensemble thermal state. We also discuss ETH for a microcanonical ensemble thermal state in a 2D large central charge CFT, and find that it is approximately satisfied for a small subsystem and violated for a large subsystem.
What is the effect of matrices on cartilage repair? A systematic review.
Wylie, James D; Hartley, Melissa K; Kapron, Ashley L; Aoki, Stephen K; Maak, Travis G
2015-05-01
Articular cartilage has minimal endogenous ability to undergo repair. Multiple chondral restoration strategies have been attempted with varied results. The purpose of our review was to determine: (1) Does articular chondrocyte transplantation or matrix-assisted articular chondrocyte transplantation provide better patient-reported outcomes scores, MRI morphologic measurements, or histologic quality of repair tissue compared with microfracture in prospective comparative studies of articular cartilage repair; and (2) which available matrices for matrix-assisted articular chondrocyte transplantation show the best patient-reported outcomes scores, MRI morphologic measurements, or histologic quality of repair tissue? We conducted a systematic review of PubMed, CINAHL, and MEDLINE from March 2004 to February 2014 using keywords determined to be important for articular cartilage repair, including "cartilage", "chondral", "cell source", "chondrocyte", "matrix", "augment", "articular", "joint", "repair", "treatment", "regeneration", and "restoration" to find articles related to cell-based articular cartilage repair of the knee. The articles were reviewed by two authors (JDW, MKH), our study exclusion criteria were applied, and articles were determined to be relevant (or not) to the research questions. The Methodological Index for Nonrandomized Studies (MINORS) scale was used to judge the quality of nonrandomized manuscripts used in this review and the Jadad score was used to judge the quality of randomized trials. Seventeen articles were reviewed for the first research question and 83 articles were reviewed in the second research question from 301 articles identified in the original systematic search. The average MINORS score was 9.9 (62%) for noncomparative studies and 16.1 (67%) for comparative studies. The average Jadad score was 2.3 for the randomized studies. 
Articular chondrocyte transplantation shows better patient-reported outcomes at 5 years in patients without chronic symptoms preoperatively compared with microfracture (p = 0.026). Matrix-assisted articular chondrocyte transplantation consistently showed improved patient-reported functional outcomes compared with microfracture (p values ranging from < 0.001 to 0.029). Hyalograft C(®) (Anika Therapeutics Inc, Bedford, MA, USA) and Chondro-gide(®) (Genzyme Biosurgery, Kastrup, Denmark) are the matrices with the most published evidence in the literature, but no studies comparing different matrices met our inclusion criteria, because the literature consists only of uncontrolled case series. Matrix-assisted articular chondrocyte transplantation leads to better patient-reported outcomes in cartilage repair compared with microfracture; however, future prospective research is needed comparing different matrices to determine which products optimize cartilage repair. Level IV, therapeutic study.
Oulkar, Dasharath; Goon, Arnab; Dhanshetty, Manisha; Khan, Zareen; Satav, Sagar; Banerjee, Kaushik
2018-04-03
This paper reports a sensitive and cost-effective method of analysis for aflatoxins B1, B2, G1 and G2. The sample preparation method was primarily optimised for peanuts, followed by its validation in a range of peanut-processed products and cereal (rice, corn, millets) matrices. Peanut slurry [12.5 g peanut + 12.5 mL water] was extracted with methanol:water (8:2, 100 mL), cleaned through an immunoaffinity column and thereafter measured directly by ultra-performance liquid chromatography-fluorescence detection (UPLC-FLD), within a chromatographic runtime of 5 minutes. The use of a large-volume flow cell in the FLD eliminated the need for any post-column derivatisation and provided the lowest ever reported limits of quantification: 0.025 μg/kg for B1 and G1 and 0.01 μg/kg for B2 and G2. The single-laboratory validation of the method provided acceptable selectivity, linearity, recovery and precision for reliable quantification in all the test matrices, and demonstrated compliance with the EC 401/2006 guidelines for analytical quality control of aflatoxins in foodstuffs.
Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration.
Behrisch, Michael; Bach, Benjamin; Hund, Michael; Delz, Michael; Von Ruden, Laura; Fekete, Jean-Daniel; Schreck, Tobias
2017-01-01
In this work we address the problem of retrieving potentially interesting matrix views to support the exploration of networks. We introduce Matrix Diagnostics (or Magnostics), following in spirit related approaches for rating and ranking other visualization techniques, such as Scagnostics for scatter plots. Our approach ranks matrix views according to the appearance of specific visual patterns, such as blocks and lines, indicating the existence of topological motifs in the data, such as clusters, bi-graphs, or central nodes. Magnostics can be used to analyze, query, or search for visually similar matrices in large collections, or to assess the quality of matrix reordering algorithms. While many feature descriptors for image analysis exist, there is no evidence of how they perform for detecting patterns in matrices. In order to make an informed choice of feature descriptors for matrix diagnostics, we evaluate 30 feature descriptors (27 existing ones and three new descriptors that we designed specifically for Magnostics) with respect to four criteria: pattern response, pattern variability, pattern sensibility, and pattern discrimination. We conclude with an informed set of six descriptors as most appropriate for Magnostics and demonstrate their application in two scenarios: exploring a large collection of matrices and analyzing temporal networks.
A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations
NASA Technical Reports Server (NTRS)
Ghosh, Amitabha
1997-01-01
This report discusses some analytical procedures to enhance the real time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12 foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner necessitating exploring further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems. Then it discusses a geometrical interpretation of the residual correction schemes. Finally some results of the current investigation are presented.
Gibbs measures based on 1d (an)harmonic oscillators as mean-field limits
NASA Astrophysics Data System (ADS)
Lewin, Mathieu; Nam, Phan Thành; Rougerie, Nicolas
2018-04-01
We prove that Gibbs measures based on 1D defocusing nonlinear Schrödinger functionals with sub-harmonic trapping can be obtained as the mean-field/large temperature limit of the corresponding grand-canonical ensemble for many bosons. The limit measure is supported on Sobolev spaces of negative regularity, and the corresponding density matrices are not trace-class. The general proof strategy is that of a previous paper of ours, but we have to complement it with Hilbert-Schmidt estimates on reduced density matrices.
Modified conjugate gradient method for diagonalizing large matrices.
Jie, Quanlin; Liu, Dunhuan
2003-11-01
We present an iterative method to diagonalize large matrices. The basic idea is the same as in the conjugate gradient (CG) method, i.e., minimizing the Rayleigh quotient via its gradient and avoiding reintroducing errors along the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector and the corresponding gradient of the Rayleigh quotient, as well as some previous trial vectors. The gradient, together with the previous trial vectors, plays a role similar to that of the conjugate gradient in the original CG algorithm. Our numerical tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is suitable for first-principles calculations.
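A minimal numerical sketch of this idea (not the authors' code): each step performs a Rayleigh-Ritz diagonalization in the small subspace spanned by the current trial vector, the Rayleigh-quotient gradient, and the previous trial vector, in the spirit of LOBPCG:

```python
import numpy as np

def lowest_eigenpair(A, tol=1e-10, max_iter=2000, seed=0):
    """Find the lowest eigenpair of a symmetric matrix A by Rayleigh-Ritz
    iteration in the span of {trial vector, gradient, previous trial vector}."""
    n = A.shape[0]
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    x_prev = None
    for _ in range(max_iter):
        rho = x @ A @ x                          # Rayleigh quotient
        g = A @ x - rho * x                      # its gradient (residual)
        if np.linalg.norm(g) < tol:
            break
        cols = [x, g] if x_prev is None else [x, g, x_prev]
        V, _ = np.linalg.qr(np.column_stack(cols))   # orthonormal subspace
        _, U = np.linalg.eigh(V.T @ A @ V)       # small projected eigenproblem
        x_prev = x
        x = V @ U[:, 0]                          # Ritz vector of lowest value
        x /= np.linalg.norm(x)
    return x @ A @ x, x
```

Keeping the previous trial vector in the subspace is what plays the role of the conjugate direction; each iteration costs one matrix-vector product plus a tiny (at most 3x3) dense eigenproblem.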
Smith, Robert C; Boules, Sylvia; Mattiuz, Sanela; Youssef, Mary; Tobe, Russell H; Sershen, Henry; Lajtha, Abel; Nolan, Karen; Amiaz, Revital; Davis, John M
2015-10-01
Schizophrenia is characterized by cognitive deficits which persist after acute symptoms have been treated or resolved. Transcranial direct current stimulation (tDCS) has been reported to improve cognition and reduce smoking craving in healthy subjects but has not been as carefully evaluated in a randomized controlled study for these effects in schizophrenia. We conducted a randomized double-blind, sham-controlled study of the effects of 5 sessions of tDCS (2 milliamps for 20 minutes) on cognition, psychiatric symptoms, and smoking and cigarette craving in 37 outpatients with schizophrenia or schizoaffective disorder who were current smokers. Thirty subjects provided evaluable data on the MATRICS Consensus Cognitive Battery (MCCB), with the MCCB Composite score as the primary outcome measure. Active compared to sham tDCS subjects showed significant improvements after the fifth tDCS session in MCCB Composite score (p=0.008) and on the MCCB Working Memory (p=0.002) and Attention-Vigilance (p=0.027) domain scores, with large effect sizes. MCCB Composite and Working Memory domain scores remained significant at Benjamini-Hochberg corrected significance levels (α=0.05). There were no statistically significant effects on secondary outcome measures of psychiatric symptoms (PANSS scores), hallucinations, cigarette craving, or cigarettes smoked. The positive effects of tDCS on cognitive performance suggest a potentially efficacious treatment for cognitive deficits in partially recovered chronic schizophrenia outpatients that should be further investigated. Copyright © 2015 Elsevier B.V. All rights reserved.
Broken Ergodicity in Two-Dimensional Homogeneous Magnetohydrodynamic Turbulence
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2010-01-01
Two-dimensional (2-D) homogeneous magnetohydrodynamic (MHD) turbulence has many of the same qualitative features as three-dimensional (3-D) homogeneous MHD turbulence. These features include several ideal invariants, along with the phenomenon of broken ergodicity. Broken ergodicity appears when certain modes act like random variables with mean values that are large compared to their standard deviations, indicating a coherent structure or dynamo. Recently, the origin of broken ergodicity in 3-D MHD turbulence that is manifest in the lowest wavenumbers was explained. Here, a detailed description of the origins of broken ergodicity in 2-D MHD turbulence is presented. It will be seen that broken ergodicity in ideal 2-D MHD turbulence can be manifest in the lowest wavenumbers of a finite numerical model for certain initial conditions or in the highest wavenumbers for another set of initial conditions. The origins of broken ergodicity in ideal 2-D homogeneous MHD turbulence are found through an eigenanalysis of the covariance matrices of the modal probability density functions. It will also be shown that when the lowest wavenumber magnetic field becomes quasi-stationary, the higher wavenumber modes can propagate as Alfven waves on these almost static large-scale magnetic structures.
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
NASA Astrophysics Data System (ADS)
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input in a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
Product of Ginibre matrices: Fuss-Catalan and Raney distributions
NASA Astrophysics Data System (ADS)
Penson, Karol A.; Życzkowski, Karol
2011-06-01
Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions Ps(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions Ps(x) in terms of a combination of s hypergeometric functions of the type sFs-1. The explicit formula derived here is exact for an arbitrary positive integer s, and for s=1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.
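The moment identity in this abstract can be checked directly. A minimal sketch, assuming the standard Fuss-Catalan formula FC_n(s) = C((s+1)n, n)/(sn+1), which the abstract does not spell out: for s = 1 these numbers reduce to the Catalan numbers, the moments of the Marchenko-Pastur distribution.

```python
from math import comb

def fuss_catalan(s, n):
    """Fuss-Catalan number of order s: the n-th moment of P_s(x)."""
    return comb((s + 1) * n, n) // (s * n + 1)  # exact integer division

# s = 1 recovers the Catalan numbers, i.e. the Marchenko-Pastur moments
catalan = [fuss_catalan(1, n) for n in range(6)]   # 1, 1, 2, 5, 14, 42

# s = 2: moments of squared singular values of a product of two Ginibre matrices
fc2 = [fuss_catalan(2, n) for n in range(5)]       # 1, 1, 3, 12, 55
```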
Probabilistic Signal Recovery and Random Matrices
2016-12-08
Applications in statistics, biomedical data analysis, quantization, dimension reduction, and network science. Topics include high-dimensional inference and geometry, and low-rank approximation with applications to community detection in networks (C. Le, E. Levina, R. Vershynin, Annals of Statistics 44 (2016), 373-400).
Learning Circulant Sensing Kernels
2014-03-01
Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary altogether and obtain even better performance at scale. Among related constructions of structured random matrices, Tropp et al. [28] describe a random filter for acquiring a signal x̄; Haupt et al. [12] describe a channel estimation problem to identify a …
Improved Estimation and Interpretation of Correlations in Neural Circuits
Yatsenko, Dimitri; Josić, Krešimir; Ecker, Alexander S.; Froudarakis, Emmanouil; Cotton, R. James; Tolias, Andreas S.
2015-01-01
Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing a structure on the estimate. The amount of improvement depends on how closely the assumed structure represents dependencies in the data. Therefore, the selection of the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator informs about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150–350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix should be best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions. These interactions reflected the physical distances and orientation tuning properties of cells: The density of positive ‘excitatory’ interactions decreased rapidly with geometric distances and with differences in orientation preference whereas negative ‘inhibitory’ interactions were less selective. 
Because of its superior performance, this ‘sparse+latent’ estimator likely provides a more physiologically relevant representation of the functional connectivity in densely sampled recordings than the sample correlation matrix. PMID:25826696
Esteve-Altava, Borja; Rasskin-Gutman, Diego
2014-09-01
Craniofacial sutures and synchondroses form the boundaries among bones in the human skull, providing functional, developmental and evolutionary information. Bone articulations in the skull arise due to interactions between genetic regulatory mechanisms and epigenetic factors such as functional matrices (soft tissues and cranial cavities), which mediate bone growth. These matrices are largely acknowledged for their influence on shaping the bones of the skull; however, it is not fully understood to what extent functional matrices mediate the formation of bone articulations. Aiming to identify whether or not functional matrices are key developmental factors guiding the formation of bone articulations, we have built a network null model of the skull that simulates unconstrained bone growth. This null model predicts bone articulations that arise due to a process of bone growth that is uniform in rate, direction and timing. By comparing predicted articulations with the actual bone articulations of the human skull, we have identified which boundaries specifically need the presence of functional matrices for their formation. We show that functional matrices are necessary to connect facial bones, whereas an unconstrained bone growth is sufficient to connect non-facial bones. This finding challenges the role of the brain in the formation of boundaries between bones in the braincase without neglecting its effect on skull shape. Ultimately, our null model suggests where to look for modified developmental mechanisms promoting changes in bone growth patterns that could affect the development and evolution of the head skeleton. © 2014 Anatomical Society.
NASA Astrophysics Data System (ADS)
Castro, María Eugenia; Díaz, Javier; Muñoz-Caro, Camelia; Niño, Alfonso
2011-09-01
We present a system of classes, SHMatrix, to deal in a unified way with the computation of eigenvalues and eigenvectors of real symmetric and Hermitian matrices. Two descendant classes, one for the real symmetric case and one for the Hermitian case, override the abstract methods defined in a base class. The use of inheritance and polymorphism allows handling objects of any descendant class through a single reference of the base class. The system of classes is intended to be the core element of more sophisticated methods for large eigenvalue problems, such as those arising in the variational treatment of realistic quantum mechanical problems. The present system of classes allows computing a subset of all the possible eigenvalues and, optionally, the corresponding eigenvectors. Comparison with well-established solutions for analogous eigenvalue problems, such as those included in LAPACK, shows that the present solution is competitive with them. Program summary: Program title: SHMatrix. Catalogue identifier: AEHZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2616. No. of bytes in distributed program, including test data, etc.: 127 312. Distribution format: tar.gz. Programming language: Standard ANSI C++. Computer: PCs and workstations. Operating system: Linux, Windows. Classification: 4.8. Nature of problem: The treatment of problems involving eigensystems is a central topic in the quantum mechanical field. Here, the use of the variational approach leads to the computation of eigenvalues and eigenvectors of real symmetric and Hermitian Hamiltonian matrices. Realistic models with several degrees of freedom lead to large (sometimes very large) matrices.
Different techniques, such as divide and conquer, can be used to factorize the matrices in order to apply a parallel computing approach. However, it is still useful to have a core procedure able to tackle the computation of eigenvalues and eigenvectors once the matrix has been factorized into pieces of sufficiently small size. Several available software packages, such as LAPACK, tackle this problem under the traditional imperative programming paradigm. To ease the modelling of complex quantum mechanical systems, it is of interest to apply an object-oriented approach to the eigenproblem, which offers the advantage of a single, uniform treatment of the real symmetric and Hermitian cases. Solution method: To reach the above goals, we have developed a system of classes, SHMatrix, composed of an abstract base class and two descendant classes, one for real symmetric matrices and the other for the Hermitian case. The object-oriented characteristics of inheritance and polymorphism allow handling both cases using a single reference of the base class. The basic computing strategy applied in SHMatrix allows computing subsets of eigenvalues and (optionally) eigenvectors. The tests performed show that SHMatrix is competitive with, and more efficient for large matrices than, the equivalent routines of the LAPACK package. Running time: The examples included in the distribution take only a couple of seconds to run.
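The inheritance-and-polymorphism design described above can be sketched in a few lines of Python. This is a hedged illustration only: SHMatrix itself is C++, the class and method names here are invented, and NumPy's `eigh` stands in for the actual solver.

```python
import numpy as np
from abc import ABC, abstractmethod

class EigenSolver(ABC):
    """Base class: one interface for symmetric and Hermitian eigenproblems."""
    def __init__(self, matrix):
        self.matrix = np.asarray(matrix)
        self.check()

    @abstractmethod
    def check(self):
        ...

    def eigh(self, k=None):
        """Return the k smallest eigenvalues (all if k is None) and vectors."""
        vals, vecs = np.linalg.eigh(self.matrix)
        return (vals, vecs) if k is None else (vals[:k], vecs[:, :k])

class RealSymmetric(EigenSolver):
    def check(self):
        assert np.allclose(self.matrix, self.matrix.T)

class Hermitian(EigenSolver):
    def check(self):
        assert np.allclose(self.matrix, self.matrix.conj().T)

# Polymorphism: a single base-class reference handles both cases.
solvers = [RealSymmetric([[2.0, 1.0], [1.0, 2.0]]),
           Hermitian([[2.0, 1j], [-1j, 2.0]])]
eigenvalues = [s.eigh(k=1)[0][0] for s in solvers]  # smallest eigenvalue of each
```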
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters (in our case, the sparsity of the sensors) to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient, each involving the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
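As a toy illustration of the D-optimality criterion and of randomized determinant estimation: the sketch below compares the exact log-determinant of a synthetic "information matrix" with a Hutchinson-type randomized estimate of tr(log F). The matrix and probe counts are my choices, and the dense matrix logarithm is formed only because the example is tiny; the scalable estimators the abstract refers to avoid forming such objects explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
F = A @ A.T + n * np.eye(n)            # synthetic SPD "information matrix"

# Exact D-optimality criterion: log det(F)
sign, logdet_exact = np.linalg.slogdet(F)

# Randomized estimate: log det(F) = tr(log F), approximated with
# Rademacher probe vectors (Hutchinson trace estimator).
w, V = np.linalg.eigh(F)
logF = (V * np.log(w)) @ V.T           # dense matrix logarithm (toy scale only)
z = rng.choice([-1.0, 1.0], size=(n, 200))
logdet_est = np.mean(np.einsum('ij,ij->j', z, logF @ z))
```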
NASA Astrophysics Data System (ADS)
Martikainen, Julia; Penttilä, Antti; Gritsevich, Maria; Muinonen, Karri
2017-10-01
Asteroids have remained mostly the same for the past 4.5 billion years, and provide us information on the origin, evolution and current state of the Solar System. Asteroids and meteorites can be linked by matching their respective reflectance spectra. This is difficult, because spectral features depend strongly on the surface properties, and meteorite surfaces are free of the regolith dust present on asteroids. Furthermore, asteroid surfaces experience space weathering, which affects their spectral features. We present a novel simulation framework for assessing the spectral properties of meteorites and asteroids and matching their reflectance spectra. The simulations are carried out by utilizing a light-scattering code that takes inhomogeneous waves into account and simulates light scattering by Gaussian-random-sphere particles large compared to the wavelength of the incident light. The code uses incoherent input and computes phase matrices by utilizing incoherent scattering matrices. Reflectance spectra are modeled by combining olivine, pyroxene, and iron, the most common materials that dominate the spectral features of asteroids and meteorites. Space weathering is taken into account by adding nanoiron into the modeled asteroid spectrum. The complex refractive indices needed for the simulations are obtained from existing databases, or derived using an optimization that utilizes our ray-optics code and the measured spectrum of the material. We demonstrate our approach by applying it to the reflectance spectrum of (4) Vesta and the reflectance spectrum of the Johnstown meteorite measured with the University of Helsinki integrating-sphere UV-Vis-NIR spectrometer. Acknowledgments: The research is funded by the ERC Advanced Grant No. 320773 (SAEMPL).
von Cramon-Taubadel, Noreen; Schroeder, Lauren
2016-10-01
Estimation of the variance-covariance (V/CV) structure of fragmentary bioarchaeological populations requires the use of proxy extant V/CV parameters. However, it is currently unclear whether extant human populations exhibit equivalent V/CV structures. Random skewers (RS) and hierarchical analyses of common principal components (CPC) were applied to a modern human cranial dataset. Cranial V/CV similarity was assessed globally for samples of individual populations (jackknifed method) and for pairwise population sample contrasts. The results were examined in light of potential explanatory factors for covariance difference, such as geographic region, among-group distance, and sample size. RS analyses showed that population samples exhibited highly correlated multivariate responses to selection, and that differences in RS results were primarily a consequence of differences in sample size. The CPC method yielded mixed results, depending upon the statistical criterion used to evaluate the hierarchy. The hypothesis-testing (step-up) approach was deemed problematic due to sensitivity to low statistical power and elevated Type I errors. In contrast, the model-fitting (lowest AIC) approach suggested that V/CV matrices were proportional and/or shared a large number of CPCs. Pairwise population sample CPC results were correlated with cranial distance, suggesting that population history explains some of the variability in V/CV structure among groups. The results indicate that patterns of covariance in human craniometric samples are broadly similar but not identical. These findings have important implications for choosing extant covariance matrices to use as proxy V/CV parameters in evolutionary analyses of past populations. © 2016 Wiley Periodicals, Inc.
Single-qubit decoherence under a separable coupling to a random matrix environment
NASA Astrophysics Data System (ADS)
Carrera, M.; Gorin, T.; Seligman, T. H.
2014-08-01
This paper describes the dynamics of a quantum two-level system (qubit) under the influence of an environment modeled by an ensemble of random matrices. In distinction to earlier work, we consider here separable couplings and focus on a regime where the decoherence time is of the same order of magnitude as the environmental Heisenberg time. We derive an analytical expression in the linear response approximation, and study its accuracy by comparison with numerical simulations. We discuss a series of unusual properties, such as purity oscillations, strong signatures of spectral correlations (in the environment Hamiltonian), memory effects, and symmetry-breaking equilibrium states.
“SNP Snappy”: A Strategy for Fast Genome-Wide Association Studies Fitting a Full Mixed Model
Meyer, Karin; Tier, Bruce
2012-01-01
A strategy to reduce computational demands of genome-wide association studies fitting a mixed model is presented. Improvements are achieved by utilizing a large proportion of calculations that remain constant across the multiple analyses for individual markers involved, with estimates obtained without inverting large matrices. PMID:22021386
Designing Hyperchaotic Cat Maps With Any Desired Number of Positive Lyapunov Exponents.
Hua, Zhongyun; Yi, Shuang; Zhou, Yicong; Li, Chengqing; Wu, Yue
2018-02-01
Generating chaotic maps with dynamics expected by users is a challenging topic. Utilizing the inherent relation between the Lyapunov exponents (LEs) of the Cat map and its associated Cat matrix, this paper proposes a simple but efficient method to construct an n-dimensional (n-D) hyperchaotic Cat map (HCM) with any desired number of positive LEs. The method first generates two basic n-D Cat matrices iteratively and then constructs the final n-D Cat matrix by performing a similarity transformation on one basic n-D Cat matrix by the other. Given any number of positive LEs, it can generate an n-D HCM with the desired hyperchaotic complexity. Two illustrative examples of n-D HCMs were constructed to show the effectiveness of the proposed method and to verify the inherent relation between the LEs and the Cat matrix. Theoretical analysis proves that the parameter space of the generated HCM is very large. Performance evaluations show that, compared with existing methods, the proposed method can construct n-D HCMs with lower computational complexity, and their outputs demonstrate strong randomness and complex ergodicity.
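The inherent relation between the LEs and the Cat matrix that the paper exploits can be checked on the classic 2-D case: for a linear torus map, the LEs are the logarithms of the absolute eigenvalues of the matrix. A minimal sketch; the paper's n-D construction via similarity transformations is not reproduced here.

```python
import numpy as np

# Classic 2-D Arnold Cat matrix: integer entries with det = 1 (area-preserving)
C = np.array([[2, 1],
              [1, 1]])

# For the linear torus map x -> C x (mod 1), the Lyapunov exponents are
# the logs of the absolute eigenvalues of the Cat matrix.
les = np.sort(np.log(np.abs(np.linalg.eigvals(C))))[::-1]
# one positive LE, log((3 + sqrt(5)) / 2); the LEs sum to log|det C| = 0
```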
Si, Yang; Wang, Xueqin; Dou, Lvye; Yu, Jianyong; Ding, Bin
2018-04-01
Ultralight aerogels that are both highly resilient and compressible have been fabricated from various materials including polymer, carbon, and metal. However, it has remained a great challenge to realize high elasticity in aerogels solely based on ceramic components. We report a scalable strategy to create superelastic lamellar-structured ceramic nanofibrous aerogels (CNFAs) by combining SiO2 nanofibers with aluminoborosilicate matrices. This approach causes the randomly deposited SiO2 nanofibers to assemble into elastic ceramic aerogels with tunable densities and desired shapes on a large scale. The resulting CNFAs exhibit the integrated properties of flyweight densities of >0.15 mg cm−3, rapid recovery from 80% strain, zero Poisson's ratio, and temperature-invariant superelasticity to 1100°C. The integral ceramic nature also provided the CNFAs with robust fire resistance and thermal insulation performance. The successful synthesis of these fascinating materials may provide new insights into the development of ceramics in a lightweight, resilient, and structurally adaptive form.
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-01-01
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org. PMID:26063822
Cycle-expansion method for the Lyapunov exponent, susceptibility, and higher moments.
Charbonneau, Patrick; Li, Yue Cathy; Pfister, Henry D; Yaida, Sho
2017-09-01
Lyapunov exponents characterize the chaotic nature of dynamical systems by quantifying the growth rate of uncertainty associated with the imperfect measurement of initial conditions. Finite-time estimates of the exponent, however, experience fluctuations due to both the initial condition and the stochastic nature of the dynamical path. The scale of these fluctuations is governed by the Lyapunov susceptibility, the finiteness of which typically provides a sufficient condition for the law of large numbers to apply. Here, we obtain a formally exact expression for this susceptibility in terms of the Ruelle dynamical ζ function for one-dimensional systems. We further show that, for systems governed by sequences of random matrices, the cycle expansion of the ζ function enables systematic computations of the Lyapunov susceptibility and of its higher-moment generalizations. The method is here applied to a class of dynamical models that maps to static disordered spin chains with interactions stretching over a varying distance and is tested against Monte Carlo simulations.
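For products of random matrices, the finite-time exponent and its realization-to-realization fluctuations can be estimated by direct simulation. A hedged sketch: this is plain Monte Carlo, not the cycle-expansion method of the paper, and the Gaussian matrix ensemble is my choice of example.

```python
import numpy as np

def finite_time_le(matrices):
    """Finite-time (top) Lyapunov exponent of a matrix product, computed
    with per-step renormalization of the evolved vector to avoid overflow."""
    v = np.ones(matrices[0].shape[0])
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for M in matrices:
        v = M @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm
    return log_growth / len(matrices)

rng = np.random.default_rng(1)
# Finite-time estimates fluctuate from realization to realization; the scale
# of these fluctuations is what the Lyapunov susceptibility governs.
samples = [finite_time_le([rng.standard_normal((2, 2)) for _ in range(500)])
           for _ in range(20)]
mean_le, std_le = np.mean(samples), np.std(samples)
```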
Density-matrix based determination of low-energy model Hamiltonians from ab initio wavefunctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Changlani, Hitesh J.; Zheng, Huihuo; Wagner, Lucas K.
2015-09-14
We propose a way of obtaining effective low-energy Hubbard-like model Hamiltonians from ab initio quantum Monte Carlo calculations for molecular and extended systems. The Hamiltonian parameters are fit to best match the ab initio two-body density matrices and energies of the ground and excited states, and thus we refer to the method as ab initio density matrix based downfolding. For benzene (a finite system), we find good agreement with experimentally available energy gaps without using any experimental inputs. For graphene, a two-dimensional solid (extended system) with periodic boundary conditions, we find the effective on-site Hubbard U*/t to be 1.3 ± 0.2, comparable to a recent estimate based on the constrained random phase approximation. For molecules, such parameterizations enable calculation of excited states that are usually not accessible within ground-state approaches. For solids, the effective Hamiltonian enables large-scale calculations using techniques designed for lattice models.
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on an iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store, and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
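The iterative-perturbation idea, reusing the factorized unperturbed stiffness as a preconditioner so that no partial derivatives of the element matrices are needed, can be sketched on a toy random system. This is a hedged illustration with invented matrices, and an explicit inverse stands in for the stored factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
B = rng.standard_normal((n, n))
K0 = B @ B.T + n * np.eye(n)          # unperturbed (SPD) stiffness matrix
S = rng.standard_normal((n, n))
dK = 0.05 * (S + S.T)                 # small symmetric random perturbation
f = rng.standard_normal(n)

# Factor K0 once and reuse it as the iteration preconditioner:
#   K0 x_{k+1} = f - dK x_k
K0_inv = np.linalg.inv(K0)            # stands in for a reusable factorization
x = K0_inv @ f                        # unperturbed (zeroth-order) solution
for _ in range(50):
    x = K0_inv @ (f - dK @ x)         # converges when dK is small vs. K0

x_direct = np.linalg.solve(K0 + dK, f)
```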
A probabilistic model of a porous heat exchanger
NASA Technical Reports Server (NTRS)
Agrawal, O. P.; Lin, X. A.
1995-01-01
This paper presents a probabilistic one-dimensional finite element model for heat transfer processes in porous heat exchangers. The Galerkin approach is used to develop the finite element matrices. Some of the submatrices are asymmetric due to the presence of the flow term. The Neumann expansion is used to write the temperature distribution as a series of random variables, and the expectation operator is applied to obtain the mean and deviation statistics. To demonstrate the feasibility of the formulation, a one-dimensional model of heat transfer phenomenon in superfluid flow through a porous media is considered. Results of this formulation agree well with the Monte-Carlo simulations and the analytical solutions. Although the numerical experiments are confined to parametric random variables, a formulation is presented to account for the random spatial variations.
Non-local transport in turbulent MHD convection
NASA Technical Reports Server (NTRS)
Miesch, Mark; Brandenburg, Axel; Zweibel, Ellen; Toomre, Juri
1995-01-01
The nonlocal, non-diffusive transport of passive scalars in turbulent magnetohydrodynamic (MHD) convection is investigated using transilient matrices. These matrices describe the probability that a tracer particle beginning at one position in a flow will be advected to another position after some time. A method for calculating these matrices from simulation data, which involves following the trajectories of passive tracer particles and computing their transport statistics, is presented. The method is applied to study the transport in several simulations of turbulent, rotating, three-dimensional, compressible, penetrative MHD convection. Transport coefficients and other diagnostics are used to quantify the transport, which is found to resemble advection more closely than diffusion. Some of the results are found to have direct relevance to other physical problems, such as the light-element depletion in solar-type stars. The large kurtosis found for downward-moving particles at the base of the convection zone implies several extreme events.
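The construction of a transilient matrix from tracer trajectories can be sketched by simple binning of start and end positions. A hedged toy version: the depths, the random-walk "advection", and the bin edges are all invented here, whereas the paper computes these statistics from MHD simulation data.

```python
import numpy as np

def transilient_matrix(z_start, z_end, edges):
    """Estimate T[i, j], the probability that a tracer starting in bin j
    is found in bin i after the elapsed time (columns sum to one)."""
    j = np.digitize(z_start, edges) - 1
    i = np.digitize(z_end, edges) - 1
    nbins = len(edges) - 1
    counts = np.zeros((nbins, nbins))
    np.add.at(counts, (i, j), 1.0)
    col_totals = counts.sum(axis=0)
    return counts / np.where(col_totals > 0, col_totals, 1.0)

rng = np.random.default_rng(2)
z0 = rng.uniform(0.0, 1.0, 10000)                    # tracer start depths
z1 = np.clip(z0 + 0.1 * rng.standard_normal(10000),  # "advected" end depths
             0.0, 1.0 - 1e-9)
T = transilient_matrix(z0, z1, np.linspace(0.0, 1.0, 11))
```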
Yang Baxter and anisotropic sigma and lambda models, cyclic RG and exact S-matrices
NASA Astrophysics Data System (ADS)
Appadu, Calan; Hollowood, Timothy J.; Price, Dafydd; Thompson, Daniel C.
2017-09-01
Integrable deformations of SU(2) sigma and lambda models are considered at the classical and quantum levels. These are the Yang-Baxter and XXZ-type anisotropic deformations. The XXZ-type deformations are UV safe in one regime, while in another regime, like the Yang-Baxter deformations, they exhibit cyclic RG behaviour. The associated affine quantum group symmetry, realized classically at the Poisson bracket level, has q equal to a complex phase in the UV-safe regime and q real in the cyclic RG regime, where q is an RG invariant. Based on the symmetries and RG flow, we propose exact factorizable S-matrices to describe the scattering of states in the lambda models, from which the sigma models follow by taking a limit and non-abelian T-duality. In the cyclic RG regimes, the S-matrices are periodic functions of rapidity at large rapidity, and in the Yang-Baxter case violate parity.
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems of solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
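The "obvious approach" mentioned at the end, solving an equivalent real system for the real and imaginary parts of x, can be sketched as follows. A hedged toy example with an invented complex symmetric matrix and a dense solver; the abstract argues that the quasi-minimal residual Lanczos variant is preferable to this doubling in size.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
S = rng.standard_normal((n, n))
A = (S + S.T) + 1j * np.diag(rng.uniform(1.0, 2.0, n))  # A = A^T, not Hermitian
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Equivalent real 2n x 2n system for x = x_r + i*x_i:
#   [[Re A, -Im A], [Im A, Re A]] [x_r; x_i] = [Re b; Im b]
R = np.block([[A.real, -A.imag],
              [A.imag,  A.real]])
y = np.linalg.solve(R, np.concatenate([b.real, b.imag]))
x = y[:n] + 1j * y[n:]
```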
NDMA formation kinetics from three pharmaceuticals in four water matrices.
Shen, Ruqiao; Andrews, Susan A
2011-11-01
N-nitrosodimethylamine (NDMA) is an emerging disinfection by-product (DBP) that has been widely detected in many drinking water systems and is commonly associated with the chloramine disinfection process. Some amine-based pharmaceuticals have been demonstrated to form NDMA during chloramination, but studies regarding the reaction kinetics are largely lacking. This study investigates the NDMA formation kinetics from ranitidine, chlorphenamine, and doxylamine under practical chloramine disinfection conditions. The formation profile was monitored in both lab-grade water and real water matrices, and a statistical model is proposed to describe and predict the NDMA formation from selected pharmaceuticals in various water matrices. The results indicate the significant impact of water matrix components and reaction time on the NDMA formation from selected pharmaceuticals, and provide fresh insights into the estimation of the ultimate NDMA formation potential from pharmaceutical precursors.
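The abstract does not reproduce the statistical model itself; as a hedged illustration, a first-order formation law of the kind often fitted to DBP formation data relates concentration at time t to an ultimate formation potential:

```python
# A common first-order kinetic form often used for DBP formation
# (an assumed stand-in, not the paper's actual statistical model):
#   NDMA(t) = NDMA_max * (1 - exp(-k * t))
# NDMA_max is the ultimate formation potential; k sets how fast it is reached.
from math import exp, log

def ndma(t, ndma_max, k):
    return ndma_max * (1.0 - exp(-k * t))

# With an illustrative k = 0.05 per hour, half of the ultimate potential
# is reached at t = ln(2)/k, about 13.9 hours.
t_half = log(2) / 0.05
print(abs(ndma(t_half, 100.0, 0.05) - 50.0) < 1e-9)  # True
```

Fitting k and NDMA_max separately per water matrix is one simple way to capture the matrix and reaction-time effects the study reports.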
Production of mycotoxins by filamentous fungi in untreated surface water.
Oliveira, Beatriz R; Mata, Ana T; Ferreira, João P; Barreto Crespo, Maria T; Pereira, Vanessa J; Bronze, Maria R
2018-04-16
Several research studies have reported that mycotoxins and other metabolites can be produced by fungi in certain matrices such as food. In recent years, attention has been drawn to the wide occurrence and identification of fungi in drinking water sources. Due to the large demand for water for drinking, watering, and food production purposes, it is imperative that further research be conducted to investigate whether mycotoxins may be produced in water matrices. This paper describes the results obtained when a validated analytical method was applied to detect and quantify mycotoxins produced as a result of fungi inoculation and growth in untreated surface water. Aflatoxins B1 and B2, fumonisin B3, and ochratoxin A were detected at concentrations up to 35 ng/L. These results show that fungi can produce mycotoxins in water matrices in non-negligible quantities and, as such, attention must be given to the presence of fungi in water.
Long-range correlations in time series generated by time-fractional diffusion: A numerical study
NASA Astrophysics Data System (ADS)
Barbieri, Davide; Vivoli, Alessandro
2005-09-01
Time series models showing power-law tails in autocorrelation functions are common in econometrics. A special non-Markovian model for this kind of time series is provided by the random walk introduced by Gorenflo et al. as a discretization of time-fractional diffusion. The time series so obtained are analyzed here from a numerical point of view in terms of autocorrelations and covariance matrices.
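The empirical quantity in question is the sample autocorrelation at lag k; a minimal sketch of the estimator (exercised on an ordinary AR(1) series rather than the fractional-diffusion walk, which is not reproduced here):

```python
# Sample autocorrelation estimate at lag k for a time series x:
#   rho(k) = sum_t (x_t - m)(x_{t+k} - m) / sum_t (x_t - m)^2
# A power-law tail in rho(k) is the signature of long-range correlations.
import random

def autocorrelation(x, k):
    n = len(x)
    m = sum(x) / n
    var = sum((xi - m) ** 2 for xi in x)
    return sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / var

# Exercise the estimator on an AR(1) series, whose lag-1 autocorrelation
# is close to the coefficient a = 0.9 (short-memory, geometric decay).
random.seed(0)
a, x = 0.9, [0.0]
for _ in range(5000):
    x.append(a * x[-1] + random.gauss(0.0, 1.0))
r1 = autocorrelation(x, 1)
print(0.8 < r1 < 1.0)
```

For a long-memory process the same estimator decays as a power law in k instead of geometrically, which is what the numerical study measures.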
Orbit Determination Using Vinti’s Solution
2016-09-15
Acronym definitions (fragment): …Surveillance Network; STK: Systems Tool Kit; TBP: Two Body Problem; TLE: Two-line Element Set; UKF: Unscented Kalman Filter; WPAFB: Wright… …simplicity, stability, and speed. On the other hand, Kalman filters would be best suited for sequential estimation of stochastic or random components of a… …can be likened to how an Unscented Kalman Filter samples a system's nonlinearities directly, avoiding linearizing the dynamics in the partials matrices.
Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang
2018-03-01
This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.
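The traffic-reduction idea behind event triggering can be illustrated with a simple send-on-delta rule; this is only a toy sketch with an assumed threshold condition, not the paper's trigger, which is derived from the recursive matrix inequalities:

```python
# Send-on-delta event trigger: a sensor transmits its measurement y_k only
# when it deviates from the last transmitted value by more than delta,
# so "unimportant" samples never enter the network.

def event_triggered_stream(measurements, delta):
    sent = []          # (time index, value) pairs actually transmitted
    last = None
    for k, y in enumerate(measurements):
        if last is None or abs(y - last) > delta:
            sent.append((k, y))
            last = y
    return sent

ys = [0.0, 0.05, 0.1, 0.9, 0.95, 2.0, 2.02, 2.01]
sent = event_triggered_stream(ys, delta=0.5)
print(sent)  # [(0, 0.0), (3, 0.9), (5, 2.0)] - only 3 of 8 samples sent
```

The filter on the receiving side must then be designed to tolerate the gaps this rule creates, which is where the variance-constrained analysis comes in.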
Multi-dimensional Fokker-Planck equation analysis using the modified finite element method
NASA Astrophysics Data System (ADS)
Náprstek, J.; Král, R.
2016-09-01
The Fokker-Planck equation (FPE) is a frequently used tool for obtaining the cross probability density function (PDF) of the response of a dynamic system excited by a vector of random processes. FEM represents a very effective solution possibility, particularly when transition processes are investigated or a more detailed solution is needed. Existing papers deal with single-degree-of-freedom (SDOF) systems only, so the respective FPE includes only two independent space variables. Stepping over this limit into MDOF systems, a number of specific problems related to true multi-dimensionality must be overcome. Unlike earlier studies, multi-dimensional simplex elements in arbitrary dimension should be deployed and rectangular (multi-brick) elements abandoned. Simple closed formulae for integration over a multi-dimensional domain have been derived. Another specific problem is the generation of a multi-dimensional finite element mesh. The assembly of the global system matrices requires newly composed algorithms owing to the multi-dimensionality. The system matrices are quite full, so the sparsity commonly exploited in conventional 2D/3D FEM applications offers no advantage. After verification of the partial algorithms, an illustrative example dealing with a 2DOF non-linear aeroelastic system under combined random and deterministic excitation is discussed.
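The abstract does not state its integration formulae; the classical closed-form result for this setting, which may be what is meant, integrates a monomial in barycentric coordinates exactly over a d-dimensional simplex:

```python
# Classical closed formula for integrating a barycentric monomial over a
# d-dimensional simplex of volume V (assumed here as the kind of formula
# the paper derives; it is the standard exact result):
#   integral( L_1^a1 * ... * L_{d+1}^a_{d+1} ) dV
#       = d! * V * (a1! * ... * a_{d+1}!) / (a1 + ... + a_{d+1} + d)!
from math import factorial

def simplex_monomial_integral(powers, volume, d):
    num = 1
    for a in powers:
        num *= factorial(a)
    return factorial(d) * volume * num / factorial(sum(powers) + d)

# Sanity check on the unit triangle (d = 2, volume 1/2):
# the integral of L_1 over a triangle of area A is A/3 = 1/6.
val = simplex_monomial_integral([1, 0, 0], 0.5, 2)
print(abs(val - 1 / 6) < 1e-12)  # True
```

Because the formula holds in any dimension, element matrices for simplex elements can be assembled exactly without numerical quadrature, which matters when the mesh lives in a high-dimensional state space.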
NASA Astrophysics Data System (ADS)
Singh, Hukum
2016-12-01
A cryptosystem for image encryption is considered using double random phase encoding in the Fresnel wavelet transform (FWT) domain. Random phase masks (RPMs) and structured phase masks (SPMs) based on a devil's vortex toroidal lens (DVTL) are used in the spatial as well as the Fourier planes. The images to be encrypted are first Fresnel transformed, and then a single-level discrete wavelet transform (DWT) is applied to decompose them into LL, HL, LH and HH sub-band matrices. The matrices resulting from the DWT are multiplied by additional RPMs, and the resultants are subjected to an inverse DWT to obtain the encrypted images. The scheme is more secure because of the many parameters used in the construction of the SPM. The original images are recovered by using the correct parameters of the FWT and SPM. The SPM based on the DVTL increases security by enlarging the key space for encryption and decryption. The proposed encryption scheme is a lens-less optical system and its digital implementation has been performed using MATLAB 7.6.0 (R2008a). The computed value of the mean-squared error between the retrieved and the input images shows the efficacy of the scheme. The sensitivity to encryption parameters, robustness against occlusion, entropy, and multiplicative Gaussian noise attacks have been analysed.
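The LL/HL/LH/HH decomposition step can be sketched with the simplest wavelet; this assumes a Haar filter purely for illustration (the paper's wavelet is unspecified, and sub-band naming conventions vary):

```python
# Single-level 2D Haar DWT sketch: averaging/differencing along rows,
# then along columns, yields four half-size sub-band matrices
# (labeled LL, HL, LH, HH here; conventions for the detail bands vary).

def haar_1d(v):
    avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    det = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    return avg, det

def haar_2d(img):
    # Transform rows into low-pass and high-pass halves.
    lo, hi = [], []
    for row in img:
        a, d = haar_1d(row)
        lo.append(a)
        hi.append(d)
    # Transform the columns of each half, then restore row-major order.
    def cols(block):
        avg, det = [], []
        for j in range(len(block[0])):
            a, d = haar_1d([block[i][j] for i in range(len(block))])
            avg.append(a)
            det.append(d)
        return [list(r) for r in zip(*avg)], [list(r) for r in zip(*det)]
    LL, LH = cols(lo)
    HL, HH = cols(hi)
    return LL, HL, LH, HH

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
LL, HL, LH, HH = haar_2d(img)
print(LL)  # [[3.5, 5.5], [11.5, 13.5]] - the 2x2 coarse approximation
```

In the cryptosystem, each of these four matrices would then be multiplied by a random phase mask before the inverse DWT reassembles the encrypted image.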
Study of alumina-trichite reinforcement of a nickel-based matrix by means of powder metallurgy
NASA Technical Reports Server (NTRS)
Walder, A.; Hivert, A.
1982-01-01
Research was conducted on reinforcing nickel-based matrices with alumina trichites by using powder metallurgy. Alumina trichites previously coated with nickel are magnetically aligned. The felt obtained is then sintered under a light pressure at a temperature just below the melting point of nickel. The halogenated atmosphere technique makes it possible to incorporate a large number of additive elements such as chromium, titanium, zirconium, tantalum, niobium, aluminum, etc. It does not appear that going from laboratory scale to a semi-industrial scale in production would create any major problems.
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
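The UPFD idea of perturbing once and reanalyzing without factoring the perturbed matrix can be sketched on a toy system; a 2x2 direct solve stands in here for the factored BEA operator, and the matrix A(p) and its parameter dependence are assumed for illustration:

```python
# Univariate perturbation / finite-difference (UPFD) sketch: the perturbed
# response x(p+h) is obtained by iterative reanalysis that reuses only
# solves with the unperturbed matrix A0, so A(p+h) is never factored.

def solve2(A, b):
    """Direct 2x2 solve (stands in for applying the factored A0)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def reanalysis(A0, A_pert, b, iters=50):
    """Solve A_pert x = b via the refinement x <- x + A0^{-1}(b - A_pert x)."""
    x = solve2(A0, b)
    for _ in range(iters):
        r = [b[i] - sum(A_pert[i][j] * x[j] for j in range(2)) for i in range(2)]
        dx = solve2(A0, r)
        x = [x[i] + dx[i] for i in range(2)]
    return x

# A hypothetical system matrix depending on a design parameter p.
def A_of(p):
    return [[2.0 + p, 1.0], [1.0, 3.0]]

b, p, h = [1.0, 2.0], 0.5, 1e-6
x0 = solve2(A_of(p), b)
xh = reanalysis(A_of(p), A_of(p + h), b)      # perturbed solve, no refactor
sens = [(xh[i] - x0[i]) / h for i in range(2)]
print(sens[0] < 0)  # True: stiffening the (1,1) term reduces response 1
```

Because A(p+h) differs from A(p) only slightly, the refinement contracts very quickly, which is the source of the computational economy the abstract describes.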
Matrices pattern using FIB; 'Out-of-the-box' way of thinking.
Fleger, Y; Gotlib-Vainshtein, K; Talyosef, Y
2017-03-01
Focused ion beam (FIB) is an extremely valuable tool in nanopatterning and nanofabrication for potentially high-resolution patterning, especially with regard to He ion beam microscopy. The work presented here demonstrates an 'out-of-the-box' method of writing using FIB, which enables creating very large matrices, up to the beam-shift limitation, in short times and with a high accuracy unachievable by any other writing technique. The new method allows combining different shapes in nanometric dimensions and high resolutions over wide ranges.
Fan, Guangyi; Jiao, Yu; Zhang, He; Huang, Ronglian; Zheng, Zhe; Bian, Chao; Deng, Yuewen; Wang, Qingheng; Wang, Zhongduo; Liang, Xinming; Liang, Haiying; Shi, Chengcheng; Zhao, Xiaoxia; Sun, Fengming; Hao, Ruijuan; Bai, Jie; Liu, Jialiang; Chen, Wenbin; Liang, Jinlian; Liu, Weiqing; Xu, Zhe; Shi, Qiong; Xu, Xun
2017-01-01
Nacre, the iridescent material found in pearls and shells of molluscs, is formed through an extraordinary process of matrix-assisted biomineralization. Despite recent advances, many aspects of the biomineralization process and its evolutionary origin remain unknown. The pearl oyster Pinctada fucata martensii is a well-known master of biomineralization, but the molecular mechanisms that underlie its production of shells and pearls are not fully understood. We sequenced the highly polymorphic genome of the pearl oyster and conducted multi-omic and biochemical studies to probe nacre formation. We identified a large set of novel proteins participating in matrix-framework formation, many in expanded families, including components similar to those found in vertebrate bones such as collagen-related VWA-containing proteins, chondroitin sulfotransferases, and regulatory elements. Considering that there are only collagen-based matrices in vertebrate bones and chitin-based matrices in most invertebrate skeletons, the presence of both chitin and elements of collagen-based matrices in nacre suggests that elements of chitin- and collagen-based matrices have deep roots and might be part of an ancient biomineralizing matrix. Our results expand the current shell matrix-framework model and provide new insights into the evolution of diverse biomineralization systems. PMID:28873964
Visualization of newt aragonitic otoconial matrices using transmission electron microscopy
NASA Technical Reports Server (NTRS)
Steyger, P. S.; Wiederhold, M. L.
1995-01-01
Otoconia are calcified protein matrices within the gravity-sensing organs of the vertebrate vestibular system. These protein matrices are thought to originate from the supporting or hair cells in the macula during development. Previous studies of mammalian calcitic, barrel-shaped otoconia revealed an organized protein matrix consisting of a thin peripheral layer, a well-defined organic core and a flocculent matrix in between. No studies have reported the microscopic organization of the aragonitic otoconial matrix, despite its protein characterization. Pote et al. (1993b) used densitometric methods and inferred that prismatic (aragonitic) otoconia have a peripheral protein distribution, compared to that described for the barrel-shaped, calcitic otoconia of birds, mammals, and the amphibian utricle. By using tannic acid as a negative stain, we observed three kinds of organic matrices in preparations of fixed, decalcified saccular otoconia from the adult newt: (1) fusiform shapes with a homogeneous electron-dense matrix; (2) singular and multiple strands of matrix; and (3) most significantly, prismatic shapes outlined by a peripheral organic matrix. These prismatic shapes remain following removal of the gelatinous matrix, revealing an internal array of organic matter. We conclude that prismatic otoconia have a largely peripheral otoconial matrix, as inferred by densitometry.
Caustics, counting maps and semi-classical asymptotics
NASA Astrophysics Data System (ADS)
Ercolani, N. M.
2011-02-01
This paper develops a deeper understanding of the structure and combinatorial significance of the partition function for Hermitian random matrices. The coefficients of the large N expansion of the logarithm of this partition function, also known as the genus expansion (and its derivatives), are generating functions for a variety of graphical enumeration problems. The main results are to prove that these generating functions are, in fact, specific rational functions of a distinguished irrational (algebraic) function, z0(t). This distinguished function is itself the generating function for the Catalan numbers (or generalized Catalan numbers, depending on the choice of weight of the parameter t). It is also a solution of the inviscid Burgers equation for certain initial data. The shock formation, or caustic, of the Burgers characteristic solution is directly related to the poles of the rational forms of the generating functions. As an intriguing application, one gains new insights into the relation between certain derivatives of the genus expansion, in a double-scaling limit, and the asymptotic expansion of the first Painlevé transcendent. This provides a precise expression of the Painlevé asymptotic coefficients directly in terms of the coefficients of the partial fractions expansion of the rational form of the generating functions established in this paper. Moreover, these insights point towards a more general program relating the first Painlevé hierarchy to the higher order structure of the double-scaling limit through the specific rational structure of generating functions in the genus expansion. The paper closes with a discussion of the relation of this work to recent developments in understanding the asymptotics of graphical enumeration. 
As a by-product, these results also yield new information about the asymptotics of recurrence coefficients for orthogonal polynomials with respect to exponential weights, the calculation of correlation functions for certain tied random walks on a 1D lattice, and the large time asymptotics of random matrix partition functions.
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data for two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. Compared to OPSv5.6, the new method achieves the following: (1) a significant reduction of the random errors (standard deviations) of optimized bending angles, down to about half their size or more; (2) a reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall, the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
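At its core, statistical optimization blends a background profile with an observation according to their error covariances; a scalar stand-in (the full algorithm uses covariance matrices and correlated profiles) makes the weighting explicit:

```python
# Scalar sketch of statistical optimization: combine a background value
# with an observed value, weighted by their error variances:
#   alpha_opt = alpha_bg + sigma_bg^2 / (sigma_bg^2 + sigma_ob^2) * (alpha_ob - alpha_bg)
# A geographically varying sigma_bg (as in the dynamic approach) shifts
# the weight toward the observation wherever the background is uncertain.

def optimize(alpha_bg, sigma_bg, alpha_ob, sigma_ob):
    w = sigma_bg ** 2 / (sigma_bg ** 2 + sigma_ob ** 2)
    return alpha_bg + w * (alpha_ob - alpha_bg)

# Accurate observation, uncertain background: result hugs the observation.
a1 = optimize(alpha_bg=10.0, sigma_bg=3.0, alpha_ob=12.0, sigma_ob=1.0)
# Noisy observation, confident background: result hugs the background.
a2 = optimize(alpha_bg=10.0, sigma_bg=1.0, alpha_ob=12.0, sigma_ob=3.0)
print(a1, a2)  # approximately 11.8 and 10.2
```

In the profile setting the scalars become vectors and the variances become the estimated covariance matrices, but the inverse-variance logic is the same.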
Workshop report on large-scale matrix diagonalization methods in chemistry theory institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S.
The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of…
Neng, N R; Santalla, R P; Nogueira, J M F
2014-08-01
Stir bar sorptive extraction with in-situ derivatization using sodium tetrahydridoborate (NaBH4), followed by liquid desorption and large volume injection-gas chromatography-mass spectrometry detection under the selected ion monitoring mode (SBSE(NaBH4)in-situ-LD/LVI-GC-MS(SIM)), was successfully developed for the determination of tributyltin (TBT) in environmental water matrices. NaBH4 proved to be an effective and easy in-situ speciation agent for TBT in aqueous media, allowing the formation of adducts with enough stability and suitable polarity for SBSE analysis. Assays performed on water samples spiked at the 10.0 μg/L level yielded convenient recoveries (68.2±3.0%), good accuracy, suitable precision (RSD < 9.0%), low detection limits (23 ng/L) and an excellent linear dynamic range (r^2 = 0.9999) from 0.1 to 170.0 μg/L under optimized experimental conditions. By using the standard addition method, the application of the present methodology to real surface water samples achieved very good performance at the trace level. The proposed methodology proved to be a feasible alternative for routine quality control analysis, easy to implement, reliable, and sensitive enough to monitor TBT in environmental water matrices.
High-throughput screening of dye-ligands for chromatography.
Kumar, Sunil; Punekar, Narayan S
2014-01-01
Dye-ligand-based chromatography has become popular after Cibacron Blue, the first reactive textile dye, found application for protein purification. Many other textile dyes have since been successfully used to purify a number of proteins and enzymes. While the exact nature of their interaction with target proteins is often unclear, dye-ligands are thought to mimic the structural features of their corresponding substrates, cofactors, etc. The dye-ligand affinity matrices are therefore considered pseudo-affinity matrices. In addition, dye-ligands may simply bind with proteins due to electrostatic, hydrophobic, and hydrogen-bonding interactions. Because of their low cost, ready availability, and structural stability, dye-ligand affinity matrices have gained much popularity. The choice among a large number of dye structures offers a range of matrices to be prepared and tested. When presented in the high-throughput screening mode, these dye-ligand matrices provide a formidable tool for protein purification. One could pick from the list of dye-ligands already available or build a systematic library of such structures for use. A high-throughput screen may be set up to choose the best dye-ligand matrix as well as ideal conditions for binding and elution for a given protein. The mode of operation could be either manual or automated. The technology is available to test the performance of dye-ligand matrices in small volumes in an automated liquid-handling workstation. Screening a systematic library of dye-ligand structures can help establish a structure-activity relationship. While the origins of dye-ligand chromatography lay in exploiting pseudo-affinity, it is now possible to design very specific biomimetic dye structures. High-throughput screening will be of value in this endeavor as well.
NASA Astrophysics Data System (ADS)
Leskinen, Stephaney D.; Schlemmer, Sarah M.; Kearns, Elizabeth A.; Lim, Daniel V.
2009-02-01
The development of rapid assays for detection of microbial pathogens in complex matrices is needed to protect public health due to continued outbreaks of disease from contaminated foods and water. An Escherichia coli O157:H7 detection assay was designed using a robotic, fluorometric assay system. The system integrates optics, fluidics, robotics and software for the detection of foodborne pathogens or toxins in as many as four samples simultaneously. It utilizes disposable fiber optic waveguides coated with biotinylated antibodies for capture of target analytes from complex sample matrices. Computer-controlled rotation of sample cups allows complete contact between the sample and the waveguide. Detection occurs via binding of a fluorophore-labeled antibody to the captured target, which leads to an increase in the fluorescence signal. Assays are completed within twenty-five minutes. Sample matrices included buffer, retentate (material recovered from the filter of the Automated Concentration System (ACS) following hollow fiber ultrafiltration), spinach wash and ground beef. The matrices were spiked with E. coli O157:H7 (10^3-10^5 cells/ml) and the limits of detection were determined. The effect of sample rotation on assay sensitivity was also examined. Rotation parameters for each sample matrix included 10 ml with rotation, 5 ml with rotation and 0.1 ml without rotation. Detection occurred at 10^4 cells/ml in buffer and spinach wash and at 10^5 cells/ml in retentate and ground beef. Detection was greater for rotated samples in each matrix except ground beef. Enhanced detection of E. coli from large, rotated volumes of complex matrices was confirmed.
NASA Technical Reports Server (NTRS)
Kellner, A.
1987-01-01
Extremely large knowledge sources and efficient knowledge access, which will characterize future real-life artificial intelligence applications, represent crucial requirements for on-board artificial intelligence systems owing to obvious computer time and storage constraints on spacecraft. A type of knowledge representation and a corresponding reasoning mechanism are proposed which are particularly suited to the efficient processing of such large knowledge bases in expert systems.
The International Conference on Vector and Parallel Computing (2nd)
1989-01-17
Proceedings fragments: "Computation of the SVD of Bidiagonal Matrices"; "Lattice QCD as a Large Scale Scientific Computation" …vectorized for the IBM 3090 Vector Facility. In addition, elapsed times have been reduced by using 3090… …come from the wavefront solver routine… …benchmarked Lattice QCD on a large number of computers: Cray X-MP and Cray 2 (vector…
Large-scale silviculture experiments of western Oregon and Washington.
Nathan J. Poage; Paul D. Anderson
2007-01-01
We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...
Kumar, Sandeep; Kapoor, Aastha; Desai, Sejal; Inamdar, Mandar M.; Sen, Shamik
2016-01-01
Cancer cells manoeuvre through extracellular matrices (ECMs) using different invasion modes, including single cell and collective cell invasion. These modes rely on MMP-driven ECM proteolysis to make space for cells to move. How cancer-associated alterations in ECM influence the mode of invasion remains unclear. Further, the sensitivity of the two invasion modes to MMP dynamics remains unexplored. In this paper, we address these open questions using a multiscale hybrid computational model combining ECM density-dependent MMP secretion, MMP diffusion, ECM degradation by MMP and active cell motility. Our results demonstrate that in randomly aligned matrices, collective cell invasion is more efficient than single cell invasion. Although an increase in MMP secretion rate enhances invasiveness independent of cell–cell adhesion, sustenance of collective invasion in dense matrices requires high MMP secretion rates. However, matrix alignment can sustain both single cell and collective cell invasion even without ECM proteolysis. Similar to our in silico observations, an increase in ECM density and MMP inhibition reduced migration of MCF-7 cells embedded in sandwich gels. Together, our results indicate that apart from cell-intrinsic factors (i.e., high cell–cell adhesion and MMP secretion rates), ECM density and organization represent two important extrinsic parameters that govern collective cell invasion and invasion plasticity. PMID:26832069
Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.
Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo
2017-12-01
The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgments of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgments are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgments are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm for solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.
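For the crisp (non-interval) case the paper builds on, consistency of a multiplicative reciprocal comparison matrix is conventionally measured by Saaty's index; a sketch using power iteration for the principal eigenvalue (the interval notions of approximation-consistency are not reproduced here):

```python
# Saaty's consistency index for a crisp multiplicative reciprocal
# comparison matrix (a_ji = 1/a_ij): CI = (lambda_max - n) / (n - 1),
# with lambda_max estimated by power iteration on the positive matrix.

def lambda_max(A, iters=100):
    n = len(A)
    v = [1.0 / n] * n                     # normalized to sum 1
    lam = float(n)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)                      # valid since v sums to 1
        v = [wi / lam for wi in w]
    return lam

def consistency_index(A):
    n = len(A)
    return (lambda_max(A) - n) / (n - 1)

# A fully consistent matrix built from weights (1, 2, 4): a_ij = w_i / w_j,
# so lambda_max = n = 3 and CI should be essentially zero.
A = [[1.0, 0.5, 0.25],
     [2.0, 1.0, 0.5],
     [4.0, 2.0, 1.0]]
ci = consistency_index(A)
print(abs(ci) < 1e-9)  # True
```

Interval judgments replace each a_ij with a range of values, so exact consistency of this kind generally cannot hold, which motivates the approximation-consistency concepts of the paper.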
Dense tissue-like collagen matrices formed in cell-free conditions.
Mosser, Gervaise; Anglo, Anny; Helary, Christophe; Bouligand, Yves; Giraud-Guille, Marie-Madeleine
2006-01-01
A new protocol was developed to produce dense organized collagen matrices hierarchically ordered on a large scale. It consists of a two-stage process: (1) the organization of a collagen solution and (2) the stabilization of the organizations by a sol-gel transition that leads to the formation of collagen fibrils. This new protocol relies on the continuous injection of an acid-soluble collagen solution into glass microchambers. It leads to extended concentration gradients of collagen, ranging from 5 to 1000 mg/ml. The self-organization of collagen solutions into a wide array of spatial organizations was investigated. The final matrices obtained by this procedure varied in concentration, structure and density. Changes in the liquid state of the samples were followed by polarized light microscopy, and the final stabilized gel states obtained after fibrillogenesis were analyzed by both light and electron microscopy. Typical organizations extended homogeneously by up to three centimetres in one direction and several hundred micrometres in the other directions. Fibrillogenesis of collagen solutions of high and low concentrations led to fibrils spatially arranged as has been described in bone and derm, respectively. Moreover, a relationship was revealed between the collagen concentration and the lateral aggregation of fibrils and the rotational angles between them. These results constitute a strong base from which to further develop highly enriched collagen matrices that could lead to substitutes that mimic connective tissues. The matrices thus obtained may also be good candidates for the study of the three-dimensional migration of cells.
Scalable randomized benchmarking of non-Clifford gates
NASA Astrophysics Data System (ADS)
Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay
Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
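The data analysis behind randomized benchmarking fits an exponential decay of the average survival probability with sequence length; a sketch on synthetic data (the decay model and fidelity relation are the standard ones; the two-point extraction and the assumed offset B are simplifications for illustration):

```python
# Randomized benchmarking decay model: the average survival probability
# over random sequences of length m follows F(m) = A * p^m + B, and the
# average gate fidelity is F_avg = p + (1 - p) / d with d = 2^n.
# Here p is recovered from two points of synthetic single-qubit data.

def decay_parameter(ms, Fs, B=0.5):
    """Estimate p from two samples of F(m) = A p^m + B (offset B assumed)."""
    (m1, F1), (m2, F2) = (ms[0], Fs[0]), (ms[1], Fs[1])
    return ((F2 - B) / (F1 - B)) ** (1.0 / (m2 - m1))

# Synthetic noise-free data with A = 0.5, p = 0.99, B = 0.5 (n = 1 qubit).
ms = [1, 51]
Fs = [0.5 * 0.99 ** m + 0.5 for m in ms]
p = decay_parameter(ms, Fs)
d = 2
F_avg = p + (1 - p) / d
print(abs(p - 0.99) < 1e-9, round(F_avg, 4))  # True 0.995
```

Real experiments fit A, B and p jointly over many sequence lengths; the point of the scalable procedure in the abstract is that the same decay analysis becomes available for a non-Clifford gate set.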
The Restricted Isometry Property for Time-Frequency Structured Random Matrices
2011-06-16
tests illustrating the use of Ψg for compressive sensing are presented in [41]. They illustrate that empirically Ψg performs very similarly to a… Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006). [12] Candès, E.J., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Comm
Generic pure quantum states as steady states of quasi-local dissipative dynamics
NASA Astrophysics Data System (ADS)
Karuvade, Salini; Johnson, Peter D.; Ticozzi, Francesco; Viola, Lorenza
2018-04-01
We investigate whether a generic pure state on a multipartite quantum system can be the unique asymptotic steady state of locality-constrained purely dissipative Markovian dynamics. In the tripartite setting, we show that the problem is equivalent to characterizing the solution space of a set of linear equations and establish that the set of pure states obeying the above property has either measure zero or measure one, solely depending on the subsystems’ dimension. A complete analytical characterization is given when the central subsystem is a qubit. In the N-partite case, we provide conditions on the subsystems’ size and the nature of the locality constraint, under which random pure states cannot be quasi-locally stabilized generically. Also, allowing for the possibility to approximately stabilize entangled pure states that cannot be exact steady states in settings where stabilizability is generic, our results offer insights into the extent to which random pure states may arise as unique ground states of frustration-free parent Hamiltonians. We further argue that, with high probability, pure quantum states sampled from a t-design enjoy the same stabilizability properties as Haar-random ones as long as suitable dimension constraints are obeyed and t is sufficiently large. Lastly, we demonstrate a connection between the tasks of quasi-local state stabilization and unique state reconstruction from local tomographic information, and provide a constructive procedure for determining a generic N-partite pure state based only on knowledge of the support of any two of the reduced density matrices of about half the parties, improving over existing results.
Small-world bias of correlation networks: From brain to climate
NASA Astrophysics Data System (ADS)
Hlinka, Jaroslav; Hartman, David; Jajcay, Nikola; Tomeček, David; Tintěra, Jaroslav; Paluš, Milan
2017-03-01
Complex systems are commonly characterized by the properties of their graph representation. Dynamical complex systems are then typically represented by a graph of temporal dependencies between time series of state variables of their subunits. It has been shown recently that graphs constructed in this way tend to have relatively clustered structure, potentially leading to spurious detection of small-world properties even in the case of systems with no or randomly distributed true interactions. However, the strength of this bias depends heavily on a range of parameters and its relevance for real-world data has not yet been established. In this work, we assess the relevance of the bias using two examples of multivariate time series recorded in natural complex systems. The first is the time series of local brain activity as measured by functional magnetic resonance imaging in resting healthy human subjects, and the second is the time series of average monthly surface air temperature coming from a large reanalysis of climatological data over the period 1948-2012. In both cases, the clustering in the thresholded correlation graph is substantially higher compared with a realization of a density-matched random graph, while the shortest paths are relatively short, thus showing distinguishing features of small-world structure. However, comparable or even stronger small-world properties were reproduced in correlation graphs of model processes with randomly scrambled interconnections. This suggests that the small-world properties of the correlation matrices of these real-world systems indeed do not reflect genuinely the properties of the underlying interaction structure, but rather result from the inherent properties of the correlation matrix.
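The clustering comparison at the center of this abstract can be sketched numerically. The following is a minimal illustration (not the authors' pipeline): even fully independent time series yield a thresholded correlation graph whose clustering can be compared against the density-matched Erdős-Rényi expectation. The threshold, series length, and graph size are arbitrary choices for illustration.

```python
import numpy as np

def clustering_coefficient(adj):
    """Global clustering coefficient (transitivity) of an undirected 0/1 graph."""
    triangles = np.trace(adj @ adj @ adj) / 6.0   # each triangle counted 6 times
    deg = adj.sum(axis=1)
    triples = (deg * (deg - 1)).sum() / 2.0       # connected triples
    return 3.0 * triangles / triples if triples > 0 else 0.0

rng = np.random.default_rng(0)
n_nodes, n_samples = 60, 200
x = rng.standard_normal((n_nodes, n_samples))     # independent series: no true links
corr = np.corrcoef(x)
np.fill_diagonal(corr, 0.0)

adj = (np.abs(corr) > 0.15).astype(int)           # arbitrary illustrative threshold
density = adj.sum() / (n_nodes * (n_nodes - 1))
c_corr = clustering_coefficient(adj)

# for a density-matched Erdos-Renyi graph the expected clustering is ~ density
print(f"correlation-graph clustering: {c_corr:.3f}, edge density: {density:.3f}")
```

Comparing `c_corr` against `density` reproduces, in miniature, the kind of density-matched comparison the abstract describes.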
A progress report on estuary modeling by the finite-element method
Gray, William G.
1978-01-01
Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)
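The appeal of the proposed leapfrog scheme is that, being explicit, it needs no matrix solves at all. As a toy stand-in (a one-dimensional linear wave equation, not the report's estuary equations or its finite-element spatial discretization), an explicit leapfrog update looks like this:

```python
import numpy as np

# explicit leapfrog for the 1-D linear wave equation u_tt = c^2 u_xx:
# no matrices are assembled or solved at any step
c, L, nx, nt = 1.0, 1.0, 101, 200
dx = L / (nx - 1)
dt = 0.5 * dx / c                  # CFL number 0.5 keeps the scheme stable
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
u_prev = np.sin(np.pi * x)         # initial displacement, zero initial velocity
u_curr = u_prev.copy()

for _ in range(nt):
    u_next = np.zeros_like(u_curr)             # fixed (u = 0) boundaries
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next

print(f"max |u| after {nt} steps: {np.abs(u_curr).max():.3f}")
```

Each new time level is a purely local stencil evaluation, which is exactly the property that eliminates the large sparse matrix solves the report found uncompetitive.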
Accelerating Full Configuration Interaction Calculations for Nuclear Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao; Sternberg, Philip; Maris, Pieter
2008-04-14
One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers - either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors for very large, sparse and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code MFDn (Many Fermion Dynamics - nuclear) as well as techniques we recently developed to enhance the computational efficiency of MFDn. We will demonstrate the current capability of MFDn and report the latest performance improvement we have achieved. We will also outline our future research directions.
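MFDn's internals are not reproduced here, but the core numerical task it describes — the lowest eigenvalues of a very large sparse symmetric matrix, accessed only through matrix-vector products — is classically handled by Lanczos iteration. A minimal generic sketch (the test operator is a 1-D Laplacian, not a nuclear Hamiltonian):

```python
import numpy as np

def lanczos_lowest(matvec, n, k=40, seed=0):
    """Lowest Ritz value of a symmetric operator from k Lanczos steps."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(n), 0.0
    basis, alphas, betas = [], [], []
    for _ in range(k):
        basis.append(q)
        w = matvec(q) - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        for qi in basis:               # full reorthogonalization for robustness
            w -= (qi @ w) * qi
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:
            break
        q_prev, q = q, w / beta
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    return np.linalg.eigvalsh(T)[0]

# matrix-free test operator: 1-D Dirichlet Laplacian, never formed explicitly
n = 400
def laplacian(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

est = lanczos_lowest(laplacian, n)
exact = 2.0 - 2.0 * np.cos(np.pi / (n + 1))    # known smallest eigenvalue
print(f"Lanczos estimate: {est:.6f}, exact: {exact:.6f}")
```

The Ritz value is always an upper bound on the true lowest eigenvalue, and the operator is only ever touched through `matvec` — the property that makes such solvers scalable on sparse Hamiltonians.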
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system specific property, i.e. containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics'', preprint.
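The RMT null hypothesis used in this kind of analysis can be sketched with synthetic data: the eigenvalues of a pure-noise correlation matrix should populate the Marchenko-Pastur band, so eigenvalues escaping the band signal genuine structure. A minimal illustration (random data, not the authors' analysis of real returns):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks, n_days = 100, 500
q = n_stocks / n_days

# pure-noise "returns": eigenvalues of C should fall in the Marchenko-Pastur band
returns = rng.standard_normal((n_stocks, n_days))
corr = np.corrcoef(returns)
eigvals = np.linalg.eigvalsh(corr)

lam_min = (1.0 - np.sqrt(q)) ** 2
lam_max = (1.0 + np.sqrt(q)) ** 2
inside = np.mean((eigvals >= lam_min) & (eigvals <= lam_max))
print(f"MP band [{lam_min:.2f}, {lam_max:.2f}], fraction inside: {inside:.2f}")
```

Running the same comparison on real returns is what reveals the deviating large eigenvalues the abstract interprets as market-wide correlations.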
NASA Astrophysics Data System (ADS)
Kuijlaars, A. B. J.
2001-08-01
The asymptotic behavior of polynomials that are orthogonal with respect to a slowly decaying weight is very different from the asymptotic behavior of polynomials that are orthogonal with respect to a Freud-type weight. While the latter has been extensively studied, much less is known about the former. Following an earlier investigation into the zero behavior, we study here the asymptotics of the density of states in a unitary ensemble of random matrices with a slowly decaying weight. This measure is also naturally connected with the orthogonal polynomials. It is shown that, after suitable rescaling, the weak limit is the same as the weak limit of the rescaled zeros.
Electromagnetic Scattering by Spheroidal Volumes of Discrete Random Medium
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.
2017-01-01
We use the superposition T-matrix method to compare the far-field scattering matrices generated by spheroidal and spherical volumes of discrete random medium having the same volume and populated by identical spherical particles. Our results fully confirm the robustness of the previously identified coherent and diffuse scattering regimes and associated optical phenomena exhibited by spherical particulate volumes and support their explanation in terms of the interference phenomenon coupled with the order-of-scattering expansion of the far-field Foldy equations. We also show that increasing non-sphericity of particulate volumes causes discernible (albeit less pronounced) optical effects in forward and backscattering directions and explain them in terms of the same interference/multiple-scattering phenomenon.
Computer access security code system
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr. (Inventor)
1990-01-01
A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
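The rectangle-completion idea in the two-dimensional case can be sketched as follows. Everything concrete here (matrix size, character set, function interfaces) is invented for illustration; the patent does not specify an implementation:

```python
import random
import string

# shared secret: a 5x5 matrix of distinct characters in random order
rng = random.Random(42)
rows, cols = 5, 5
chars = rng.sample(string.ascii_uppercase + string.digits, rows * cols)
matrix = [chars[r * cols:(r + 1) * cols] for r in range(rows)]

def challenge():
    """Pick two cells in distinct rows and columns: one diagonal of a rectangle."""
    r1, r2 = rng.sample(range(rows), 2)
    c1, c2 = rng.sample(range(cols), 2)
    return (r1, c1), (r2, c2)

def response(cell_a, cell_b):
    """Complete the rectangle: the characters at the two remaining corners."""
    (r1, c1), (r2, c2) = cell_a, cell_b
    return matrix[r1][c2], matrix[r2][c1]

a, b = challenge()
print("challenge:", matrix[a[0]][a[1]], matrix[b[0]][b[1]])
print("response: ", *response(a, b))
```

An eavesdropper who sees one challenge-response pair learns only four cells; the system above additionally retires used subsets, which this sketch omits.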
Iterative algorithms for tridiagonal matrices on a WSI-multiprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gajski, D.D.; Sameh, A.H.; Wisniewski, J.A.
1982-01-01
With the rapid advances in semiconductor technology, the construction of Wafer Scale Integration (WSI)-multiprocessors consisting of a large number of processors is now feasible. We illustrate the implementation of some basic linear algebra algorithms on such multiprocessors.
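The paper's WSI-specific algorithms are not reproduced here, but the flavor of an easily parallelized tridiagonal iteration can be sketched with Jacobi sweeps, where every component update is independent and could be assigned to its own processor:

```python
import numpy as np

# Jacobi iteration on a diagonally dominant tridiagonal system A x = b;
# each component update depends only on the previous iterate, so one
# processor per row is a natural mapping on a many-processor machine
n = 50
lower = -1.0 * np.ones(n - 1)   # subdiagonal
diag = 4.0 * np.ones(n)         # main diagonal (dominant: 4 > 1 + 1)
upper = -1.0 * np.ones(n - 1)   # superdiagonal
b = np.ones(n)

x = np.zeros(n)
for _ in range(100):
    r = b.copy()
    r[:-1] -= upper * x[1:]     # subtract superdiagonal couplings
    r[1:] -= lower * x[:-1]     # subtract subdiagonal couplings
    x = r / diag

A = np.diag(diag) + np.diag(lower, -1) + np.diag(upper, 1)
print(f"residual after 100 sweeps: {np.linalg.norm(A @ x - b):.2e}")
```

Diagonal dominance guarantees convergence here; the vectorized updates stand in for the per-processor updates of the hardware described in the report.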
New description of charged particle propagation in random magnetic fields
NASA Technical Reports Server (NTRS)
Earl, James A.
1994-01-01
When charged particles spiral along a large constant magnetic field, their trajectories are scattered by random components that are superposed on the guiding field. In the simplest analysis of this situation, scattering causes the particles to diffuse parallel to the guiding field. At the next level of approximation, moving pulses that correspond to a coherent mode of propagation are present, but they are represented by delta-functions whose infinitely narrow width makes no sense physically and is inconsistent with the finite duration of coherent pulses observed in solar energetic particle events. To derive a more realistic description, the transport problem is formulated in terms of 4 x 4 matrices, which derive from a representation of the particle distribution function in terms of eigenfunctions of the scattering operator, and which lead to useful approximations that give explicit predictions of the detailed evolution not only of the coherent pulses, but also of the diffusive wake. More specifically, the new description embodies a simple convolution of a narrow Gaussian with the solutions above that involve delta-functions, but with a slightly reduced coherent velocity. The validity of these approximations, which can easily be calculated on a desktop computer, has been exhaustively confirmed by comparison with results of Monte Carlo simulations which kept track of 50 million particles and which were carried out on the Maspar computer at Goddard Space Flight Center.
High-Dimensional Bayesian Geostatistics
Banerjee, Sudipto
2017-01-01
With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~ n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
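The NNGP construction can be caricatured in one dimension: condition each location on a few nearest preceding neighbors, and the resulting precision matrix is sparse by construction. This is an illustrative sketch with an assumed exponential covariance, not the article's full hierarchical model:

```python
import numpy as np

def exp_cov(s1, s2, phi=1.0):
    """Exponential covariance between 1-D location arrays."""
    return np.exp(-phi * np.abs(s1[:, None] - s2[None, :]))

rng = np.random.default_rng(2)
n, m = 200, 5
locs = np.sort(rng.uniform(0.0, 10.0, n))

# condition each location on its m nearest preceding neighbors:
# regression weights B[i, N(i)] and conditional variance F[i]
B = np.zeros((n, n))        # sparse in practice; dense here only for clarity
F = np.zeros(n)
F[0] = 1.0                  # marginal variance of the first location
for i in range(1, n):
    nb = np.arange(max(0, i - m), i)
    C_nn = exp_cov(locs[nb], locs[nb])
    c_in = exp_cov(locs[i:i + 1], locs[nb])[0]
    w = np.linalg.solve(C_nn, c_in)
    B[i, nb] = w
    F[i] = 1.0 - c_in @ w

# NNGP-style precision matrix: Q = (I - B)^T F^{-1} (I - B), sparse by design
Q = (np.eye(n) - B).T @ np.diag(1.0 / F) @ (np.eye(n) - B)
nnz = np.count_nonzero(np.abs(Q) > 1e-12)
print(f"nonzeros in Q: {nnz} of {n * n}")
```

Because each row of B has at most m nonzeros, Q is banded rather than dense, which is the source of the near-linear per-iteration cost the review emphasizes.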
NASA Astrophysics Data System (ADS)
Limkumnerd, Surachate
2014-03-01
Interest in thin-film fabrication for industrial applications has driven both theoretical and computational aspects of modeling its growth. One of the earliest attempts toward understanding the morphological structure of a film's surface is through a class of solid-on-solid limited-mobility growth models such as the Family, Wolf-Villain, or Das Sarma-Tamborenea models, which have produced fascinating surface roughening behaviors. These models, however, restrict the motion of an incident atom to be within the neighborhood of its landing site, which renders them inept for simulating long-distance surface diffusion such as that observed in thin-film growth using a molecular-beam epitaxy technique. Naive extension of these models by repeatedly applying the local diffusion rules for each hop to simulate large diffusion length can be computationally very costly when certain statistical aspects are demanded. We present a graph-theoretic approach to simulating a long-range diffusion-attachment growth model. Using the Markovian assumption and given a local diffusion bias, we derive the transition probabilities for a random walker to traverse from one lattice site to the others after a large, possibly infinite, number of steps. Only computation with linear-time complexity is required for the surface morphology calculation without other probabilistic measures. The formalism is applied, as illustrations, to simulate surface growth on a two-dimensional flat substrate and around a screw dislocation under the modified Wolf-Villain diffusion rule. A rectangular spiral ridge is observed in the latter case with a smooth front feature similar to that obtained from simulations using the well-known multiple registration technique. An algorithm for computing the inverse of a class of substochastic matrices is derived as a corollary.
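The key linear-algebra step — turning a substochastic transition matrix into long-run attachment probabilities — can be sketched on a biased one-dimensional walk. This is a generic absorbing-chain example using the fundamental matrix, not the paper's lattice model:

```python
import numpy as np

n = 10                       # transient lattice sites between two traps
p = 0.55                     # rightward hop bias
Q = np.zeros((n, n))         # substochastic: edge rows sum to less than 1
for i in range(n - 1):
    Q[i, i + 1] = p
    Q[i + 1, i] = 1.0 - p

R = np.zeros((n, 2))         # hops into the absorbing traps
R[0, 0] = 1.0 - p            # leftmost site -> left trap
R[-1, 1] = p                 # rightmost site -> right trap

N = np.linalg.inv(np.eye(n) - Q)   # fundamental matrix of the absorbing chain
absorb = N @ R                     # attachment probabilities per starting site
print("rows sum to 1:", bool(np.allclose(absorb.sum(axis=1), 1.0)))
print("P(right trap | start in middle):", round(absorb[n // 2, 1], 3))
```

The matrix N = (I - Q)^{-1} encodes the effect of an unbounded number of hops in a single solve, which is what gives the paper's approach its linear-time surface calculation.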
NASA Astrophysics Data System (ADS)
Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.
2009-06-01
Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
Comprehensive Review of the Impact of Dairy Foods and Dairy Fat on Cardiometabolic Risk123
Drouin-Chartier, Jean-Philippe; Côté, Julie Anne; Labonté, Marie-Ève; Brassard, Didier; Tessier-Grenier, Maude; Desroches, Sophie; Couture, Patrick; Lamarche, Benoît
2016-01-01
Because regular-fat dairy products are a major source of cholesterol-raising saturated fatty acids (SFAs), current US and Canadian dietary guidelines for cardiovascular health recommend the consumption of low-fat dairy products. Yet, numerous randomized controlled trials (RCTs) have reported rather mixed effects of reduced- and regular-fat dairy consumption on blood lipid concentrations and on many other cardiometabolic disease risk factors, such as blood pressure and inflammation markers. Thus, the focus on low-fat dairy in current dietary guidelines is being challenged, creating confusion within health professional circles and the public. This narrative review provides perspective on the research pertaining to the impact of dairy consumption and dairy fat on traditional and emerging cardiometabolic disease risk factors. This comprehensive assessment of evidence from RCTs suggests that there is no apparent risk of potential harmful effects of dairy consumption, irrespective of the content of dairy fat, on a large array of cardiometabolic variables, including lipid-related risk factors, blood pressure, inflammation, insulin resistance, and vascular function. This suggests that the purported detrimental effects of SFAs on cardiometabolic health may in fact be nullified when they are consumed as part of complex food matrices such as those in cheese and other dairy foods. Thus, the focus on low-fat dairy products in current guidelines apparently is not entirely supported by the existing literature and may need to be revisited on the basis of this evidence. Future studies addressing key research gaps in this area will be extremely informative to better appreciate the impact of dairy food matrices, as well as dairy fat specifically, on cardiometabolic health. PMID:28140322
NASA Astrophysics Data System (ADS)
Most, S.; Dentz, M.; Bolster, D.; Bijeljic, B.; Nowak, W.
2017-12-01
Transport in real porous media shows non-Fickian characteristics. In the Lagrangian perspective this leads to skewed distributions of particle arrival times. The skewness is triggered by particles' memory of velocity that persists over a characteristic length. Capturing process memory is essential to represent non-Fickianity thoroughly. Classical non-Fickian models (e.g., CTRW models) simulate the effects of memory but not the mechanisms leading to process memory. CTRWs have been applied successfully in many studies but nonetheless they have drawbacks. In classical CTRWs each particle makes a spatial transition for which it draws a random transit time. Consecutive transit times are drawn independently from each other, and this is only valid for sufficiently large spatial transitions. If we want to apply a finer numerical resolution than that, we have to implement memory into the simulation. Recent CTRW methods use transition matrices to simulate correlated transit times. However, deriving such transition matrices requires transport data from a fine-scale transport simulation, and the obtained transition matrix is valid only for that single Péclet regime. The CTRW method we propose overcomes all three drawbacks: 1) We simulate transport without restrictions in transition length. 2) We parameterize our CTRW without requiring a transport simulation. 3) Our parameterization scales across Péclet regimes. We do so by sampling the pore-scale velocity distribution to generate correlated transit times as a Lévy flight on the CDF-axis of velocities with reflection at 0 and 1. The Lévy flight is parametrized only by the correlation length. We explicitly model memory including the evolution and decay of non-Fickianity, so it extends from local via pre-asymptotic to asymptotic scales.
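The correlated-transit-time idea can be sketched as follows. This is a hedged simplification: a Gaussian step stands in for the Lévy step, the lognormal pore-velocity distribution is assumed, and the link between step size and correlation length is illustrative, not the authors' parameterization:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
nd = NormalDist()
n_steps, corr_len = 2000, 10
step = 1.0 / corr_len        # assumed: step size set by the correlation length

u = 0.5                      # particle's velocity rank on the CDF axis [0, 1]
vel = np.empty(n_steps)
for t in range(n_steps):
    u += step * rng.standard_normal()   # Gaussian surrogate for the Levy step
    u = abs(u)                          # reflect at 0
    if u > 1.0:
        u = 2.0 - u                     # reflect at 1
    u = min(max(u, 1e-6), 1.0 - 1e-6)
    # inverse-CDF map onto an assumed lognormal pore-velocity distribution
    vel[t] = np.exp(nd.inv_cdf(u))

lag1 = np.corrcoef(vel[:-1], vel[1:])[0, 1]
print(f"lag-1 velocity correlation: {lag1:.2f}")
```

Because the rank evolves as a random walk rather than being redrawn independently, consecutive velocities are strongly correlated, which is the memory mechanism the abstract describes.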
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary exponents, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
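The core idea behind such libraries — expanding a matrix function in Chebyshev polynomials so that only matrix-matrix products are needed — can be sketched generically. This is not CheSS's implementation; it is a dense toy version of the standard Chebyshev expansion, checked against exact diagonalization:

```python
import numpy as np

def chebyshev_matrix_function(H, f, order, eig_min, eig_max):
    """Approximate f(H) for symmetric H with spectrum inside [eig_min, eig_max]."""
    n = H.shape[0]
    center, half = 0.5 * (eig_max + eig_min), 0.5 * (eig_max - eig_min)
    Hs = (H - center * np.eye(n)) / half        # spectrum mapped into [-1, 1]

    # Chebyshev coefficients of f from Gauss-Chebyshev nodes
    k = np.arange(order)
    theta = np.pi * (k + 0.5) / order
    fx = f(np.cos(theta) * half + center)
    coeffs = (2.0 / order) * np.cos(np.outer(k, theta)) @ fx
    coeffs[0] *= 0.5

    # three-term recursion T_{k+1} = 2 Hs T_k - T_{k-1}: only mat-mat products,
    # which is what lets a sparse implementation scale with the nonzero count
    T_prev, T_curr = np.eye(n), Hs.copy()
    result = coeffs[0] * T_prev + coeffs[1] * T_curr
    for c in coeffs[2:]:
        T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
        result += c * T_curr
    return result

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
H = 0.5 * (A + A.T)                             # small dense symmetric test matrix
w, V = np.linalg.eigh(H)
approx = chebyshev_matrix_function(H, np.exp, 30, w.min(), w.max())
exact = V @ np.diag(np.exp(w)) @ V.T
print(f"max error vs exact exp(H): {np.abs(approx - exact).max():.2e}")
```

The smaller the spectral width, the faster the coefficients decay and the lower the polynomial order needed — the regime in which the abstract reports CheSS outperforming alternatives.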
Fast Readout Architectures for Large Arrays of Digital Pixels: Examples and Applications
Gabrielli, A.
2014-01-01
Modern pixel detectors, particularly those designed and constructed for applications and experiments for high-energy physics, are commonly built implementing general readout architectures, not specifically optimized in terms of speed. High-energy physics experiments use bidimensional matrices of sensitive elements located on a silicon die. Sensors are read out via other integrated circuits bump bonded over the sensor dies. The speed of the readout electronics can significantly increase the overall performance of the system, and so here novel forms of readout architectures are studied and described. These circuits have been investigated in terms of speed and are particularly suited for large monolithic, low-pitch pixel detectors. The idea is to have a small simple structure that may be expanded to fit large matrices without affecting the layout complexity of the chip, while maintaining a reasonably high readout speed. The solutions might be applied to devices for applications not only in physics but also to general-purpose pixel detectors whenever online fast data sparsification is required. The paper also presents simulations on the efficiencies of the systems as a proof of concept for the proposed ideas. PMID:24778588
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
Delatorre, Carolina; Rodríguez, Ana; Rodríguez, Lucía; Majada, Juan P; Ordás, Ricardo J; Feito, Isabel
2017-01-01
Plant growth regulators (PGRs) are very different chemical compounds that play essential roles in plant development and the regulation of physiological processes. They exert their functions by a mechanism called cross-talk (involving either synergistic or antagonistic actions); thus, it is of great interest to study as many PGRs as possible to obtain accurate information about plant status. Much effort has been applied to develop methods capable of analyzing large numbers of these compounds, but they frequently exclude some chemical families or important PGRs within each family. In addition, most of the methods are specially designed for matrices that are easy to work with. Therefore, we wanted to develop a method that meets the requirements lacking in the literature while also being fast and reliable. Here we present a simple, fast and robust method for the extraction and quantification of 20 different PGRs using UHPLC-MS/MS optimized in complex matrices. Copyright © 2016 Elsevier B.V. All rights reserved.
Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data
Hu, Jianhua; Wang, Peng; Qu, Annie
2014-01-01
Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
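The eigenvector-basis idea above can be illustrated numerically: build basis matrices from eigenvectors of the empirical correlation matrix and fit a linear combination of them to approximate its inverse. This is a toy sketch under assumed compound-symmetry data; the paper's penalized model-selection step is replaced here by a fixed choice of one eigenvector basis matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated longitudinal data: 500 subjects, t = 6 repeated measurements,
# with exchangeable (compound-symmetry) within-cluster correlation rho
t, rho = 6, 0.5
true_corr = (1 - rho) * np.eye(t) + rho * np.ones((t, t))
y = rng.standard_normal((500, t)) @ np.linalg.cholesky(true_corr).T

R = np.corrcoef(y, rowvar=False)       # empirical correlation matrix
lam, V = np.linalg.eigh(R)             # eigenvalues in ascending order

# basis matrices: the identity plus the outer product of the top eigenvector
B = [np.eye(t), np.outer(V[:, -1], V[:, -1])]

# fit coefficients a so that sum_i a_i B_i approximates inv(R), by least
# squares on the residual || (sum_i a_i B_i) R - I ||_F
X = np.column_stack([(Bi @ R).ravel() for Bi in B])
a, *_ = np.linalg.lstsq(X, np.eye(t).ravel(), rcond=None)
M = sum(ai * Bi for ai, Bi in zip(a, B))

err = np.linalg.norm(M @ R - np.eye(t))
print("approximation residual:", err)
```

For exchangeable correlation, the identity plus a single eigenvector outer product already captures the inverse well; richer within-cluster patterns would select more basis matrices.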
Cheng, Christina W.; Solorio, Loran D.; Alsberg, Eben
2014-01-01
The reconstruction of musculoskeletal defects is a constant challenge for orthopaedic surgeons. Musculoskeletal injuries such as fractures, chondral lesions, infections and tumor debulking can often lead to large tissue voids requiring reconstruction with tissue grafts. Autografts are currently the gold standard in orthopaedic tissue reconstruction; however, there is a limit to the amount of tissue that can be harvested before compromising the donor site. Tissue engineering strategies using allogeneic or xenogeneic decellularized bone, cartilage, skeletal muscle, tendon and ligament have emerged as promising potential alternative treatments. The extracellular matrix provides a natural scaffold for cell attachment, proliferation and differentiation. Decellularization of in vitro cell-derived matrices can also enable the generation of autologous constructs from tissue-specific cells or progenitor cells. Although decellularized bone tissue is widely used clinically in orthopaedic applications, the exciting potential of decellularized cartilage, skeletal muscle, tendon and ligament cell-derived matrices has only recently begun to be explored for ultimate translation to the orthopaedic clinic. PMID:24417915
Arctic curves in path models from the tangent method
NASA Astrophysics Data System (ADS)
Di Francesco, Philippe; Lapa, Matthew F.
2018-04-01
Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
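The "equivalent real linear system" route mentioned at the end of the abstract can be demonstrated concretely: a shifted Hermitian (hence complex non-Hermitian) matrix is rewritten as a doubled real system and solved with a QMR implementation. SciPy's `qmr` is used here as a generic stand-in, not Freund's code, and the shifted Laplacian below is an assumed Helmholtz-like toy problem.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
sigma = 0.5
# shifted Hermitian matrix A = L + i*sigma*I, with L a 1-D Laplacian
Lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = (Lap + 1j * sigma * sp.identity(n)).tocsr()
b = np.ones(n, dtype=complex)

# equivalent real system: [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]
K = sp.bmat([[Lap, -sigma * sp.identity(n)],
             [sigma * sp.identity(n), Lap]], format="csr")
rhs = np.concatenate([b.real, b.imag])

y, info = spla.qmr(K, rhs)          # QMR on the real non-symmetric system
x = y[:n] + 1j * y[n:]              # reassemble the complex solution

res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(info, res)
```

The paper's point is that specialized complex-symmetric or shifted-Hermitian Krylov variants avoid this doubling of dimension; the real-equivalent form is the baseline they are compared against.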
Random Matrix Theory in molecular dynamics analysis.
Palese, Luigi Leonardo
2015-01-01
It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low-index projections. Because this is reminiscent of the results obtained by performing PCA on a multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of correlation matrices makes it easy to differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting intrinsic system properties. Our results clearly show that protein dynamics is not really Brownian, even in the presence of the cosine-shaped low-index projections on principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
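The random-versus-significant separation that this spectral analysis relies on can be sketched with the Marchenko-Pastur law: eigenvalues of a correlation matrix of pure noise fall (asymptotically) inside a known band, while eigenvalues escaping the band signal genuine collective motion. Toy Gaussian data stand in here for the apoCox17 trajectories.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 2000, 100                  # "frames" and "coordinates"
q = N / T
lam_max = (1 + np.sqrt(q)) ** 2   # Marchenko-Pastur upper edge

# pure noise: all correlation eigenvalues stay (near) below lam_max
noise = rng.standard_normal((T, N))
ev_noise = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))

# noise plus one collective mode: one eigenvalue escapes the MP band
mode = rng.standard_normal((T, 1))
signal = noise + 2.0 * mode       # broadcast: common mode on all coordinates
ev_sig = np.linalg.eigvalsh(np.corrcoef(signal, rowvar=False))

print(ev_noise.max(), ev_sig.max(), lam_max)
```

An eigenvalue well above `lam_max` cannot be explained by the finite length of the process, which is exactly the diagnostic the abstract applies to short MD trajectories.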
Spatio-temporal Hotelling observer for signal detection from image sequences
Caucci, Luca; Barrett, Harrison H.; Rodríguez, Jeffrey J.
2010-01-01
Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection. PMID:19550494
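The Hotelling observer described above reduces, in the linear-Gaussian setting, to the template t = K⁻¹s, where K is the data covariance and s the mean signal; detectability is the Hotelling SNR, SNR² = sᵀK⁻¹s. A minimal 1-D sketch with an assumed toy covariance (the spatio-temporal case stacks all frames into one long vector and applies the same formula to the joint covariance):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 64                                   # pixels in a (1-D) "image"
s = np.exp(-0.5 * ((np.arange(m) - 32) / 3.0) ** 2)  # known signal profile

# assumed background covariance: exponentially correlated noise
idx = np.arange(m)
K = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0) + 0.1 * np.eye(m)

t = np.linalg.solve(K, s)                # Hotelling template: K^{-1} s
snr2 = s @ t                             # Hotelling SNR^2 = s^T K^{-1} s

# Monte Carlo check: template output under signal-absent vs. signal-present
noise = rng.standard_normal((5000, m)) @ np.linalg.cholesky(K).T
r0 = noise @ t                           # signal absent
r1 = (noise + s) @ t                     # signal present
sep = (r1.mean() - r0.mean()) / r0.std()
print(snr2, sep ** 2)                    # sep^2 should approximate snr2
```

For white noise K is a multiple of the identity and t reduces to the familiar matched filter; the prewhitening by K⁻¹ is what generalizes it.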
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
1979-07-01
[Garbled figure and table residue from a 1979 report on low-frequency filter design; only the cited reference is recoverable: F. R. Gantmacher, The Theory of Matrices, Vol. 1, Chelsea Publishing Co., New York, N.Y., 1959.]
Mechanical environmental test program for the Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Buckingham, R.; Sharp, G. R.
1974-01-01
This paper describes the spacecraft and subsystem level mechanical environmental test program which was developed for the Communications Technology Satellite (CTS). At the spacecraft level it includes sine and random vibration, static loading, centrifuge loading, pyrotechnic and separation shock simulation and (tentatively) acoustics. At the subsystem level it entails the same type of environmental exposure as applicable. Matrices of system and subsystem tests are presented showing type, level and hardware status for each major test.
A computational proposal for designing structured RNA pools for in vitro selection of RNAs.
Kim, Namhee; Gan, Hin Hark; Schlick, Tamar
2007-04-01
Although in vitro selection technology is a versatile experimental tool for discovering novel synthetic RNA molecules, finding complex RNA molecules is difficult because most RNAs identified from random sequence pools are simple motifs, consistent with recent computational analysis of such sequence pools. Thus, enriching in vitro selection pools with complex structures could increase the probability of discovering novel RNAs. Here we develop an approach for engineering sequence pools that links RNA sequence space regions with corresponding structural distributions via a "mixing matrix" approach combined with a graph theory analysis. We define five classes of mixing matrices motivated by covariance mutations in RNA; these constructs define nucleotide transition rates and are applied to chosen starting sequences to yield specific nonrandom pools. We examine the coverage of sequence space as a function of the mixing matrix and starting sequence via clustering analysis. We show that, in contrast to random sequences, which are associated only with a local region of sequence space, our designed pools, including a structured pool for GTP aptamers, can target specific motifs. It follows that experimental synthesis of designed pools can benefit from using optimized starting sequences, mixing matrices, and pool fractions associated with each of our constructed pools as a guide. Automation of our approach could provide practical tools for pool design applications for in vitro selection of RNAs and related problems.
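The "mixing matrix" construction above can be sketched as a 4x4 matrix of nucleotide transition probabilities applied independently at each position of a chosen starting sequence, yielding a biased, structured pool instead of a uniformly random one. The specific matrix below is an invented illustration, not one of the paper's five classes.

```python
import numpy as np

rng = np.random.default_rng(3)
bases = np.array(list("ACGU"))
index = {b: i for i, b in enumerate(bases)}

# mixing matrix: rows = current base, columns = mutated base; rows sum to 1.
# Here: 70% chance of keeping a base, 30% spread over the other three.
M = np.full((4, 4), 0.1)
np.fill_diagonal(M, 0.7)

def sample_pool(start_seq, n_seqs):
    """Mutate start_seq position-by-position according to mixing matrix M."""
    start = [index[b] for b in start_seq]
    out = []
    for _ in range(n_seqs):
        new = [rng.choice(4, p=M[i]) for i in start]
        out.append("".join(bases[new]))
    return out

start = "GGGAAACCCUUU"               # hypothetical starting sequence
pool = sample_pool(start, 1000)

# on average ~70% of positions should be conserved across the pool
frac_same = np.mean([[a == b for a, b in zip(s, start)] for s in pool])
print(frac_same)
```

Tuning the diagonal of M interpolates between a pool concentrated near the starting structure and a fully random pool, which is the sequence-space coverage knob the abstract describes.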
Korsholm, Anne Sofie; Kjær, Thomas Nordstrøm; Ornstrup, Marie Juul; Pedersen, Steen Bønløkke
2017-03-04
Resveratrol possesses several beneficial metabolic effects in rodents, while the effects of resveratrol in humans remain unclear. Therefore, we performed a non-targeted comprehensive metabolomic analysis on blood, urine, adipose tissue, and skeletal muscle tissue in middle-aged men with metabolic syndrome randomized to either resveratrol or placebo treatment for four months. Changes in steroid hormones across all four matrices were the most pronounced changes observed. Resveratrol treatment reduced sulfated androgen precursors in blood, adipose tissue, and muscle tissue, and increased these metabolites in urine. Furthermore, markers of muscle turnover were increased and lipid metabolism was affected, with increased intracellular glycerol and accumulation of long-chain saturated, monounsaturated, and polyunsaturated (n3 and n6) free fatty acids in resveratrol-treated men. Finally, urinary derivatives of aromatic amino acids, which mainly reflect the composition of the gut microbiota, were altered upon resveratrol treatment. In conclusion, the non-targeted metabolomics approach applied to four different matrices provided evidence of subtle but robust effects on several metabolic pathways following resveratrol treatment for four months in men with metabolic syndrome: effects that, for the most part, would not have been detected by routine analyses. The affected pathways should be the focus of future clinical trials on resveratrol's effects, and perhaps particularly the areas of steroid metabolism and the gut microbiome.
Chen, Xi; Zhao, Liu; Özdemir, Mujgan Sagir; Liang, Haiming
2018-04-05
The resource allocation of air pollution treatment in China is a complex problem, since many alternatives are available and many criteria mutually influence one another. A number of stakeholders participate in this issue, holding different opinions that reflect the benefits they value. A method is therefore needed, based on the analytic network process (ANP) and large-group decision-making (LGDM), to rank the alternatives while considering interdependent criteria and stakeholders' opinions. In this method, the criteria related to air pollution treatment are examined by experts. Then, the network structure of the problem is constructed based on the relationships between the criteria. Further, each participant in every group provides comparison matrices by judging the relative importance of criteria according to dominance with respect to a certain criterion (or the goal), and the geometric average comparison matrix of each group is obtained. The decision weight of each group is derived by combining the subjective weight and the objective weight, in which the subjective weight is provided by organizers, while the objective weight is determined by considering the consensus levels of the groups. The final comparison matrices are obtained as the geometric average of the group comparison matrices weighted by the decision weights. Next, the resource allocation is made according to the priorities of the alternatives using the Super Decisions software. Finally, an example is given to illustrate the use of the proposed method.
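Two of the computational steps above can be sketched in isolation: (i) combining participants' pairwise comparison matrices by an element-wise geometric mean, and (ii) extracting priorities as the principal eigenvector of the combined matrix. The 3x3 matrices below are invented for illustration; the full ANP network and group weighting scheme are not reproduced.

```python
import numpy as np

# two participants' pairwise comparison matrices (reciprocal, Saaty scale)
A1 = np.array([[1.0, 3.0, 5.0],
               [1/3, 1.0, 2.0],
               [1/5, 1/2, 1.0]])
A2 = np.array([[1.0, 2.0, 4.0],
               [1/2, 1.0, 3.0],
               [1/4, 1/3, 1.0]])

# element-wise geometric mean preserves the reciprocal property
G = np.exp((np.log(A1) + np.log(A2)) / 2)

# priorities: normalized principal (Perron) eigenvector of G
vals, vecs = np.linalg.eig(G)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w = w / w.sum()
print(w)
```

With group decision weights, the same geometric averaging is applied with weighted exponents before the eigenvector step.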
Numerical Optimization Algorithms and Software for Systems Biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
A stochastic Markov chain model to describe lung cancer growth and metastasis.
Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter
2012-01-01
A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
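The core mechanics of the model above, a row-stochastic transition matrix over metastatic sites whose steady state is matched to observed tumor distributions, can be stripped down to a few lines. The 3-site matrix below is invented; the paper's 50-site matrices are fit to autopsy data through the iterative search it describes.

```python
import numpy as np

# sites: 0 = lung (primary), 1 and 2 = metastatic locations (toy example)
# rows sum to 1: P[i, j] = probability a random walker moves from i to j
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# steady state as the long-time limit of the chain (power iteration)
pi = np.full(3, 1 / 3)
for _ in range(200):
    pi = pi @ P

print(pi)   # stationary distribution: pi = pi P
```

The paper's inverse problem runs this logic backwards: adjust the entries of P (under an ensemble of transition matrices) until `pi` reproduces the ensemble metastatic distribution observed in the data.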
Bjoerke-Bertheussen, Jeanette; Schoeyen, Helle; Andreassen, Ole A; Malt, Ulrik F; Oedegaard, Ketil J; Morken, Gunnar; Sundet, Kjetil; Vaaler, Arne E; Auestad, Bjoern; Kessler, Ute
2017-12-21
Electroconvulsive therapy is an effective treatment for bipolar depression, but there are concerns about whether it causes long-term neurocognitive impairment. In this multicenter randomized controlled trial, in-patients with treatment-resistant bipolar depression were randomized to either algorithm-based pharmacologic treatment or right unilateral electroconvulsive therapy. After the 6-week treatment period, all of the patients received maintenance pharmacotherapy as recommended by their clinician guided by a relevant treatment algorithm. Patients were assessed at baseline and at 6 months. Neurocognitive functions were assessed using the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) Consensus Cognitive Battery, and autobiographical memory consistency was assessed using the Autobiographical Memory Interview-Short Form. Seventy-three patients entered the trial, of whom 51 and 26 completed neurocognitive assessments at baseline and 6 months, respectively. The MATRICS Consensus Cognitive Battery composite score improved by 4.1 points in both groups (P = .042) from baseline to 6 months (from 40.8 to 44.9 and from 41.9 to 46.0 in the algorithm-based pharmacologic treatment and electroconvulsive therapy groups, respectively). The Autobiographical Memory Interview-Short Form consistency scores were reduced in both groups (72.3% vs 64.3% in the algorithm-based pharmacologic treatment and electroconvulsive therapy groups, respectively; P = .085). This study did not find that right unilateral electroconvulsive therapy caused long-term impairment in neurocognitive functions compared to algorithm-based pharmacologic treatment in bipolar depression as measured using standard neuropsychological tests, but due to the low number of patients in the study the results should be interpreted with caution. ClinicalTrials.gov: NCT00664976. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
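The ensemble above is easy to sample numerically: draw two independent Haar-like random bipartite pure states, partially trace each to get a random mixed state, and diagonalize the difference. Small dimensions only; the asymptotic eigenvalue density of the paper is approached as the dimensions grow.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_density_matrix(d, d_env):
    """Partial trace over a d_env-dimensional environment of a random pure state."""
    psi = rng.standard_normal((d, d_env)) + 1j * rng.standard_normal((d, d_env))
    psi /= np.linalg.norm(psi)          # normalize the bipartite pure state
    return psi @ psi.conj().T           # rho = Tr_env |psi><psi|

d = 16
rho1 = random_density_matrix(d, 16)
rho2 = random_density_matrix(d, 16)

ev = np.linalg.eigvalsh(rho1 - rho2)    # spectrum of the difference matrix
print(ev.sum(), np.abs(ev).sum() / 2)   # trace ~ 0; second value = trace distance
```

The trace of the difference vanishes identically, so the spectrum is symmetric in the large-dimension limit, and the trace distance is half the sum of the absolute eigenvalues, one of the distance measures the abstract quantifies.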
Yu, Jimin; Yang, Chenchen; Tang, Xiaoming; Wang, Ping
2018-03-01
This paper investigates the H∞ control problem for uncertain linear systems over networks with random communication data dropout and actuator saturation. The random data dropout process is modeled by a Bernoulli distributed white sequence with a known conditional probability distribution, and the actuator saturation is confined in a convex hull by introducing a group of auxiliary matrices. By constructing a quadratic Lyapunov function, effective conditions for the state-feedback-based H∞ controller and the observer-based H∞ controller are proposed in the form of non-convex matrix inequalities to take the random data dropout and actuator saturation into consideration simultaneously, and the non-convex feasibility problem is solved by applying the cone complementarity linearization (CCL) procedure. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed design techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Pruning a minimum spanning tree
NASA Astrophysics Data System (ADS)
Sandoval, Leonidas
2012-04-01
This work employs various techniques in order to filter random noise from the information provided by minimum spanning trees obtained from the correlation matrices of international stock market indices prior to and during times of crisis. The first technique establishes a threshold above which connections are considered affected by noise, based on the study of random networks with the same probability density distribution as the original data. The second technique is to judge the strength of a connection by its survival rate, which is the amount of time a connection between two stock market indices endures. The idea is that true connections will survive for longer periods of time, and that random connections will not. That information is then combined with the information obtained from the first technique in order to create a smaller network, in which most of the connections are either strong or enduring in time.
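The starting object of the filtering procedure above, a minimum spanning tree built from a correlation matrix, is conventionally obtained through the distance transform d_ij = sqrt(2(1 - c_ij)). A minimal sketch with synthetic index returns standing in for the real stock-index data:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(5)
n_idx, T = 8, 500
# synthetic "index returns": idiosyncratic noise plus a common market mode
common = rng.standard_normal((T, 1))
returns = 0.5 * common + rng.standard_normal((T, n_idx))

C = np.corrcoef(returns, rowvar=False)
D = np.sqrt(2.0 * (1.0 - C))          # metric distance from correlations
np.fill_diagonal(D, 0.0)

mst = minimum_spanning_tree(D)        # sparse matrix holding the n-1 tree edges
print(mst.nnz, mst.sum())
```

The noise-filtering step the abstract describes then prunes edges of this tree whose distances fall above the random-network threshold or whose survival rate across time windows is low.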
Correlation Energies from the Two-Component Random Phase Approximation.
Kühn, Michael
2014-02-11
The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems, and then represented as an integral over imaginary frequency using the resolution-of-the-identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.
Sports drug testing using complementary matrices: Advantages and limitations.
Thevis, Mario; Geyer, Hans; Tretzel, Laura; Schänzer, Wilhelm
2016-10-25
Today, routine doping controls largely rely on testing whole blood, serum, and urine samples. These matrices allow comprehensive coverage of inorganic as well as low and high molecular mass organic analytes relevant to doping controls, and they are collected and transferred from sampling sites to accredited anti-doping laboratories under standardized conditions. Various aspects, including time and cost-effectiveness as well as the intrusiveness and invasiveness of the sampling procedure, but also analyte stability and the breadth of the contained information, have motivated the consideration and assessment of the value that alternative matrices could add to modern sports drug testing programs. Such alternatives could be dried blood spots (DBS), dried plasma spots (DPS), oral fluid (OF), exhaled breath (EB), and hair. In this review, recent developments and test methods concerning these alternative matrices, and the expected or proven contributions as well as limitations of these specimens in the context of the international anti-doping fight, are presented and discussed, guided by current regulations for prohibited substances and methods of doping as established by the World Anti-Doping Agency (WADA). Focusing on literature published between 2011 and 2015, examples of doping control analytical assays concerning non-approved substances, anabolic agents, peptide hormones/growth factors/related substances and mimetics, β2-agonists, hormone and metabolic modulators, diuretics and masking agents, stimulants, narcotics, cannabinoids, glucocorticoids, and beta-blockers were selected to outline the advantages and limitations of the aforementioned alternative matrices as compared to conventional doping control samples (i.e. urine and blood/serum). Copyright © 2016 Elsevier B.V. All rights reserved.
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
NASA Astrophysics Data System (ADS)
Pato, Mauricio P.; Oshanin, Gleb
2013-03-01
We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2/((1/n)∑_{j=1}^n x_j^2), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for the Gaussian unitary ensemble (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n, and analyse their behaviour.
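The convergence claim above is straightforward to probe numerically: sample GUE matrices, form w for a randomly chosen (unordered) eigenvalue, and check that the sample concentrates on [0, 4] with unit mean, as the Marčenko-Pastur limit requires. Modest sizes only; this is a consistency sketch, not a reproduction of the paper's exact finite-n formulas.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_samples = 60, 400
ws = np.empty(n_samples)

for s in range(n_samples):
    # GUE (beta = 2) matrix: Hermitian part of a complex Gaussian matrix
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (H + H.conj().T) / 2
    x = np.linalg.eigvalsh(H)
    j = rng.integers(n)                   # "unordered": pick one eigenvalue at random
    ws[s] = x[j] ** 2 / np.mean(x ** 2)   # w = x_j^2 / ((1/n) sum x_i^2)

print(ws.min(), ws.max(), ws.mean())
```

By construction E[w] = 1 exactly (averaging over the random choice of j), and for large n the empirical maximum approaches the support edge at 4 rather than the formal bound n.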
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.
2018-06-01
We suggest a method of studying the joint probability density (JPD) of an eigenvalue and the associated 'non-orthogonality overlap factor' (also known as the 'eigenvalue condition number') of the left and right eigenvectors for non-selfadjoint Gaussian random matrices of size N × N. First we derive the general finite-N expression for the JPD of a real eigenvalue λ and the associated non-orthogonality factor in the real Ginibre ensemble, and then analyze its 'bulk' and 'edge' scaling limits. The ensuing distribution is maximally heavy-tailed, so that all integer moments beyond normalization are divergent. A similar calculation for a complex eigenvalue z and the associated non-orthogonality factor in the complex Ginibre ensemble is presented as well and yields a distribution with finite first moment. Its 'bulk' scaling limit yields a distribution whose first moment reproduces the well-known result of Chalker and Mehlig (Phys Rev Lett 81(16):3367-3370, 1998), and we provide the 'edge' scaling distribution for this case as well. Our method involves evaluating the ensemble average of products and ratios of integer and half-integer powers of characteristic polynomials for Ginibre matrices, which we perform in the framework of a supersymmetry approach. Our paper complements recent studies by Bourgade and Dubach (The distribution of overlaps between eigenvectors of Ginibre matrices, 2018. arXiv:1801.01219).
Vertices cannot be hidden from quantum spatial search for almost all random graphs
NASA Astrophysics Data System (ADS)
Glos, Adam; Krawiec, Aleksandra; Kukulski, Ryszard; Puchała, Zbigniew
2018-04-01
In this paper, we show that all nodes can be found optimally for almost all random Erdős-Rényi G(n,p) graphs using the continuous-time quantum spatial search procedure. This works for both adjacency and Laplacian matrices, though under different conditions. The first requires p = ω(log^8(n)/n), while the second requires p ≥ (1 + ε)log(n)/n, where ε > 0. The proof was made by analyzing the convergence of eigenvectors corresponding to outlying eigenvalues in the ‖·‖∞ norm. At the same time, for p < (1 − ε)log(n)/n the property does not hold for any matrix, due to connectivity issues. Hence, our derivation concerning the Laplacian matrix is tight.
Spectra of Adjacency Matrices in Networks of Extreme Introverts and Extroverts
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Ezzatabadipour, Mohammadmehdi; Zia, R. K. P.
Many interesting properties were discovered in recent studies of preferred degree networks, suitable for describing the social behavior of individuals who tend to prefer a certain number of contacts. In an extreme version (coined the XIE model), introverts always cut links while extroverts always add them. While the intra-group links are static, the cross-links are dynamic and lead to an ensemble of bipartite graphs, with extraordinary correlations between elements of the incidence matrix {n_ij}. In the steady state, this system can be regarded as one in thermal equilibrium with long-ranged interactions between the n_ij's, and displays an extreme Thouless effect. Here, we report simulation studies of a different perspective of networks, namely, the spectra associated with this ensemble of adjacency matrices {a_ij}. As a baseline, we first consider the spectra associated with a simple random (Erdős-Rényi) ensemble of bipartite graphs, where simulation results can be understood analytically. Work supported by the NSF through Grant DMR-1507371.
Covariance structure in the skull of Catarrhini: a case of pattern stasis and magnitude evolution.
de Oliveira, Felipe Bandoni; Porto, Arthur; Marroig, Gabriel
2009-04-01
The study of the genetic variance/covariance matrix (G-matrix) is a recent and fruitful approach in evolutionary biology, providing a window for investigating the evolution of complex characters. Although G-matrix studies were originally conducted on microevolutionary timescales, they could be extrapolated to macroevolution as long as the G-matrix remains relatively constant, or proportional, over the period of interest. A promising approach to investigating the constancy of G-matrices is to compare their phenotypic counterparts (P-matrices) in a large group of related species; if significant similarity is found among several taxa, it is very likely that the underlying G-matrices are also equivalent. Here we study the similarity of covariance and correlation structure in a broad sample of Old World monkeys and apes (Catarrhini). We made phylogenetically structured comparisons of correlation and covariance matrices derived from 39 skull traits, ranging from between species to the superfamily level. We also compared the overall magnitude of integration between skull traits (r2) for all Catarrhini genera. Our results show that P-matrices were not strictly constant among catarrhines, but the amount of divergence observed among taxa was generally low. There was a significant positive correlation between the amount of divergence in correlation and covariance patterns among the 30 genera and their phylogenetic distances derived from a recently proposed phylogenetic hypothesis. Our data demonstrate that the P-matrices remained relatively similar throughout the evolutionary history of catarrhines, and comparisons with the G-matrix available for a New World monkey genus (Saguinus) suggest that the same holds for all anthropoids. 
The magnitude of integration, in contrast, varied considerably among genera, indicating that evolution of the magnitude, rather than the pattern of inter-trait correlations, might have played an important role in the diversification of the catarrhine skull.
Nia, Yacine; Mutel, Isabelle; Assere, Adrien; Lombard, Bertrand; Auvray, Frederic; Hennekinne, Jacques-Antoine
2016-04-13
Staphylococcal food poisoning outbreaks are a major cause of foodborne illness in Europe, and their notification has been mandatory since 2005. Even though the European regulation on microbiological criteria for food defines a criterion on staphylococcal enterotoxin (SE) only in cheese and dairy products, European Food Safety Authority (EFSA) data reported that various types of food matrices are involved in staphylococcal food poisoning outbreaks. The European Screening Method (ESM) of the European Union Reference Laboratory for Coagulase Positive Staphylococci (EURL CPS) was validated in 2011 for SE detection in food matrices and is currently the official method used for screening purposes in Europe. In this context, EURL CPS annually organizes Inter-Laboratory Proficiency Testing Trials (ILPTs) to evaluate the competency of the European countries' National Reference Laboratories (NRLs) to analyse the SE content of food matrices. A total of 31 NRLs, representing 93% of European countries, participated in these ILPTs. Eight food matrices were used for ILPTs over the period 2013-2015, including cheese, freeze-dried cheese, tuna, mackerel, roasted chicken, ready-to-eat food, milk, and pastry. Food samples were spiked with four SE types (i.e., SEA, SEC, SED, and SEE) at various concentrations. Homogeneity and stability studies showed that ILPT samples were both homogeneous and stable. The analysis of the results obtained by participants for a total of 155 blank and 620 contaminated samples allowed for evaluation of the trueness (>98%) and specificity (100%) of ESM. Further to the validation study of ESM carried out in 2011, these three ILPTs allowed for assessment of the proficiency of the NRL network and of the performance of ESM on a large variety of food matrices and samples. The ILPT design presented here will be helpful for the organization of ILPTs on SE detection by NRLs or other expert laboratories.
Estimated correlation matrices and portfolio optimization
NASA Astrophysics Data System (ADS)
Pafka, Szilárd; Kondor, Imre
2004-11-01
Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices which will inevitably contain a certain amount of noise, due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. Since in our artificial world the only source of error is the finite length of the time series, and the “true” model, hence also the “true” correlation matrix, is precisely known, we can, in sharp contrast with empirical studies, precisely compare the performance of the various noise reduction techniques. 
One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
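The simulation-based testing loop described above can be sketched in a few lines. The one-factor model, dimensions, and sample lengths below are illustrative assumptions for the sketch, not the authors' actual toy models:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                               # number of assets (toy choice)
beta = rng.uniform(0.3, 0.7, N)     # one-factor loadings (toy model)

# "True" correlation matrix: C_ij = beta_i * beta_j off-diagonal, 1 on the diagonal
C_true = np.outer(beta, beta)
np.fill_diagonal(C_true, 1.0)

def sample_correlation(T):
    """Simulate T return vectors from the true model and estimate the correlation."""
    L = np.linalg.cholesky(C_true)
    X = rng.standard_normal((T, N)) @ L.T
    return np.corrcoef(X, rowvar=False)

# The estimation noise shrinks as the time series grows, and here, unlike in
# empirical studies, the "true" matrix is known exactly for comparison.
err = {T: np.linalg.norm(sample_correlation(T) - C_true) for T in (50, 5000)}
assert err[5000] < err[50]
```

In the full framework, the comparison against `C_true` would be repeated for each estimator or filtering technique under test.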
Akdemir, Hülya; Suzerer, Veysel; Tilkat, Engin; Onay, Ahmet; Çiftçi, Yelda Ozden
2016-12-01
Determination of the genetic stability of in vitro-grown plantlets is needed for safe and large-scale production of mature trees. In this study, genetic variation of long-term micropropagated mature pistachio developed through direct shoot bud regeneration using apical buds (protocol A) and in vitro-derived leaves (protocol B) was assessed via DNA-based molecular markers. Randomly amplified polymorphic DNA (RAPD), inter-simple sequence repeat (ISSR), and amplified fragment length polymorphism (AFLP) were employed, and the obtained PIC values from RAPD (0.226), ISSR (0.220), and AFLP (0.241) showed that micropropagation of pistachio for different periods of time resulted in "reasonable polymorphism" among the donor plant and its 18 clones. Mantel's test showed consistent polymorphism levels between marker systems based on similarity matrices. In conclusion, this is the first study on the occurrence of genetic variability in long-term micropropagated mature pistachio plantlets. The obtained results clearly indicated that the different marker approaches used in this study are reliable for assessing tissue culture-induced variations in long-term cultured pistachio plantlets.
NASA Astrophysics Data System (ADS)
Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.
2015-03-01
Model Observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling Observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
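As a rough illustration of the CHO construction described above, the sketch below uses white-noise images and random channel templates in place of realistic nuclear medicine data and visual-system channels; all dimensions, the signal profile, and the sample counts are made-up values for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nchan, ntrain = 64 * 64, 4, 500

# Toy "channels": random templates stand in for visual-system channel models
U = rng.standard_normal((npix, nchan))

signal = np.zeros(npix)
signal[:100] = 0.5                       # hypothetical signal profile

def channel_outputs(n, with_signal):
    """Dot product of channel templates with n simulated images."""
    imgs = rng.standard_normal((n, npix))        # white-noise background (toy)
    if with_signal:
        imgs += signal
    return imgs @ U                               # shape (n, nchan)

v0 = channel_outputs(ntrain, with_signal=False)
v1 = channel_outputs(ntrain, with_signal=True)
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))          # pooled channel covariance
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template

# The linear discriminant separates the two classes on average
t0 = channel_outputs(2000, with_signal=False) @ w
t1 = channel_outputs(2000, with_signal=True) @ w
assert t1.mean() > t0.mean()
```

The normality assumption examined in the paper concerns the distributions of `v0` and `v1`; here they are Gaussian by construction, which is exactly what realistic signal variability may violate.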
Collection of quantitative chemical release field data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demirgian, J.; Macha, S.; Loyola Univ.
1999-01-01
Detection and quantitation of chemicals in the environment requires Fourier-transform infrared (FTIR) instruments that are properly calibrated and tested. This calibration and testing requires field testing using matrices that are representative of actual instrument use conditions. Three methods commonly used for developing calibration files and training sets in the field are a closed optical cell or chamber, a large-scale chemical release, and a small-scale chemical release. There is no best method. The advantages and limitations of each method should be considered in evaluating field results. Proper calibration characterizes the sensitivity of an instrument, its ability to detect a component in different matrices, and the quantitative accuracy and precision of the results.
NASA Technical Reports Server (NTRS)
Buchholz, Peter; Ciardo, Gianfranco; Donatelli, Susanna; Kemper, Peter
1997-01-01
We present a systematic discussion of algorithms to multiply a vector by a matrix expressed as the Kronecker product of sparse matrices, extending previous work in a unified notational framework. Then, we use our results to define new algorithms for the solution of large structured Markov models. In addition to a comprehensive overview of existing approaches, we give new results with respect to: (1) managing certain types of state-dependent behavior without incurring extra cost; (2) supporting both Jacobi-style and Gauss-Seidel-style methods by appropriate multiplication algorithms; (3) speeding up algorithms that consider probability vectors of size equal to the "actual" state space instead of the "potential" state space.
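The core kernel discussed above, multiplying a vector by a Kronecker product of matrices without ever forming the full product, can be sketched with the standard reshape trick. This generic implementation (square factors assumed for simplicity) illustrates the idea, not the paper's exact algorithms:

```python
import numpy as np

def kron_matvec(factors, x):
    """Compute kron(factors[0], ..., factors[-1]) @ x without forming the product.

    Reshapes x into a tensor and contracts one factor per axis, so the cost
    scales with the sum of the factor sizes rather than the full matrix size.
    """
    dims = [A.shape[1] for A in factors]
    y = x.reshape(dims)
    for i, A in enumerate(factors):
        # Contract factor A against axis i, then move the result axis back to i
        y = np.moveaxis(np.tensordot(A, y, axes=(1, i)), 0, i)
    return y.reshape(-1)

# Check against the explicit Kronecker product on a small example
rng = np.random.default_rng(1)
A, B, C = (rng.random((3, 3)) for _ in range(3))
x = rng.random(27)
direct = np.kron(np.kron(A, B), C) @ x
assert np.allclose(kron_matvec([A, B, C], x), direct)
```

For structured Markov models, `x` would be a probability vector over the potential state space, and the factors would be the sparse component matrices of the model.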
Power laws for the backscattering matrices in the case of lidar sensing of cirrus clouds
NASA Astrophysics Data System (ADS)
Kustova, Natalia V.; Konoshonkin, Alexander V.; Borovoi, Anatoli; Okamoto, Hajime; Sato, Kaori; Katagiri, Shuichiro
2017-11-01
The data bank of backscattering matrices of cirrus clouds, calculated earlier by the authors and made freely available on the internet, has been replaced, in the case of randomly oriented crystals, by simple analytic equations. Four microphysical ratios conventionally measured by lidars have been calculated for different crystal shapes and effective sizes. These values can be used for retrieving the shapes of the crystals in cirrus clouds.
1981-01-01
Channel ... permutation codes as a special case. Such a code is generated by an initial vector x, a group G of orthogonal n-by-n matrices, and a ... random-access components, is introduced and studied. Under this scheme, the network stations are divided into groups, each of which is assigned a ...
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
Control of large flexible systems via eigenvalue relocation
NASA Technical Reports Server (NTRS)
Denman, E. D.; Jeon, G. J.
1985-01-01
For the vibration control of large flexible systems, a control scheme is presented in which the eigenvalues of the closed-loop system are assigned to predetermined locations within the feasible region through velocity-only feedback. Owing to the properties of second-order lambda-matrices and an efficient modal decoupling technique, the control scheme makes it possible to damp selected modes while leaving the rest of the modes unchanged.
An efficient strongly coupled immersed boundary method for deforming bodies
NASA Astrophysics Data System (ADS)
Goza, Andres; Colonius, Tim
2016-11-01
Immersed boundary methods treat the fluid and the immersed solid on separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
An algebraic equation solution process formulated in anticipation of banded linear equations.
DOT National Transportation Integrated Search
1971-01-01
A general method for the solution of large, sparsely banded, positive-definite, coefficient matrices is presented. The goal in developing the method was to produce an efficient and reliable solution process and to provide the user-programmer with a p...
An Analysis of the Max-Min Texture Measure.
1982-01-01
[Appendix list of tables: Confusion Matrices for Scenes A, B, C, E, and H, each for the PANC and IR bands.]
Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory
NASA Astrophysics Data System (ADS)
Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.
2008-11-01
It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line R. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space R^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in R^d. We also demonstrate that spin-polarized fermionic systems in R^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite-wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a nonanalytic behavior at the origin given by S(k) ~ |k| as k → 0. The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^(d+1), which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[-κ(d) r^(d+1)] for large r and finite d, where κ(d) is a positive d-dependent constant. 
We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.
Microstrip Butler matrix design and realization for 7 T MRI.
Yazdanbakhsh, Pedram; Solbach, Klaus
2011-07-01
This article presents the design and realization of 8 × 8 and 16 × 16 Butler matrices for 7 T MRI systems. With the focus on low insertion loss and high amplitude/phase accuracy, the microstrip line integration technology (microwave-integrated circuit) was chosen for the realization. Laminate material of high permittivity (ε(r) = 11) and large thickness (h = 3.2 mm) is shown to allow the best trade-off of circuit board size versus insertion loss, saving circuit area by extensive folding of branch-line coupler topology and meandering phase shifter and connecting strip lines and reducing mutual coupling of neighboring strip lines by shield structures between strip lines. With this approach, 8 × 8 Butler matrices were produced in single boards of 310 mm × 530 mm, whereas the 16 × 16 Butler matrices combined two submatrices of 8 × 8 with two smaller boards. Insertion loss was found at 0.73 and 1.1 dB for an 8 × 8 matrix and 16 × 16 matrix, respectively. Measured amplitude and phase errors are shown to represent highly pure mode excitation with unwanted modes suppressed by 40 and 35 dB, respectively. Both types of matrices were implemented with a 7 T MRI system and 8- and 16-element coil arrays for RF mode shimming experiments and operated successfully with 8 kW of RF power. Copyright © 2011 Wiley-Liss, Inc.
Dececchi, T Alex; Mabee, Paula M; Blackburn, David C
2016-01-01
Databases of organismal traits that aggregate information from one or multiple sources can be leveraged for large-scale analyses in biology. Yet the differences among these data streams and how well they capture trait diversity have never been explored. We present the first analysis of the differences between phenotypes captured in free text of descriptive publications ('monographs') and those used in phylogenetic analyses ('matrices'). We focus our analysis on osteological phenotypes of the limbs of four extinct vertebrate taxa critical to our understanding of the fin-to-limb transition. We find that there is low overlap between the anatomical entities used in these two sources of phenotype data, indicating that phenotypes represented in matrices are not simply a subset of those found in monographic descriptions. Perhaps as expected, compared to characters found in matrices, phenotypes in monographs tend to emphasize descriptive and positional morphology, be somewhat more complex, and relate to fewer additional taxa. While based on a small set of focal taxa, these qualitative and quantitative data suggest that either source of phenotypes alone will result in incomplete knowledge of variation for a given taxon. As a broader community develops to use and expand databases characterizing organismal trait diversity, it is important to recognize the limitations of the data sources and develop strategies to more fully characterize variation both within species and across the tree of life.
Jia, Hongjun; Martinez, Aleix M
2009-05-01
The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally be different from that captured by a different set. Ideally, we would like to find that solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.
Prediction of Ras-effector interactions using position energy matrices.
Kiel, Christina; Serrano, Luis
2007-09-01
One of the more challenging problems in biology is to determine the cellular protein interaction network. Progress has been made in predicting protein-protein interactions based on structural information, assuming that structurally similar proteins interact in a similar way. In a previous publication, we determined a genome-wide Ras-effector interaction network based on homology models, with high accuracy in predicting binding and non-binding domains. However, for prediction on a genome-wide scale, homology modelling is a time-consuming process. Therefore, here we developed a faster method using position energy matrices: based on different Ras-effector X-ray template structures, all amino acids in the effector binding domain are sequentially mutated to all other amino acid residues and the effect on binding energy is calculated. These pre-calculated matrices can then be used to score any Ras or effector sequence for binding. Based on position energy matrices, the sequences of putative Ras-binding domains can be scanned quickly to calculate an energy sum value. By calibrating energy sum values using quantitative experimental binding data, thresholds can be defined and non-binding domains can thus be excluded quickly. Sequences whose energy sum values lie above this threshold are considered to be potential binding domains and could be further analysed using homology modelling. This prediction method could be applied to other protein families sharing conserved interaction types, in order to determine large-scale cellular protein interaction networks quickly. Thus, it could have an important impact on future in silico structural genomics approaches, in particular with regard to increasing structural proteomics efforts aiming to determine all possible domain folds and interaction types. All matrices are deposited in the ADAN database (http://adan-embl.ibmc.umh.es/). Supplementary data are available at Bioinformatics online.
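A minimal sketch of scanning a sequence against a position energy matrix is shown below. The matrix entries, sequence, and threshold are hypothetical placeholders; in the actual method the matrices are computed from X-ray template structures and the threshold is calibrated against experimental binding data:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino acids, one column per residue type
rng = np.random.default_rng(3)

# Hypothetical position energy matrix: one row per binding-domain position,
# each entry a precomputed binding-energy contribution for that substitution.
L = 8
pem = rng.normal(0.0, 1.0, (L, len(AA)))

def energy_sum(seq, matrix):
    """Score a candidate binding-domain sequence against a position energy matrix."""
    return sum(matrix[i, AA.index(a)] for i, a in enumerate(seq))

threshold = -2.0              # hypothetical calibrated cutoff
seq = "ACDEFGHI"              # hypothetical candidate sequence
score = energy_sum(seq, pem)
binds = score >= threshold    # above threshold: potential binding domain
```

Because the scan is just a table lookup and a sum, it is fast enough to exclude non-binders genome-wide before any homology modelling.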
NASA Astrophysics Data System (ADS)
Eichinger, Benjamin
2016-07-01
We recall criteria on the spectrum of Jacobi matrices such that the corresponding isospectral torus consists of periodic operators. Motivated by those known results for Jacobi matrices, we define a new class of operators called GMP matrices. They form a certain Generalization of matrices related to the strong Moment Problem. This class allows us to give a parametrization of almost periodic finite gap Jacobi matrices by periodic GMP matrices. Moreover, due to their structural similarity we can carry over numerous results from the direct and inverse spectral theory of periodic Jacobi matrices to the class of periodic GMP matrices. In particular, we prove an analogue of the remarkable ''magic formula'' for this new class.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance in both training and test error when compared with other competitive state-of-the-art techniques.
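A bare-bones version of the Soft-Impute iteration described above can be written in a few lines; the full-SVD call below stands in for the paper's efficient low-rank SVD, and the toy problem and regularization value are illustrative choices:

```python
import numpy as np

def soft_impute(X, mask, lam, n_iter=100):
    """Soft-Impute: iteratively refill missing entries from a soft-thresholded SVD.

    X holds arbitrary values at missing positions; mask is True where observed.
    """
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        # Observed entries come from X, missing entries from the current estimate
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        s = np.maximum(s - lam, 0.0)   # soft-threshold the singular values
        Z = (U * s) @ Vt
    return Z

# Toy low-rank completion problem
rng = np.random.default_rng(4)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 truth
mask = rng.random(M.shape) < 0.7                                  # ~70% observed
Z = soft_impute(M, mask, lam=0.5)

# Reconstruction on the missing entries beats the trivial all-zeros fill
err = np.linalg.norm((Z - M)[~mask])
assert err < np.linalg.norm(M[~mask])
```

Sweeping `lam` from large to small, warm-starting each solve from the previous `Z`, yields the regularization path the abstract mentions.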
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with the application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
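For context, the traditional rank-1 Sherman-Morrison step that the proposed rank-k scheme generalizes can be sketched on a toy matrix; the well-conditioned random matrix below is a generic stand-in for an actual Slater matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned toy matrix
Ainv = np.linalg.inv(A)

# Proposed Monte Carlo move: replace row r of A with a new vector v
r, v = 2, rng.standard_normal(n)

# O(n) determinant ratio for the move, from the current inverse
# (matrix determinant lemma), avoiding any fresh O(n^3) determinant
ratio = v @ Ainv[:, r]

A_new = A.copy()
A_new[r] = v
assert np.isclose(ratio, np.linalg.det(A_new) / np.linalg.det(A))

# Sherman-Morrison rank-1 update of the inverse after accepting the move
u = v - A[r]
Ainv_new = Ainv - np.outer(Ainv[:, r], u @ Ainv) / ratio
assert np.allclose(Ainv_new, np.linalg.inv(A_new))
```

The delayed rank-k scheme batches several such accepted row replacements and applies them together, trading these memory-bound rank-1 updates for a compute-dense block update.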
Analysis and imaging of biocidal agrochemicals using ToF-SIMS.
Converso, Valerio; Fearn, Sarah; Ware, Ecaterina; McPhail, David S; Flemming, Anthony J; Bundy, Jacob G
2017-09-06
ToF-SIMS has been increasingly widely used in recent years to look at biological matrices, in particular for biomedical research, although there is still a lot of development needed to maximise the value of this technique in the life sciences. The main issue for biological matrices is the complexity of the mass spectra and hence the difficulty of detecting analytes specifically and precisely in the biological sample. Here we evaluated the use of ToF-SIMS in the agrochemical field, which remains a largely unexplored area for this technique. We profiled a large number of biocidal active ingredients (herbicides, fungicides, and insecticides); we then selected fludioxonil, a halogenated fungicide, as a model compound for more detailed study, including the effect of co-occurring biomolecules on detection limits. There was a wide range of sensitivity of the ToF-SIMS for the different active ingredient compounds, but fludioxonil was readily detected in real-world samples (wheat seeds coated with a commercial formulation). Fludioxonil did not penetrate the seed to any great depth, but was largely restricted to a layer coating the seed surface. ToF-SIMS has clear potential as a tool for not only detecting biocides in biological samples, but also mapping their distribution.
PLATSIM: A Simulation and Analysis Package for Large-Order Flexible Systems. Version 2.0
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Kenny, Sean P.; Giesy, Daniel P.
1997-01-01
The software package PLATSIM provides efficient time and frequency domain analysis of large-order generic space platforms. PLATSIM can perform open-loop analysis or closed-loop analysis with linear or nonlinear control system models. PLATSIM exploits the particular form of sparsity of the plant matrices for very efficient linear and nonlinear time domain analysis, as well as frequency domain analysis. A new, original algorithm for the efficient computation of open-loop and closed-loop frequency response functions for large-order systems has been developed and is implemented within the package. Furthermore, a novel and efficient jitter analysis routine which determines jitter and stability values from time simulations in a very efficient manner has been developed and is incorporated in the PLATSIM package. In the time domain analysis, PLATSIM simulates the response of the space platform to disturbances and calculates the jitter and stability values from the response time histories. In the frequency domain analysis, PLATSIM calculates frequency response function matrices and provides the corresponding Bode plots. The PLATSIM software package is written in MATLAB script language. A graphical user interface is developed in the package to provide convenient access to its various features.
de la Calle, Maria B; Devesa, Vicenta; Fiamegos, Yiannis; Vélez, Dinoraz
2017-09-01
The European Food Safety Authority (EFSA) underlined in its Scientific Opinion on Arsenic in Food that, in order to support a sound assessment of dietary exposure to inorganic arsenic, information about the distribution of arsenic species in various food types must be generated. A method, previously validated in a collaborative trial, has been applied to determine inorganic arsenic in a wide variety of food matrices, covering grains, mushrooms and food of marine origin (31 samples in total). The method is based on detection by flow injection-hydride generation-atomic absorption spectrometry of the iAs selectively extracted into chloroform after digestion of the proteins with concentrated HCl. The method is characterized by a limit of quantification of 10 µg/kg dry weight, which allowed quantification of inorganic arsenic in a large number of food matrices. Information is provided about the performance scores given to results obtained with this method, as reported by different laboratories in several proficiency tests. The percentage of satisfactory results obtained with the discussed method is higher than that of results obtained with other analytical approaches.
Efficient Numerical Diagonalization of Hermitian 3 × 3 Matrices
NASA Astrophysics Data System (ADS)
Kopp, Joachim
A very common problem in science is the numerical diagonalization of symmetric or Hermitian 3 × 3 matrices. Since standard "black box" packages may be too inefficient if the number of matrices is large, we study several alternatives. We consider optimized implementations of the Jacobi, QL, and Cuppen algorithms and compare them with an analytical method relying on Cardano's formula for the eigenvalues and on vector cross products for the eigenvectors. Jacobi is the most accurate, but also the slowest method, while QL and Cuppen are good general purpose algorithms. The analytical algorithm outperforms the others by more than a factor of 2, but becomes inaccurate or may even fail completely if the matrix entries differ greatly in magnitude. This can mostly be circumvented by using a hybrid method, which falls back to QL if conditions are such that the analytical calculation might become too inaccurate. For all algorithms, we give an overview of the underlying mathematical ideas, and present detailed benchmark results. C and Fortran implementations of our code are available for download from .
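The analytical route via Cardano's formula can be sketched as follows. This is a minimal NumPy version for illustration (it leans on `np.linalg.det` for the 3×3 determinant, whereas the paper's optimized implementations expand everything by hand); it uses the standard trigonometric solution of the characteristic cubic.

```python
import numpy as np

def eigvals_sym3(A):
    """Eigenvalues (ascending) of a real symmetric 3x3 matrix via Cardano's
    trigonometric formula for the characteristic cubic."""
    p1 = A[0, 1]**2 + A[0, 2]**2 + A[1, 2]**2
    q = np.trace(A) / 3.0
    if p1 == 0.0:                          # A is already diagonal
        return np.sort(np.diag(A))
    p2 = (A[0, 0]-q)**2 + (A[1, 1]-q)**2 + (A[2, 2]-q)**2 + 2.0*p1
    p = np.sqrt(p2 / 6.0)
    B = (A - q*np.eye(3)) / p              # shifted, scaled matrix
    r = np.clip(np.linalg.det(B) / 2.0, -1.0, 1.0)
    phi = np.arccos(r) / 3.0               # phi in [0, pi/3]
    lam_max = q + 2.0*p*np.cos(phi)
    lam_min = q + 2.0*p*np.cos(phi + 2.0*np.pi/3.0)
    lam_mid = 3.0*q - lam_max - lam_min    # trace identity
    return np.array([lam_min, lam_mid, lam_max])

B = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, -0.2],
              [0.5, -0.2, 1.5]])
lam = eigvals_sym3(B)
```

As the abstract warns, this analytical path loses accuracy when matrix entries differ greatly in magnitude, which is precisely what motivates the hybrid fallback to QL.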
Novel opportunities and challenges offered by nanobiomaterials in tissue engineering
Gelain, Fabrizio
2008-01-01
Over the last decades, tissue engineering has demonstrated an unquestionable potential to regenerate damaged tissues and organs. Some tissue-engineered solutions recently entered the clinics (eg, artificial bladder, corneal epithelium, engineered skin), but most of the pathologies of interest are still far from being solved. The advent of stem cells opened the door to large-scale production of “raw living matter” for cell replacement and boosted the overall sector in the last decade. Still, reliable synthetic scaffolds that closely resemble the nanostructure of extracellular matrices, show mechanical properties comparable to those of the tissues to be regenerated, and can be modularly functionalized with biologically active motifs became feasible only in recent years, thanks to newly introduced nanotechnology techniques of material design, synthesis, and characterization. Nanostructured synthetic matrices look to be the next-generation scaffolds, opening new powerful pathways for tissue regeneration and introducing new challenges at the same time. We here present a detailed overview of the advantages, applications, and limitations of nanostructured matrices with a focus on both electrospun and self-assembling scaffolds. PMID:19337410
Leaching of heavy metals from cementitious composites made of new ternary cements
NASA Astrophysics Data System (ADS)
Kuterasińska-Warwas, Justyna; Król, Anna
2017-10-01
The paper presents a comparison of research methods concerning the leaching of harmful substances (selected heavy metal cations, i.e. Pb, Cu, Zn and Cr) and their degree of immobilization in cement matrices. New types of ternary cements were used in the study, in which a large proportion of cement clinker was replaced by non-clinker components: industrial wastes, i.e. siliceous fly ash from the power industry and granulated blast furnace slag from the iron and steel industry. Ground limestone, a widely available raw material, was also used in the studied cementitious binders. The aim of the research is to determine the suitability of the new cements for neutralizing harmful substances in the obtained matrices. The application of two research methods, in accordance with EN 12457-4 and NEN 7275, is intended to reflect the changing environmental conditions that composite materials may actually undergo during their service life or storage in landfills. The results show that cements with a high addition of non-clinker components are suitable for the stabilization of toxic substances, and the obtained cement matrices retain a high degree of immobilization of heavy metals, at the level of 99%.
Vivant, Anne-Laure; Desneux, Jeremy; Pourcher, Anne-Marie; Piveteau, Pascal
2017-01-01
Understanding how Listeria monocytogenes, the causative agent of listeriosis, adapts to the environment is crucial. Adaptation to new matrices requires regulation of gene expression. To determine how the pathogen adapts to lagoon effluent and soil, two matrices where L. monocytogenes has been isolated, we compared the transcriptomes of L. monocytogenes CIP 110868 20 min and 24 h after its transfer to effluent and soil extract. Results showed major variations in the transcriptome of L. monocytogenes in the lagoon effluent but only minor modifications in the soil. In both the lagoon effluent and in the soil, genes involved in mobility and chemotaxis and in the transport of carbohydrates were the most frequently represented in the set of genes with higher transcript levels, and genes with phage-related functions were the most represented in the set of genes with lower transcript levels. A modification of the cell envelope was only found in the lagoon environment. Finally, the differential analysis included a large proportion of regulators, regulons, and ncRNAs. PMID:29018416
Neng, N R; Nogueira, J M F
2012-01-01
The combination of bar adsorptive micro-extraction using activated carbon (AC) and polystyrene-divinylbenzene copolymer (PS-DVB) sorbent phases, followed by liquid desorption and large-volume injection gas chromatography coupled to mass spectrometry, under selected ion monitoring mode acquisition, was developed for the first time to monitor pharmaceutical and personal care products (PPCPs) in environmental water matrices. Assays performed on 25 mL water samples spiked (100 ng L(-1)) with caffeine, gemfibrozil, triclosan, propranolol, carbamazepine and diazepam, selected as model compounds, yielded recoveries ranging from 74% to 99% under optimised experimental conditions (equilibrium time, 16 h (1,000 rpm); matrix characteristics: pH 5, 5% NaCl for AC phase; LD: methanol/acetonitrile (1:1), 45 min). The analytical performance showed good precision (RSD < 18%), convenient detection limits (5-20 ng L(-1)) and excellent linear dynamic range (20-800 ng L(-1)) with remarkable determination coefficients (r(2) > 0.99), where the PS-DVB sorbent phase showed a much better efficiency. By using the standard addition methodology, the application of the present analytical approach on tap, ground, sea, estuary and wastewater samples allowed very good performance at the trace level. The proposed method proved to be a suitable sorption-based micro-extraction alternative for the analysis of priority pollutants with medium-polar to polar characteristics, showing to be easy to implement, reliable, sensitive and requiring a low sample volume to monitor PPCPs in water matrices.
Carbon nanotubes (CNTs) have been incorporated into numerous consumer products, and have also been employed in various industrial areas because of their extraordinary properties. The large scale production and wide applications of CNTs make their release into the environment a ma...
NASA Astrophysics Data System (ADS)
Li, Qiang; Zhang, Ying; Lin, Jingran; Wu, Sissi Xiaoxiao
2017-09-01
Consider a full-duplex (FD) bidirectional secure communication system, where two communication nodes, named Alice and Bob, simultaneously transmit and receive confidential information from each other, and an eavesdropper, named Eve, overhears the transmissions. Our goal is to maximize the sum secrecy rate (SSR) of the bidirectional transmissions by optimizing the transmit covariance matrices at Alice and Bob. To tackle this SSR maximization (SSRM) problem, we develop an alternating difference-of-concave (ADC) programming approach to alternately optimize the transmit covariance matrices at Alice and Bob. We show that the ADC iteration has a semi-closed-form beamforming solution, and is guaranteed to converge to a stationary solution of the SSRM problem. Besides the SSRM design, this paper also deals with a robust SSRM transmit design under a moment-based random channel state information (CSI) model, where only some roughly estimated first- and second-order statistics of Eve's CSI are available, but the exact distribution and higher-order statistics are not known. This moment-based error model is new and different from the widely used bounded-sphere error model and the Gaussian random error model. Under the considered CSI error model, the robust SSRM is formulated as an outage probability-constrained SSRM problem. By leveraging the Lagrangian duality theory and DC programming, a tractable safe solution to the robust SSRM problem is derived. The effectiveness and the robustness of the proposed designs are demonstrated through simulations.
Constructing acoustic timefronts using random matrix theory.
Hegewisch, Katherine C; Tomsovic, Steven
2013-10-01
In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history.
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve the efficiency of schemes such as random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while sampled at 1 GHz physically. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
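A sketch of how such a block measurement matrix might be assembled: each row maps the fine equivalent-time grid to one physical sample instant through Whittaker-Shannon (sinc) interpolation, and the per-run blocks are stacked into one equivalent measurement matrix. The sizes, rates, and offsets below are illustrative stand-ins, not the paper's hardware parameters.

```python
import numpy as np

def block_matrix(t_samples, N, fs_eq):
    """Rows map the length-N equivalent-time grid (rate fs_eq) onto the
    physical sample instants t_samples via Whittaker-Shannon interpolation:
    x(t) = sum_j x[j] * sinc(fs_eq * t - j)."""
    j = np.arange(N)
    return np.sinc(fs_eq * t_samples[:, None] - j[None, :])

# Hypothetical numbers: 4 acquisition runs of 16 samples at a normalized
# physical rate of 1, reconstructed on an equivalent grid of N = 256 points
# at 8x the physical rate.
rng = np.random.default_rng(1)
fs, fs_eq, M, N = 1.0, 8.0, 16, 256
blocks = []
for _ in range(4):
    t0 = rng.uniform(0.0, 1.0 / fs_eq)    # random trigger offset of this run
    t = t0 + np.arange(M) / fs            # sample instants of the sequence
    blocks.append(block_matrix(t, N, fs_eq))
Phi = np.vstack(blocks)                   # equivalent measurement matrix
```

When a sample instant falls exactly on an equivalent-grid point, the corresponding row reduces to a unit vector, as expected of an interpolation kernel.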
Hestand, Matthew S; van Galen, Michiel; Villerius, Michel P; van Ommen, Gert-Jan B; den Dunnen, Johan T; 't Hoen, Peter AC
2008-01-01
Background The identification of transcription factor binding sites is difficult since they are only a small number of nucleotides in size, resulting in large numbers of false positives and false negatives in current approaches. Computational methods to reduce false positives are to look for over-representation of transcription factor binding sites in a set of similarly regulated promoters or to look for conservation in orthologous promoter alignments. Results We have developed a novel tool, "CORE_TF" (Conserved and Over-REpresented Transcription Factor binding sites) that identifies common transcription factor binding sites in promoters of co-regulated genes. To improve upon existing binding site predictions, the tool searches for position weight matrices from the TRANSFAC database that are over-represented in an experimental set compared to a random set of promoters and identifies cross-species conservation of the predicted transcription factor binding sites. The algorithm has been evaluated with expression and chromatin-immunoprecipitation on microarray data. We also implement and demonstrate the importance of matching the random set of promoters to the experimental promoters by GC content, which is a unique feature of our tool. Conclusion The program CORE_TF is accessible in a user-friendly web interface at . It provides a table of over-represented transcription factor binding sites in the promoters of the user's input genes and a graphical view of evolutionarily conserved transcription factor binding sites. In our test data sets it successfully predicts target transcription factors and their binding sites. PMID:19036135
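The core position-weight-matrix operations behind such a tool can be sketched as follows. The matrix below is a toy 4-position example invented for illustration (real matrices come from TRANSFAC), and the over-representation step is reduced to comparing hit fractions between an experimental and a background promoter set; the actual tool also GC-matches the background set and adds conservation filtering.

```python
import numpy as np

# Hypothetical 4-position weight matrix (rows A, C, G, T), converted to
# log-odds scores against a uniform 0.25 background.
PWM = np.log2(np.array([
    [0.80, 0.10, 0.10, 0.70],   # A
    [0.05, 0.10, 0.10, 0.10],   # C
    [0.10, 0.70, 0.10, 0.10],   # G
    [0.05, 0.10, 0.70, 0.10],   # T
]) / 0.25)
IDX = {b: i for i, b in enumerate("ACGT")}

def best_score(seq, pwm):
    """Best log-odds match of the PWM over all windows of the sequence."""
    w = pwm.shape[1]
    return max(sum(pwm[IDX[seq[i + k]], k] for k in range(w))
               for i in range(len(seq) - w + 1))

def hit_fraction(promoters, pwm, threshold):
    """Fraction of promoters containing a site scoring above threshold."""
    return float(np.mean([best_score(p, pwm) >= threshold for p in promoters]))

# Over-representation: hit fraction in the experimental set versus a
# random background set (GC-matched in the real tool).
exp_frac = hit_fraction(["CCAGTACC", "TTAGTATT"], PWM, 5.0)
bg_frac = hit_fraction(["CCCCCCCC", "GGGGGGGG"], PWM, 5.0)
```

A motif would then be reported as over-represented when `exp_frac` significantly exceeds `bg_frac` under an appropriate statistical test.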
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1981-01-01
A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm, and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
Linear discriminant analysis with misallocation in training samples
NASA Technical Reports Server (NTRS)
Chhikara, R. (Principal Investigator); Mckeon, J.
1982-01-01
Linear discriminant analysis for a two-class case is studied in the presence of misallocation in training samples. A general approach to modeling misallocation is formulated, and the mean vectors and covariance matrices of the mixture distributions are derived. The asymptotic distribution of the discriminant boundary is obtained, and the asymptotic first two moments of the two types of error rate are given. Certain numerical results for the error rates are presented by considering the random and two non-random misallocation models. It is shown that when the allocation procedure for training samples is objectively formulated, the effect of misallocation on the error rates of the Bayes linear discriminant rule can almost be eliminated. If, however, this is not possible, the use of the Fisher rule may be preferred over the Bayes rule.
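The random-misallocation effect can be illustrated with a small simulation (hypothetical Gaussian classes, separation, and misallocation rate, not the paper's exact models): swapping a random fraction of training labels shifts both sample means symmetrically toward each other, so the Fisher boundary, and hence the error rate, is barely affected.

```python
import numpy as np

rng = np.random.default_rng(3)

def fisher_rule(X0, X1):
    """Fisher linear discriminant (pooled covariance) from training samples."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (len(X0) - 1) * np.cov(X0.T) + (len(X1) - 1) * np.cov(X1.T)
    S /= len(X0) + len(X1) - 2
    w = np.linalg.solve(S, m1 - m0)
    return w, w @ (m0 + m1) / 2.0          # direction and threshold

# Two Gaussian classes separated by delta; misallocate 15% of training labels
n, delta, mis = 200, 2.0, 0.15
X0 = rng.standard_normal((n, 2))
X1 = rng.standard_normal((n, 2)) + np.array([delta, 0.0])
swap = rng.random(n) < mis                 # random misallocation model
w, b = fisher_rule(np.vstack([X0[~swap], X1[swap]]),
                   np.vstack([X1[~swap], X0[swap]]))

# Error rate on fresh test data (class 1 side is w @ x > b); for delta = 2
# the Bayes error is about 0.16, and random swapping barely degrades it
T0 = rng.standard_normal((2000, 2))
T1 = rng.standard_normal((2000, 2)) + np.array([delta, 0.0])
err = 0.5 * (np.mean(T0 @ w > b) + np.mean(T1 @ w <= b))
```

Non-random (class-dependent or region-dependent) misallocation breaks this symmetry and is where the paper's corrections matter most.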
Building Enterprise Transition Plans Through the Development of Collapsing Design Structure Matrices
2015-09-17
…processes from the earliest input to the final output to evaluate where change is needed to reduce costs, reduce waste, and improve the flow of information… from) integrating a large complex enterprise? • How should firms/enterprises evaluate systems prior to integration? • What are some valid taxonomies…
Digital Maps, Matrices and Computer Algebra
ERIC Educational Resources Information Center
Knight, D. G.
2005-01-01
The way in which computer algebra systems, such as Maple, have made the study of complex problems accessible to undergraduate mathematicians with modest computational skills is illustrated by some large matrix calculations, which arise from representing the Earth's surface by digital elevation models. Such problems are often considered to lie in…
Modeling Noisy Data with Differential Equations Using Observed and Expected Matrices
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Boker, Steven M.
2010-01-01
Complex intraindividual variability observed in psychology may be well described using differential equations. It is difficult, however, to apply differential equation models in psychological contexts, as time series are frequently short, poorly sampled, and have large proportions of measurement and dynamic error. Furthermore, current methods for…
A new look at the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1994-01-01
The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As an optimizer, the code NPSOL was used, which is based on a sequential quadratic programming (SQP) algorithm. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of such standard FEM/optimization codes as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations. This approach adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparse matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses, although convergence became slower for the 72-bar truss.
When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
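The Kreisselmeier-Steinhauser aggregation used above can be sketched in one function. This is a generic illustration with an arbitrary draw-down parameter ρ and invented constraint values, not the report's tuning.

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_i <= 0.
    A smooth, conservative bound on the worst constraint:
        max(g) <= KS(g) <= max(g) + ln(len(g)) / rho,
    so one aggregated constraint (and one gradient) stands in for many.
    The shift by max(g) avoids overflow in the exponentials."""
    gmax = np.max(g)
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

g = np.array([-1.0, -0.5, 0.2])   # hypothetical normalized constraint values
ks = ks_aggregate(g)
```

Larger ρ tightens the bound but makes the aggregate stiffer and its gradient more ill-conditioned, a trade-off consistent with the reduced robustness the abstract reports.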
NASA Technical Reports Server (NTRS)
Brown, D. C.
1971-01-01
The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and solves such a system in a highly efficient manner. Numerical results suggest that when a well constructed spherical net is subjected to a rigorous, simultaneous adjustment, the use of independently established control points is neither required for determinacy nor for production of accurate results.
Neilson, Andrew P; George, Judy C; Janle, Elsa M; Mattes, Richard D; Rudolph, Ralf; Matusheski, Nathan V; Ferruzzi, Mario G
2009-10-28
Conflicting data exist regarding the influence of chocolate matrices on the bioavailability of epicatechin (EC) from cocoa. The objective of this study was to assess the bioavailability of EC from matrices varying in macronutrient composition and physical form. EC bioavailability was assessed from chocolate confections [reference dark chocolate (CDK), high sucrose (CHS), high milk protein (CMP)] and cocoa beverages [sucrose milk protein (BSMP), non-nutritive sweetener milk protein (BNMP)], in humans and in vitro. Six subjects consumed each product in a randomized crossover design, with serum EC concentrations monitored over 6 h post consumption. Areas under the serum concentration-time curve (AUC) were similar among chocolate matrices. However, AUCs were significantly increased for BSMP and BNMP (132 and 143 nM h) versus CMP (101 nM h). Peak serum concentrations (C(MAX)) were also increased for BSMP and BNMP (43 and 42 nM) compared to CDK and CMP (32 and 25 nM). Mean T(MAX) values were lower, although not statistically different, for beverages (0.9-1.1 h) versus confections (1.8-2.3 h), reflecting distinct shapes of the pharmacokinetic curves for beverages and confections. In vitro bioaccessibility and Caco-2 accumulation did not differ between treatments. These data suggest that bioavailability of cocoa flavan-3-ols is likely similar from typical commercial cocoa based foods and beverages, but that the physical form and sucrose content may influence T(MAX) and C(MAX).
NASA Astrophysics Data System (ADS)
Sparaciari, Carlo; Paris, Matteo G. A.
2013-01-01
We address measurement schemes where certain observables X_k are chosen at random within a set of nondegenerate isospectral observables and then measured on repeated preparations of a physical system. Each observable has a probability z_k to be measured, with ∑_k z_k = 1, and the statistics of this generalized measurement is described by a positive operator-valued measure. This kind of scheme is referred to as quantum roulettes, since each observable X_k is chosen at random, e.g., according to the fluctuating value of an external parameter. Here we focus on quantum roulettes for qubits involving the measurements of Pauli matrices, and we explicitly evaluate their canonical Naimark extensions, i.e., their implementation as indirect measurements involving an interaction scheme with a probe system. We thus provide a concrete model to realize the roulette without destroying the signal state, which can be measured again after the measurement or can be transmitted. Finally, we apply our results to the description of Stern-Gerlach-like experiments on a two-level system.
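The roulette's POVM can be written down directly: with probabilities z_k, the ± effects are the z_k-weighted mixtures of the spectral projectors of the chosen Pauli matrices. A minimal sketch (the z_k values below are arbitrary):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projectors(X):
    """Spectral projectors onto the +1 / -1 eigenspaces of a Pauli matrix."""
    I = np.eye(2, dtype=complex)
    return (I + X) / 2.0, (I - X) / 2.0

# Roulette over the three Pauli measurements with (arbitrary) weights z_k:
# the POVM effects are Pi_± = sum_k z_k P_k^±.
z = [0.5, 0.3, 0.2]
paulis = [sx, sy, sz]
Pi_plus = sum(zk * projectors(X)[0] for zk, X in zip(z, paulis))
Pi_minus = sum(zk * projectors(X)[1] for zk, X in zip(z, paulis))
```

For a state ρ the outcome probabilities are p_± = Tr(ρ Π_±); the canonical Naimark extension discussed in the abstract realizes this same POVM as a projective measurement on the system coupled to a probe.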
NASA Astrophysics Data System (ADS)
Torres-Herrera, E. J.; García-García, Antonio M.; Santos, Lea F.
2018-02-01
We study numerically and analytically the quench dynamics of isolated many-body quantum systems. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional-disordered model with two-body interactions and shown to bound the decay rate of this realistic system. Power-law decays are seen at intermediate times, and dips below the infinite time averages (correlation holes) occur at long times for all three quantities when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium. Assisted by the random matrix analytical results, we propose expressions that describe extremely well the dynamics of the realistic chaotic system at different time scales.
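The survival-probability calculation with full random matrices can be sketched as follows. Sizes, normalization, and time grid are chosen for illustration only and differ from the paper's; the initial state is taken as a single basis state.

```python
import numpy as np

rng = np.random.default_rng(2)

def goe(n):
    """One draw from the Gaussian orthogonal ensemble (real symmetric)."""
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2.0

def survival_probability(times, n=100, realizations=50):
    """Ensemble-averaged |<psi(0)| exp(-i H t) |psi(0)>|^2 for psi(0) = e_0."""
    sp = np.zeros(len(times))
    for _ in range(realizations):
        E, V = np.linalg.eigh(goe(n))
        c2 = np.abs(V[0, :]) ** 2            # |<E_a|psi(0)>|^2 overlaps
        amp = c2[None, :] * np.exp(-1j * np.outer(times, E))
        sp += np.abs(amp.sum(axis=1)) ** 2
    return sp / realizations

times = np.linspace(0.0, 5.0, 6)
sp = survival_probability(times, n=40, realizations=20)
```

Averaged over realizations, the curve starts at 1, decays on a time scale set by the width of the energy distribution, and dips below its infinite-time average at long times when level repulsion is present (the correlation hole of the abstract, which a plot over a longer grid would show).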
Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor
2011-09-01
Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into the chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach of involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples in 1-octanol. A volume of 75 µl from the octanol layer was directly injected on a Zorbax SB C18 Rapid Resolution, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size column, with the RP separation being carried out under gradient elution conditions. Detection was made through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess bioequivalence of a modified release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple doses administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.
Regulatory sequence analysis tools.
van Helden, Jacques
2003-07-01
The web resource Regulatory Sequence Analysis Tools (RSAT) (http://rsat.ulb.ac.be/rsat) offers a collection of software tools dedicated to the prediction of regulatory sites in non-coding DNA sequences. These tools include sequence retrieval, pattern discovery, pattern matching, genome-scale pattern matching, feature-map drawing, random sequence generation and other utilities. Alternative formats are supported for the representation of regulatory motifs (strings or position-specific scoring matrices) and several algorithms are proposed for pattern discovery. RSAT currently holds >100 fully sequenced genomes and these data are regularly updated from GenBank.
1999-01-01
distances and identities and Rogers' genetic distances were clustered by the unweighted pair group method using arithmetic averages (UPGMA) to produce… Seattle, WA) using the NEIGHBOR program with the UPGMA option, and a phenogram was produced with DRAWGRAM, also in PHYLIP 3.X. RAPDBOOT was used to… generate 100 pseudoreplicate distance matrices, which were collapsed to form 100 trees with UPGMA. The bootstrap consensus tree was derived from the 100
Chakrabarti, C G; Ghosh, Koyel
2013-10-01
In the present paper we have first introduced a measure of dynamical entropy of an ecosystem on the basis of the dynamical model of the system. The dynamical entropy which depends on the eigenvalues of the community matrix of the system leads to a consistent measure of complexity of the ecosystem to characterize the dynamical behaviours such as the stability, instability and periodicity around the stationary states of the system. We have illustrated the theory with some model ecosystems. Copyright © 2013 Elsevier Inc. All rights reserved.
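The link between the community matrix's eigenvalues and the dynamical behaviour around a stationary state can be illustrated with a toy two-species Jacobian (the numbers are invented for the example, not taken from the paper's model ecosystems):

```python
import numpy as np

# Hypothetical 2x2 community matrix (Jacobian at a stationary state); the
# paper's dynamical entropy and its stability diagnostics both hinge on
# the eigenvalues of this matrix.
J = np.array([[-0.5, -1.0],
              [ 0.8, -0.2]])
eig = np.linalg.eigvals(J)
stable = bool(np.all(eig.real < 0))       # asymptotically stable state
oscillatory = bool(np.any(eig.imag != 0)) # spiral (damped-oscillation) approach
```

Here both eigenvalues have negative real part and nonzero imaginary part, so trajectories spiral into the stationary state; a positive real part would signal instability, and purely imaginary eigenvalues would signal periodic behaviour.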
High-dimensional statistical inference: From vector to matrix
NASA Astrophysics Data System (ADS)
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields, including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, or δ_{tk}^A < √((t-1)/t) for any given constant t ≥ 4/3 guarantees the exact recovery of all k-sparse signals in the noiseless case through the constrained ℓ_1 minimization, and similarly in affine rank minimization δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, or δ_{tr}^M < √((t-1)/t) ensures the exact reconstruction of all matrices with rank at most r in the noiseless case via the constrained nuclear norm minimization. Moreover, for any ε > 0, δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t-1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t-1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t-1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to the estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. The current literature on matrix completion focuses primarily on independent sampling models, under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix is observed. We provide theoretical justification for the proposed SMC method and derive a lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations.
The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurement, enabling us to construct more accurate prediction rules for ovarian cancer survival.
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results revealed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems grew worse with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sums of squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
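For reference, the three criteria compared in the study take the following standard least-squares forms (generic textbook formulas, not the MRM-specific variants); with MRM the nominal sample size is the number of pairwise distances, N(N-1)/2 for N sites, which is the inflation the study identifies:

```python
# Standard least-squares forms of AIC, AICc, and BIC for a model with
# n observations, p regression parameters, and residual sum of squares rss.
import numpy as np

def aic(n, p, rss):
    k = p + 1                                  # +1 for the error variance
    return n * np.log(rss / n) + 2 * k

def aicc(n, p, rss):
    k = p + 1                                  # small-sample correction term added to AIC
    return aic(n, p, rss) + 2 * k * (k + 1) / (n - k - 1)

def bic(n, p, rss):
    k = p + 1                                  # log(n) penalty instead of 2
    return n * np.log(rss / n) + k * np.log(n)

n_pairs = 10 * 9 // 2                          # N = 10 sites -> 45 pairwise "observations"
```

Both corrections penalize more heavily than AIC here, yet, as the simulations show, none of the three ranks models reliably under MRM.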
NASA Astrophysics Data System (ADS)
Julaiti, Alafate; Wu, Bin; Zhang, Zhongzhi
2013-05-01
The eigenvalues of the normalized Laplacian matrix of a network play an important role in its structural and dynamical aspects associated with the network. In this paper, we study the spectra and their applications of normalized Laplacian matrices of a family of fractal trees and dendrimers modeled by Cayley trees, both of which are built in an iterative way. For the fractal trees, we apply the spectral decimation approach to determine analytically all the eigenvalues and their corresponding multiplicities, with the eigenvalues provided by a recursive relation governing the eigenvalues of networks at two successive generations. For Cayley trees, we show that all their eigenvalues can be obtained by computing the roots of several small-degree polynomials defined recursively. By using the relation between normalized Laplacian spectra and eigentime identity, we derive the explicit solution to the eigentime identity for random walks on the two treelike networks, the leading scalings of which follow quite different behaviors. In addition, we corroborate the obtained eigenvalues and their degeneracies through the link between them and the number of spanning trees.
NASA Astrophysics Data System (ADS)
Carstea, E.; Baker, A.; Johnson, R.; Reynolds, D. M.
2009-12-01
In-line fluorescence EEM monitoring was performed over an eleven-day period on the Bournbrook River, Birmingham, UK. River water was diverted to a portable laboratory via a continuous flow pump and filter system. Fluorescence excitation-emission matrix data were recorded every 3 minutes using a flow cell (1 cm pathlength) coupled to a fiber optic probe. This real-time fluorescence EEM data (excitation 225-400 nm in 5 nm steps; emission 280-500 nm in 2 nm steps) was collected 'in-line' and directly compared with the spectrophotometric properties and physical and chemical parameters of river water samples collected off-line at known time intervals. Over the monitoring period, minor pollution pulses from cross connections were detected and identified hourly, along with a random diesel pollution event. This work addresses the practicalities of measuring and detecting fluorescence EEMs in the field and discusses the potential of this technological approach for further understanding important hydrological and biogeochemical processes. Problems associated with fouling and system failure are also reported.
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built by rational Krylov model reduction and preserve the power flow information of the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via the message passing interface are proposed. The quality of the ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collection. Input and coupling power flows are computed for the original systems and the ROMs. Good agreement is observed in all cases.
Danko, Martin; Hrdlovič, Pavol; Kulhánek, Jiří; Bureš, Filip
2011-07-01
Spectral properties of a novel type of fluorophore, consisting of a π-conjugated system end-capped with an electron-donating N,N-dimethylaminophenyl group and an electron-withdrawing imidazole-4,5-dicarbonitrile moiety, were examined. An additional π-linker separating these two structural units comprises a single bond (B1P), phenyl (B2B), styryl (B3S) or ethynylphenyl (B4A) moiety. The absorption and fluorescence spectra were taken in cyclohexane, chloroform, acetonitrile, methanol and in polymer matrices such as polystyrene, poly(methyl methacrylate) and poly(vinyl chloride). The longest-wavelength absorption band was observed in the range of 300 to 400 nm. Intense fluorescence with quantum yields of 0.2 to 1.0 was observed in cyclohexane, chloroform and in polymer matrices within the range of 380 to 500 nm. The fluorescence was strongly quenched in neat acetonitrile and methanol. The fluorescence lifetimes are in the range of 1-4 ns for all measured fluorophores. The large Stokes shift (4,000 to 8,000 cm^-1) indicates a large difference in the spatial arrangement of the chromophore in the absorbing and the emitting states. The observed fluorescence of all fluorophores in chloroform was quenched by 1-oxo-2,2,6,6-tetramethyl-4-hydroxypiperidine at the diffusion-controlled bimolecular rate (ca. 2 × 10^10 L mol^-1 s^-1). Polar solvents such as acetonitrile and methanol quenched the fluorescence as well, but probably via a different mechanism. © Springer Science+Business Media, LLC 2011
In Vitro, Matrix-Free Formation Of Solid Tumor Spheroids
NASA Technical Reports Server (NTRS)
Gonda, Steve R.; Marley, Garry M.
1993-01-01
Clinostatic bioreactor promotes formation of relatively large solid tumor spheroids exhibiting diameters from 750 to 2,100 micrometers. Process useful in studying efficacy of chemotherapeutic agents and interactions between cells not constrained by solid matrices. Two versions have been demonstrated: one for anchorage-independent cells and one for anchorage-dependent cells.
ERIC Educational Resources Information Center
Page, Ellis B.; Jarjoura, David
1979-01-01
A computer scan of ACT Assessment records identified 3,427 sets of twins. The Hardy-Weinberg rule was used to estimate the proportion of monozygotic twins in the sample. Matrices of genetic and environmental influences were produced. The heaviest loadings were clearly in the genetic matrix. (SJL)
Sparse matrix methods based on orthogonality and conjugacy
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1973-01-01
A matrix having a high percentage of zero elements is called sparse. In the solution of systems of linear equations or linear least squares problems involving large sparse matrices, significant savings in computer cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
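A minimal sketch of exploiting sparsity with the conjugate gradient algorithm, using a tridiagonal model problem (an assumed example, not from the report):

```python
# Conjugate gradient on a sparse symmetric positive-definite system.
# Only the ~3n nonzeros of the tridiagonal matrix are stored and touched
# per iteration, which is where the cost saving over dense storage comes from.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 100
# 1-D Laplacian: tridiagonal with 2 on the diagonal and -1 off-diagonal.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = cg(A, b)          # info == 0 signals convergence
```

Each CG iteration needs only one sparse matrix-vector product plus a few vector operations, so memory and time scale with the number of nonzeros rather than n^2.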
The two predominant pathways to arsenic exposure are drinking water and dietary ingestion. A large percentage of the dietary exposure component is associated with a few food groups. For example, seafood alone represents over 50% of the total dietary exposure. From a daily dose...
Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model
NASA Astrophysics Data System (ADS)
Ito, Yuta; Nishimura, Jun; Tsuchiya, Asato
2015-11-01
Recent studies on the Lorentzian version of the type IIB matrix model show that a (3+1)-dimensional expanding universe emerges dynamically from the (9+1)-dimensional space-time predicted by superstring theory. Here we study a bosonic matrix model obtained by omitting the fermionic matrices. With the adopted simplification and the use of a large-scale parallel computer, we are able to perform Monte Carlo calculations with matrix sizes up to N = 512, which is twenty times larger than that used previously for studies of the original model. When the matrix size is larger than some critical value N_c ≃ 110, we find that a (3+1)-dimensional expanding universe emerges dynamically with a clear large-N scaling property. Furthermore, the observed increase of the spatial extent with time t at sufficiently late times is consistent with a power-law behavior t^(1/2), which is reminiscent of the expanding behavior of the Friedmann-Robertson-Walker universe in the radiation-dominated era. We discuss possible implications of this result for the original supersymmetric model including fermionic matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de
2016-11-15
We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
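The paper expands window functions in Chebyshev polynomials; the sketch below shows only the underlying building block, the three-term recurrence applied to matrix-vector products, which keeps eigencomponents inside a mapped interval bounded while amplifying those outside. The diagonal matrix, interval, and degree are toy assumptions:

```python
# Three-term Chebyshev recurrence T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x),
# applied to matrix-vector products with the spectrum interval [lo, hi]
# mapped onto [-1, 1]. |T_k| <= 1 inside the interval; outside it grows
# rapidly with the degree, so repeated filtering separates eigenspaces.
import numpy as np

def chebyshev_filter(A, v, degree, lo, hi):
    c = (hi + lo) / 2.0                 # center of the mapped interval
    e = (hi - lo) / 2.0                 # half-width
    y_prev = v                          # T_0((A - cI)/e) v
    y = (A @ v - c * v) / e             # T_1((A - cI)/e) v
    for _ in range(2, degree + 1):
        y_next = 2.0 * (A @ y - c * y) / e - y_prev
        y_prev, y = y, y_next
    return y

# Toy diagonal matrix: eigenvalues 0 and 0.5 lie inside [0, 1], 2.0 outside.
A = np.diag([0.0, 0.5, 2.0])
y = chebyshev_filter(A, np.ones(3), degree=20, lo=0.0, hi=1.0)
```

Only matrix-vector products appear, which is exactly why the approach avoids the matrix inversions required by rational function methods.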
Rhodes, Eric R; Huff, Emma M; Hamilton, Douglas W; Jones, Jenifer L
2016-02-01
The collection of waterborne pathogen occurrence data often requires the concentration of microbes from large volumes of water due to the low number of microorganisms that are typically present in environmental and drinking waters. Hollow-fiber ultrafiltration (HFUF) has shown promise in the recovery of various microorganisms. This study has demonstrated that the HFUF primary concentration method is effective at recovering bacteriophage φX174, poliovirus, enterovirus 70, echovirus 7, coxsackievirus B4 and adenovirus 41 from large volumes of tap and river water with an average recovery of all viruses of 73.4% and 81.0%, respectively. This study also evaluated an effective secondary concentration method using celite for the recovery of bacteriophage and enteric viruses tested from HFUF concentrates of both matrices. Overall, the complete concentration method (HFUF primary concentration plus celite secondary concentration) resulted in a concentration factor of 3333 and average recoveries for all viruses from tap and river waters of 60.6% and 60.0%, respectively. Published by Elsevier B.V.
Picanço, A P; Vallero, M V; Gianotti, E P; Zaiat, M; Blundi, C E
2001-01-01
This paper reports on the influence of material porosity on anaerobic biomass adhesion on four different inert matrices: polyurethane foam, PVC, refractory brick and special ceramic. The biofilm development was performed in a fixed-bed anaerobic reactor containing all the support materials and fed with a synthetic wastewater containing protein, lipids and carbohydrates. The data obtained from microscopic analysis and kinetic assays indicated that material porosity is of crucial importance in the retention of the anaerobic biomass. The polyurethane foam particles and the special ceramic were found to present better retentive properties than the PVC and the refractory brick. The large specific surface area, directly related to material porosity, is fundamental to provide a large amount of attached biomass. However, different supports can provide specific conditions for the adherence of distinct microorganism types. The microbiological exams revealed a distinction in the support colonization. A predominance of methanogenic archaea resembling Methanosaeta was observed both in the refractory brick and the special ceramic. Methanosarcina-like microorganisms were predominant in the PVC and the polyurethane foam matrices.
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.
Mechanical critical phenomena and the elastic response of fiber networks
NASA Astrophysics Data System (ADS)
Mackintosh, Fred
The mechanics of cells and tissues are largely governed by scaffolds of filamentous proteins that make up the cytoskeleton, as well as extracellular matrices. Evidence is emerging that such networks can exhibit rich mechanical phase behavior. A classic example of a mechanical phase transition was identified by Maxwell for macroscopic engineering structures: networks of struts or springs exhibit a continuous, second-order phase transition at the isostatic point, where the number of constraints imposed by connectivity just equals the number of mechanical degrees of freedom. We present recent theoretical predictions and experimental evidence for mechanical phase transitions in both synthetic and biopolymer networks. We show, in particular, excellent quantitative agreement between the mechanics of collagen matrices and the predictions of a strain-controlled phase transition in sub-isostatic networks.
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Perez, Danny
2018-05-01
A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1991-01-01
A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
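The core linear-sensitivity step the abstract describes can be sketched with an SVD-based pseudoinverse solve; the sensitivity matrix and discrepancy vector below are hypothetical toy numbers, not the airframe model's:

```python
# Linear sensitivity relation dm ≈ S dp between changes in selected
# system-matrix entries (dm) and changes in physical parameters (dp),
# solved in the least-squares sense via the SVD-based pseudoinverse,
# echoing the singular value decomposition step in the method.
import numpy as np

S = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])       # hypothetical sensitivities (3 entries, 2 parameters)
dm = np.array([2.0, 1.0, 2.0])   # measured-minus-model matrix discrepancies
dp = np.linalg.pinv(S) @ dm      # least-squares parameter update
```

In the actual method this solve would sit inside a constrained optimization loop, with the updated parameters fed back into the finite element model.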
A performance study of sparse Cholesky factorization on INTEL iPSC/860
NASA Technical Reports Server (NTRS)
Zubair, M.; Ghose, M.
1992-01-01
The problem of Cholesky factorization of a sparse matrix has been very well investigated on sequential machines. A number of efficient codes exist for factorizing large unstructured sparse matrices. However, there is a lack of such efficient codes on parallel machines in general, and distributed machines in particular. Some of the issues that are critical to the implementation of sparse Cholesky factorization on a distributed memory parallel machine are ordering, partitioning and mapping, load balancing, and ordering of various tasks within a processor. Here, we focus on the effect of various partitioning schemes on the performance of sparse Cholesky factorization on the Intel iPSC/860. Also, a new partitioning heuristic for structured as well as unstructured sparse matrices is proposed, and its performance is compared with other schemes.
Kellie, John F; Kehler, Jonathan R; Karlinsey, Molly Z; Summerfield, Scott G
2017-12-01
Typically, quantitation of biotherapeutics from biological matrices by LC-MS is based on a surrogate peptide approach to determine molecule concentration. Recent efforts have focused on quantitation of the intact protein molecules or larger mass subunits of monoclonal antibodies. To date, there has been limited guidance for large or intact protein mass quantitation in quantitative bioanalysis. Intact- and subunit-level analyses of biotherapeutics from biological matrices are performed in the 12-25 kDa mass range, with quantitation data presented. Linearity, bias and other metrics are presented, along with recommendations on the viability of existing quantitation approaches. This communication is intended to start a discussion around intact protein data analysis and processing, recognizing that other published contributions will be required.
GPU-accelerated element-free reverse-time migration with Gauss points partition
NASA Astrophysics Data System (ADS)
Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong
2018-06-01
An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
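The compressed sparse row storage and lumped mass matrix the abstract mentions can be illustrated on a small matrix (values assumed for illustration):

```python
# CSR keeps only the nonzero values (data) plus column indices and row
# pointers (indices, indptr); the lumped mass matrix replaces each row
# of the consistent mass matrix by its row sum on the diagonal.
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[4.0, 0.0, 0.0, 1.0],
                  [0.0, 3.0, 0.0, 0.0],
                  [0.0, 0.0, 5.0, 2.0],
                  [1.0, 0.0, 2.0, 6.0]])
A = csr_matrix(dense)                                # 8 stored values instead of 16
lumped = np.diag(np.asarray(A.sum(axis=1)).ravel())  # lumped (diagonal) mass matrix
```

Lumping makes the mass matrix diagonal, so "solving" with it reduces to element-wise division, which is one reason it improves efficiency on the GPU.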
Sun, Mingyun; Lin, Jennifer S.
2012-01-01
Double-stranded (ds) DNA fragments over a wide size range were successfully separated in blended polymer matrices by microfluidic chip electrophoresis. Novel blended polymer matrices composed of two types of polymers with three different molar masses were developed to provide improved separations of large dsDNA without negatively impacting the separation of small dsDNA. Hydroxyethyl celluloses (HECs) with average molar masses of ~27 kDa and ~1 MDa were blended with a second class of polymer, high-molar mass (~7 MDa) linear polyacrylamide (LPA). Fast and highly efficient separations of commercially available DNA ladders were achieved on a borosilicate glass microchip. A distinct separation of a 1 Kb DNA extension ladder (200 bp to 40,000 bp) was completed in 2 minutes. An orthogonal Design of Experiments (DOE) was used to optimize experimental parameters for DNA separations over a wide size range. We find that the two dominant factors are the applied electric field strength and the inclusion of a high concentration of low-molar mass polymer in the matrix solution. These two factors exerted different effects on the separations of small dsDNA fragments below 1 kbp, medium dsDNA fragments between 1 kbp and 10 kbp, and large dsDNA fragments above 10 kbp. PMID:22009451
Spectra of random networks in the weak clustering regime
NASA Astrophysics Data System (ADS)
Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen; Rodrigues, Francisco A.
2018-03-01
The asymptotic behavior of dynamical processes in networks can be expressed as a function of spectral properties of the corresponding adjacency and Laplacian matrices. Although many theoretical results are known for the spectra of traditional configuration models, networks generated through these models fail to describe many topological features of real-world networks, in particular non-null values of the clustering coefficient. Here we study the effects of cycles of order three (triangles) on network spectra. By using recent advances in random matrix theory, we determine the spectral distribution of the network adjacency matrix as a function of the average number of triangles attached to each node for networks without modular structure and degree-degree correlations. Implications for network dynamics are discussed. Our findings can shed light on how particular kinds of subgraphs influence network dynamics.
NASA Astrophysics Data System (ADS)
Yamamoto, Takuya; Nishigaki, Shinsuke M.
2018-02-01
We compute individual distributions of low-lying eigenvalues of a chiral random matrix ensemble interpolating symplectic and unitary symmetry classes by the Nyström-type method of evaluating the Fredholm Pfaffian and resolvents of the quaternion kernel. The one-parameter family of these distributions is shown to fit excellently the Dirac spectra of SU(2) lattice gauge theory with a constant U(1) background or dynamically fluctuating U(1) gauge field, which weakly breaks the pseudoreality of the unperturbed SU(2) Dirac operator. The observed linear dependence of the crossover parameter with the strength of the U(1) perturbations leads to precise determination of the pseudo-scalar decay constant, as well as the chiral condensate in the effective chiral Lagrangian of the AI class.
Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian
2017-04-11
A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented that is competitive with canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems, drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, which are independently useful: first, a Cholesky decomposition of density matrices that reduces the scaling with basis set size for a fixed-size molecule by one order, leading to massive performance improvements; second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.
A Brief Historical Introduction to Matrices and Their Applications
ERIC Educational Resources Information Center
Debnath, L.
2014-01-01
This paper deals with the ancient origin of matrices, and the system of linear equations. Included are algebraic properties of matrices, determinants, linear transformations, and Cramer's Rule for solving the system of algebraic equations. Special attention is given to some special matrices, including matrices in graph theory and electrical…
Laminin active peptide/agarose matrices as multifunctional biomaterials for tissue engineering.
Yamada, Yuji; Hozumi, Kentaro; Aso, Akihiro; Hotta, Atsushi; Toma, Kazunori; Katagiri, Fumihiko; Kikkawa, Yamato; Nomizu, Motoyoshi
2012-06-01
Cell adhesive peptides derived from extracellular matrix components are potential candidates to afford bio-adhesiveness to cell culture scaffolds for tissue engineering. Previously, we covalently conjugated bioactive laminin peptides to polysaccharides, such as chitosan and alginate, and demonstrated their advantages as biomaterials. Here, we prepared functional polysaccharide matrices by mixing laminin active peptides and agarose gel. Several laminin peptide/agarose matrices showed cell attachment activity. In particular, peptide AG73 (RKRLQVQLSIRT)/agarose matrices promoted strong cell attachment and the cell behavior depended on the stiffness of agarose matrices. Fibroblasts formed spheroid structures on the soft AG73/agarose matrices while the cells formed a monolayer with elongated morphologies on the stiff matrices. On the stiff AG73/agarose matrices, neuronal cells extended neuritic processes and endothelial cells formed capillary-like networks. In addition, salivary gland cells formed acini-like structures on the soft matrices. These results suggest that the peptide/agarose matrices are useful for both two- and three-dimensional cell culture systems as a multifunctional biomaterial for tissue engineering. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Tabu-Search Heuristic for Deterministic Two-Mode Blockmodeling of Binary Network Matrices.
Brusco, Michael; Steinley, Douglas
2011-10-01
Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where the goal is to identify partitions for the row and column objects such that the clusters of the row and column objects form blocks that are either complete (all 1s) or null (all 0s) to the greatest extent possible. Multiple restarts of an object relocation heuristic that seeks to minimize the number of inconsistencies (i.e., 1s in null blocks and 0s in complete blocks) with ideal block structure is the predominant approach for tackling this problem. As an alternative, we propose a fast and effective implementation of tabu search. Computational comparisons across a set of 48 large network matrices revealed that the new tabu-search heuristic always provided objective function values that were better than those of the relocation heuristic when the two methods were constrained to the same amount of computation time.
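The objective that both the relocation heuristic and the tabu search minimize, 1s in null blocks plus 0s in complete blocks, can be sketched as follows; the matrix and partition are hypothetical toy data:

```python
# Count inconsistencies with ideal two-mode block structure: each block
# induced by the row and column partitions is scored against the closer
# of the two ideal types (complete = all 1s, null = all 0s).
import numpy as np

def inconsistencies(X, row_labels, col_labels):
    total = 0
    for r in np.unique(row_labels):
        for c in np.unique(col_labels):
            block = X[np.ix_(row_labels == r, col_labels == c)]
            ones = int(block.sum())
            total += min(ones, block.size - ones)  # cost of calling it null vs complete
    return total

# Toy 4x4 two-mode matrix with a perfect 2x2 block structure.
X = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
rows = np.array([0, 0, 1, 1])
cols = np.array([0, 0, 1, 1])
```

Both heuristics search over the row and column label vectors; tabu search additionally forbids recently reversed relocations so it can escape local minima of this count.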
Lu, Shuangzan; Huang, Min; Qin, Zhihui; Yu, Yinghui; Guo, Qinmin; Cao, Gengyu
2018-08-03
Molecular rotors, motors and gears play important roles in artificial molecular machines, in which rotor and motor matrices are highly desirable for large-scale bottom-up fabrication of molecular machines. Here we demonstrate the fabrication of a highly ordered molecular rotor matrix by depositing nonplanar dipolar titanyl phthalocyanine (TiOPc, C 32 H 16 N 8 OTi) molecules on a Moiré patterned dipolar FeO/Pt(111) substrate. TiOPc molecules with O atoms pointing outwards from the substrate (upward) or towards the substrate (downward) are alternatively adsorbed on the fcc sites by strong lateral confinement. The adsorbed molecules, i.e. two kinds of molecular rotors, show different scanning tunneling microscopy images, thermal stabilities and rotational characteristics. Density functional theory calculations clarify that TiOPc molecules anchoring upwards with high adsorption energies correspond to low-rotational-rate rotors, while those anchoring downwards with low adsorption energies correspond to high-rotational-rate rotors. A robust rotor matrix fully occupied by low-rate rotors is fabricated by depositing molecules on the substrate at elevated temperature. Such a paradigm opens up a promising route to fabricate functional molecular rotor matrices, driven motor matrices and even gear groups on solid substrates.
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems, typically arising from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take mechanical constraints into account. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable in the general case, not only to matrices with a saddle point structure. In this approach, we update an existing algebraic or application-based preconditioner using specific available information, exploiting the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited-memory quasi-Newton matrix and requires only a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
Application of Semi-Definite Programming for Many-Fermion Systems
NASA Astrophysics Data System (ADS)
Zhao, Zhengji; Braams, Bastiaan; Fukuda, Mituhiro; Overton, Michael
2003-03-01
The ground state energy and other important observables of a many-fermion system with one- and two-body interactions only can all be obtained from the first-order and second-order Reduced Density Matrices (RDMs) of the system. Using these density matrices and a family of associated representability conditions, one may obtain an approximation method for electronic structure theory that takes the mathematical form of Semi-Definite Programming (SDP): minimize a linear matrix functional over a space of positive semidefinite matrices subject to linear constraints. The representability conditions are some known necessary conditions, starting with the well-known P, Q, and G conditions [Claude Garrod and Jerome K. Percus, Reduction of the N-Particle Variational Problem, J. Math. Phys. 5 (1964) 1756-1776]. The RDM method with SDP has great potential advantages over the wave function method when the particle number N is large. The dimension of the full configuration space increases exponentially with N, but in the RDM method with SDP the dimension of the objective matrix (which includes the RDMs) increases only polynomially with N. We will report on the effect of adding the generalized three-index conditions proposed in [R. M. Erdahl, Representability, Int. J. Quantum Chem. 13 (1978) 697-718].
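The SDP feasible set is cut out by exactly such positivity constraints. As a minimal illustrative sketch (the function name and toy matrix are our own; the P, Q, and G conditions proper act on the two-body RDM and are more involved), the simplest representability check for a one-body RDM is that its occupation numbers lie in [0, 1] and sum to the particle number N:

```python
import numpy as np

def is_representable_1rdm(gamma, n_particles, tol=1e-10):
    """Check ensemble N-representability of a one-body reduced density
    matrix: gamma >= 0 and I - gamma >= 0 (all occupation numbers in
    [0, 1]), with trace equal to the particle number."""
    occ = np.linalg.eigvalsh(gamma)  # gamma is Hermitian; occupations
    return bool(occ.min() >= -tol
                and occ.max() <= 1 + tol
                and abs(np.trace(gamma) - n_particles) < tol)

gamma_ok = np.diag([1.0, 1.0, 0.5, 0.5])        # 3 particles in 4 orbitals
ok = is_representable_1rdm(gamma_ok, 3)
gamma_bad = np.diag([1.5, 1.0, 0.5, 0.0])       # occupation > 1: infeasible
bad = is_representable_1rdm(gamma_bad, 3)
```

An SDP solver then minimizes the linear energy functional over matrices satisfying such linear and semidefinite constraints.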
Effect of matrix composition and process conditions on casein-gelatin beads floating properties.
Bulgarelli, E; Forni, F; Bernabei, M T
2000-04-05
Casein-gelatin beads have been prepared by an emulsification-extraction method and cross-linked with D,L-glyceraldehyde in a 3:1 (v/v) acetone-water mixture. The emulsifying properties of casein cause air bubble incorporation and the formation of large holes in the beads. The high porosity of the matrix influences bead properties such as drug loading, drug release and flotation. These effects have been highlighted by comparison with low-porosity beads, artificially prepared without cavities. Increasing the percentage of casein in the matrix increases the drug loading of both low- and high-porosity matrices, although the loading of high-porosity matrices is lower than that of low-porosity matrices. In fact, the drug should be more easily removed during washing and recovery because of the higher superficial pore area of the beads. This can explain the increase in drug release rate observed in the high-porosity matrix, in comparison with beads without cavities, due to the rapid diffusion of the drug through water-filled pores. The study shows that the cavities act as an air reservoir and enable the beads to float. Therefore, casein seems to be a material suitable for the inexpensive formation of an air reservoir in floating systems.
Instanton approach to large N Harish-Chandra-Itzykson-Zuber integrals.
Bun, J; Bouchaud, J P; Majumdar, S N; Potters, M
2014-08-15
We reconsider the large N asymptotics of Harish-Chandra-Itzykson-Zuber integrals. We provide, using Dyson's Brownian motion and the method of instantons, an alternative, transparent derivation of the Matytsin formalism for the unitary case. Our method is easily generalized to the orthogonal and symplectic ensembles. We obtain an explicit solution of Matytsin's equations in the case of Wigner matrices, as well as a general expansion method in the dilute limit, when the spectrum of eigenvalues spreads over very wide regions.
Sequences Of Amino Acids For Human Serum Albumin
NASA Technical Reports Server (NTRS)
Carter, Daniel C.
1992-01-01
Sequences of amino acids defined for use in making polypeptides one-third to one-sixth as large as parent human serum albumin molecule. Smaller, chemically stable peptides have diverse applications including service as artificial human serum and as active components of biosensors and chromatographic matrices. In applications involving production of artificial sera from new sequences, little or no concern about viral contaminants. Smaller genetically engineered polypeptides more easily expressed and produced in large quantities, making commercial isolation and production more feasible and profitable.
Entropy production rate as a criterion for inconsistency in decision theory
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-05-01
Individual and group decisions are complex, often involving choosing an apt alternative from a multitude of options. Evaluating pairwise comparisons breaks down such complex decision problems into tractable ones. Pairwise comparison matrices (PCMs) are regularly used to solve multiple-criteria decision-making problems, for example, using Saaty’s analytic hierarchy process (AHP) framework. However, there are two significant drawbacks of using PCMs. First, humans evaluate PCMs in an inconsistent manner. Second, not all entries of a large PCM can be reliably filled by human decision makers. We address these two issues by first establishing a novel connection between PCMs and time-irreversible Markov processes. Specifically, we show that every PCM induces a family of dissipative maximum path entropy random walks (MERW) over the set of alternatives. We show that only ‘consistent’ PCMs correspond to detailed balanced MERWs. We identify the non-equilibrium entropy production in the induced MERWs as a metric of inconsistency of the underlying PCMs. Notably, the entropy production satisfies all of the recently laid out criteria for reasonable consistency indices. We also propose an approach to use incompletely filled PCMs in AHP. Potential future avenues are discussed as well.
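The core construction can be sketched numerically. The code below (our reading of the construction, using the standard Schnakenberg expression for the entropy production rate; not the paper's exact formulation) induces a walk from a positive pairwise comparison matrix via its Perron eigenpair and shows that a consistent PCM yields vanishing entropy production while an inconsistent one does not:

```python
import numpy as np

def merw_entropy_production(A):
    """Entropy production rate (Schnakenberg form) of the random walk
    induced by a positive pairwise comparison matrix A via its Perron
    eigenpair: P_ij = A_ij * psi_j / (lam * psi_i)."""
    lam_all, V = np.linalg.eig(A)
    k = np.argmax(lam_all.real)
    lam, psi = lam_all[k].real, np.abs(V[:, k].real)  # Perron eigenpair
    P = A * psi[None, :] / (lam * psi[:, None])       # row-stochastic walk
    w, U = np.linalg.eig(P.T)                         # stationary distribution
    pi = np.abs(U[:, np.argmax(w.real)].real)
    pi /= pi.sum()
    J = pi[:, None] * P                               # stationary probability fluxes
    off = ~np.eye(len(A), dtype=bool)
    return 0.5 * np.sum((J - J.T)[off] * np.log(J / J.T)[off])

# Consistent PCM (a_ik = a_ij * a_jk) vs an inconsistent reciprocal PCM.
consistent = np.array([[1.0, 2.0, 4.0], [0.5, 1.0, 2.0], [0.25, 0.5, 1.0]])
inconsistent = np.array([[1.0, 2.0, 0.5], [0.5, 1.0, 4.0], [2.0, 0.25, 1.0]])
sigma_c = merw_entropy_production(consistent)
sigma_i = merw_entropy_production(inconsistent)
```

For the consistent matrix the induced walk satisfies detailed balance and the entropy production is zero to machine precision; the inconsistent matrix violates the Kolmogorov cycle criterion and dissipates.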
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography
Jørgensen, J. S.; Sidky, E. Y.
2015-01-01
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620
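The Gaussian-sensing benchmark can be illustrated with a small self-contained recovery experiment. In this sketch the dimensions, seed, and the use of orthogonal matching pursuit as the sparse solver are our own choices (the study itself uses total-variation-regularized CT reconstruction); it takes Gaussian measurements of a sparse signal and recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 4                      # signal length, measurements, sparsity

# Gaussian sensing matrix: the near-optimal CS benchmark mentioned above.
A = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)
y = A @ x                                 # undersampled measurements (m < n)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then refit by least squares."""
    resid, S = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        resid = y - A[:, S] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[S] = coef
    return x_hat

x_hat = omp(A, y, k)
```

Sweeping the undersampling ratio m/n and sparsity ratio k/m over a grid of such experiments, and recording the empirical success rate in each cell, is precisely what a phase diagram tabulates.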
NASA Astrophysics Data System (ADS)
Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng
2013-04-01
This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. 
Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will be presented first. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large-ensemble Monte Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
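The role of the preconditioner in L-BFGS can be made concrete with the standard two-loop recursion, which applies the inverse-Hessian approximation built from the stored vector/gradient pairs to a vector; the initial diagonal (`h0_diag` below) is exactly the slot that the diagonal preconditioners discussed above fill. A minimal sketch (the function name and test problem are our own):

```python
import numpy as np

def lbfgs_two_loop(q, pairs, h0_diag):
    """Apply the L-BFGS inverse-Hessian approximation to a vector q.
    pairs: list of (s, y) update pairs from oldest to newest;
    h0_diag: diagonal of the initial inverse-Hessian guess, i.e. the
    preconditioner."""
    q = np.asarray(q, dtype=float).copy()
    alphas, rhos = [], []
    for s, y in reversed(pairs):          # first loop: newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)
        rhos.append(rho)
    r = h0_diag * q                        # apply the initial diagonal
    for (s, y), a, rho in zip(pairs, reversed(alphas), reversed(rhos)):
        beta = rho * (y @ r)               # second loop: oldest to newest
        r += (a - beta) * s
    return r

# Quadratic test: for A = diag(2, 5, 10) with pairs s_i = e_i, y_i = A e_i,
# the recursion reproduces A^{-1} q exactly, regardless of h0_diag.
A = np.diag([2.0, 5.0, 10.0])
pairs = [(np.eye(3)[i], A @ np.eye(3)[i]) for i in range(3)]
r = lbfgs_two_loop(np.array([2.0, 5.0, 10.0]), pairs, np.ones(3))
```

In a cycling 4D-Var context, improving `h0_diag` (e.g. with randomized variance estimates) is what improves the final inverse-Hessian, and hence analysis-error, approximation.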
Miscellaneous methods for measuring matric or water potential
Scanlon, Bridget R.; Andraski, Brian J.; Bilskie, Jim; Dane, Jacob H.; Topp, G. Clarke
2002-01-01
A variety of techniques to measure matric potential or water potential in the laboratory and in the field are described in this section. The techniques described herein require equilibration of some medium whose matric or water potential can be determined from previous calibration or can be measured directly. Under equilibrium conditions the matric or water potential of the medium is equal to that of the soil. The techniques can be divided into: (i) those that measure matric potential and (ii) those that measure water potential (sum of matric and osmotic potentials). Matric potential is determined when the sensor matrix is in direct contact with the soil, so salts are free to diffuse in or out of the sensor matrix, and the equilibrium measurement therefore reflects matric forces acting on the water. Water potential is determined when the sensor is separated from the soil by a vapor gap, so salts are not free to move in or out of the sensor, and the equilibrium measurement reflects the sum of the matric and osmotic forces acting on the water. Seven different techniques are described in this section. Those that measure matric potential include (i) heat dissipation sensors, (ii) electrical resistance sensors, (iii) frequency domain and time domain sensors, and (iv) electro-optical switches. A method that can be used to measure matric potential or water potential is the (v) filter paper method. Techniques that measure water potential include (vi) the Dew Point Potentiameter (Decagon Devices, Inc., Pullman, WA) (water activity meter) and (vii) vapor equilibration. The first four techniques are electronically based methods for measuring matric potential. Heat dissipation sensors and electrical resistance sensors infer matric potential from previously determined calibration relations between sensor heat dissipation or electrical resistance and matric potential.
Frequency-domain and time-domain matric potential sensors measure water content, which is related to the matric potential of the sensor through calibration. Electro-optical switches measure changes in light transmission through thin nylon filters as they absorb or desorb water in response to changes in matric potential. Heat dissipation sensors and electrical resistance sensors are used primarily in the field to provide information on matric potential. Frequency domain matric potential sensors are new and have not been widely used. Time domain matric potential sensors and electro-optical switches are new and have not been commercialized. For the fifth technique, filter paper is used as the standard matrix. The filter paper technique measures matric potential when the filter paper is in direct contact with soil, or water potential when separated from soil by a vapor gap. The Dew Point Potentiameter calculates water potential from the measured dew point and sample temperature. The vapor equilibration technique involves equilibration of soil samples with salt solutions of known osmotic potential. The filter paper, Dew Point Potentiameter, and vapor equilibration techniques are generally used in the laboratory to measure the water potential of disturbed field samples or to measure water potential for water retention functions.
Neutrino mass priors for cosmology from random matrices
NASA Astrophysics Data System (ADS)
Long, Andrew J.; Raveri, Marco; Hu, Wayne; Dodelson, Scott
2018-02-01
Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σmν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π(Σmν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix Mν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over Mν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σmν that we interpret as a Bayesian prior probability π(Σmν). Assuming a basis-invariant probability distribution on Mν, also known as the anarchy hypothesis, we find that π(Σmν) peaks close to the smallest Σmν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π(Σmν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for π(Σmν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
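The anarchy prediction can be explored with a short Monte Carlo sketch. All details here are illustrative simplifications of our own (the Gaussian measure, the overall mass scale, and the omission of conditioning on the measured mass splittings): draw complex symmetric Majorana mass matrices from a basis-invariant Gaussian ensemble and take the singular values as the physical masses.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sum_masses(n_samples, scale=0.05):
    """Draw 3x3 complex symmetric Majorana mass matrices from a
    basis-invariant Gaussian ('anarchy') ensemble; the masses are the
    singular values. The scale (in eV) is an illustrative choice."""
    sums = np.empty(n_samples)
    for t in range(n_samples):
        A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
        M = scale * (A + A.T) / 2          # complex symmetric, U M U^T invariant measure
        sums[t] = np.linalg.svd(M, compute_uv=False).sum()
    return sums

sums = sample_sum_masses(2000)             # empirical draw of the prior on sum(m_nu)
```

A histogram of `sums` gives the unconditioned shape of the prior; the paper's π(Σmν) additionally imposes the observed squared mass splittings.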
Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J
2017-01-01
Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D , observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄ . When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model.
Multichannel Compressive Sensing MRI Using Noiselet Encoding
Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin
2015-01-01
The incoherence between the measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
Random matrix theory filters in portfolio optimisation: A stability and risk assessment
NASA Astrophysics Data System (ADS)
Daly, J.; Crane, M.; Ruskin, H. J.
2008-07-01
Random matrix theory (RMT) filters, applied to covariance matrices of financial returns, have recently been shown to improve the optimisation of stock portfolios. This paper studies the effect of three RMT filters on the realised portfolio risk, and on the stability of the filtered covariance matrix, using bootstrap analysis and out-of-sample testing. We propose an extension to an existing RMT filter (based on Krzanowski stability), which is observed to reduce risk and increase stability when compared to the other RMT filters tested. We also study a scheme for filtering the covariance matrix directly, as opposed to the standard method of filtering correlation, where the latter is found to lower the realised risk, on average, by up to 6.7%. We consider both equally and exponentially weighted covariance matrices in our analysis, and observe that the overall best method out-of-sample was the exponentially weighted covariance with our Krzanowski stability-based filter applied to the correlation matrix. We also find that the optimal out-of-sample decay factors, for both filtered and unfiltered forecasts, were higher than those suggested by Riskmetrics [J.P. Morgan, Reuters, Riskmetrics technical document, Technical Report, 1996. http://www.riskmetrics.com/techdoc.html], with those for the latter approaching a value of α=1. In conclusion, RMT filtering reduced the realised risk, on average, and in the majority of cases when tested out-of-sample, but increased the realised risk on a marked number of individual days; in some cases it was more than doubled.
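The baseline idea behind such filters can be sketched with the common Marchenko-Pastur eigenvalue-clipping filter (the paper's Krzanowski-stability-based filter is a refinement of this idea; the dimensions and data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 500, 50                            # return observations, assets
returns = rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)    # sample correlation matrix

# Marchenko-Pastur upper edge: eigenvalues of a pure-noise correlation
# matrix fall below (1 + sqrt(N/T))^2 asymptotically.
lam_plus = (1.0 + np.sqrt(N / T)) ** 2

# Eigenvalue clipping: keep eigenvalues above the noise edge, replace the
# rest by their average so that the trace (= N) is preserved.
w, V = np.linalg.eigh(C)
noise = w < lam_plus
w_f = w.copy()
if noise.any():
    w_f[noise] = w[noise].mean()
C_filtered = (V * w_f) @ V.T              # reassemble the filtered matrix
```

The filtered matrix then replaces the raw correlation (or covariance) matrix in the Markowitz optimisation; in practice the unit diagonal of a correlation matrix is often restored afterwards.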
Very High Frequency EPR: Instrument and Applications
NASA Astrophysics Data System (ADS)
Wang, Wei
Most Electron Paramagnetic Resonance (EPR, also known as ESR or EMR) experiments are performed at the conventional frequencies of 9 GHz or 35 GHz. But there are numerous situations in which a large increase in the microwave frequency (and/or magnetic field) will result in a substantial increase in the information content of EPR spectra. This has motivated us to construct a very high frequency (VHF, 95 GHz) EPR spectrometer at the Illinois EPR Research Center. Many advantages of VHF EPR are demonstrated through examples in Chapter 1. The spectrometer and some unique aspects of the instrument are described and documented in Chapter 2. Chapter 3 reports use of the VHF EPR technique to study the structure/spectral relationship of a homologous series of thiophenes, which may be constituents of coal. Two successful methods to generate the cation radicals of these organic sulfur compounds are found. The g matrices (tensors) of the thiophenic radicals are obtained for the first time. The small differences between anisotropic components of the g matrices can be unambiguously resolved. Correlations of the experimentally measured g matrices with the molecular and electronic structures are reported. The g shifts correlate linearly with λ of their Hückel molecular orbitals; the largest g components are proportional to the π spin densities on sulfur. In addition, the small proton hyperfine interactions of dibenzothiophene (DBT) are observed for the first time by continuous-wave VHF EPR. A multifrequency approach, including auxiliary 2-4 GHz pulsed measurements, has shown that a single set of spin Hamiltonian parameters describes the spin system of DBT over a microwave frequency span of 3 to 95 GHz. These newly available, detailed, and accurate data provide a valuable opportunity to test, and perhaps to improve, the existing theoretical models for predicting g matrices of organic radicals. Finally, Chapter 4 reports trial calculations of g matrices by several molecular orbital methods.
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
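The construction is simple enough to reproduce in a few lines (the helper names are our own): map each letter to its Gray code, build the Hamming-distance matrix over all n-plets, and check that it is symmetric with equal row and column sums, hence doubly stochastic after normalization.

```python
from itertools import product

gray = {'C': '00', 'U': '10', 'G': '11', 'A': '01'}  # Gray-code representation

def hamming(u, v):
    """Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(u, v))

def hamming_matrix(n):
    """Hamming-distance matrix over all n-plets (n=1: the 4 bases,
    n=3: the 64 codons), with letters encoded by their Gray codes."""
    words = [''.join(w) for w in product('CUGA', repeat=n)]
    codes = [''.join(gray[x] for x in w) for w in words]
    return [[hamming(a, b) for b in codes] for a in codes]

H = hamming_matrix(1)
row_sums = [sum(row) for row in H]
col_sums = [sum(col) for col in zip(*H)]
```

Since the Gray codes of the four bases exhaust all 2-bit strings, every row and column of the n-plet matrix sums to the same constant (2n · 4^n / 2), so dividing by it yields a doubly stochastic, symmetric matrix as the abstract describes.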
Selective randomized load balancing and mesh networks with changing demands
NASA Astrophysics Data System (ADS)
Shepherd, F. B.; Winzer, P. J.
2006-05-01
We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.
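The cost appeal of RLB under hose demands can be sketched with a back-of-the-envelope capacity computation (a simplified full-mesh model with our own naming, ignoring the transport and switching cost modeling of the paper): phase 1 spreads each node's ingress traffic evenly over all n nodes, phase 2 delivers it, so the link between nodes i and j needs at most (r_i + r_j)/n.

```python
from itertools import combinations

def rlb_link_capacities(r):
    """Per-link capacity sufficient for Valiant's two-phase randomized
    load balancing to carry ANY hose demand matrix with per-node
    ingress/egress rates r[i], on a full mesh of n = len(r) nodes."""
    n = len(r)
    return {(i, j): (r[i] + r[j]) / n for i, j in combinations(range(n), 2)}

caps = rlb_link_capacities([1.0, 1.0, 1.0, 1.0])  # symmetric 4-node hose
total = sum(caps.values())  # equals (n - 1)/n * sum(r) in the symmetric case
```

The demand-oblivious guarantee is the point: the capacities above do not depend on which admissible demand matrix materializes, which is what makes the architecture robust to changing traffic.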
Shin, Yong Cheol; Lee, Jong Ho; Jin, Linhua; Kim, Min Jeong; Oh, Jin-Woo; Kim, Tai Wan; Han, Dong-Wook
2014-01-01
M13 bacteriophages can be readily fabricated into nanofibers because they are non-toxic bacterial viruses with a nanofiber-like shape. In the present study, we prepared hybrid nanofiber matrices composed of poly(lactic-co-glycolic acid) (PLGA) and M13 bacteriophages genetically modified to display the RGD peptide on their surface (RGD-M13 phage). The surface morphology and chemical composition of the hybrid nanofiber matrices were characterized by scanning electron microscopy (SEM) and Raman spectroscopy, respectively. Immunofluorescence staining was conducted to investigate the presence of M13 bacteriophages in the RGD-M13 phage/PLGA hybrid nanofibers. In addition, the attachment and proliferation of three different types of fibroblasts on RGD-M13 phage/PLGA nanofiber matrices were evaluated to explore how fibroblasts interact with these matrices. SEM images showed that the RGD-M13 phage/PLGA hybrid matrices had a non-woven porous structure, quite similar to that of natural extracellular matrices, with an average fiber diameter of about 190 nm. Immunofluorescence images and Raman spectra revealed that RGD-M13 phages were homogeneously distributed throughout the matrices. Moreover, the attachment and proliferation of fibroblasts cultured on RGD-M13 phage/PLGA matrices were significantly enhanced by the enriched RGD moieties on the hybrid matrices. These results suggest that RGD-M13 phage/PLGA matrices can be efficiently used as biomimetic scaffolds for tissue engineering applications.
Speciation of nanoscale objects by nanoparticle imprinted matrices
NASA Astrophysics Data System (ADS)
Hitrik, Maria; Pisman, Yamit; Wittstock, Gunther; Mandler, Daniel
2016-07-01
The toxicity of nanoparticles is not only a function of the constituting material but depends largely on their size, shape and stabilizing shell. Hence, the speciation of nanoscale objects, namely, their detection and separation based on the different species, similarly to heavy metals, is of utmost importance. Here we demonstrate the speciation of gold nanoparticles (AuNPs) and their electrochemical detection using the concept of ``nanoparticles imprinted matrices'' (NAIM). Negatively charged AuNPs are adsorbed as templates on a conducting surface previously modified with polyethylenimine (PEI). The selective matrix is formed by the adsorption of either oleic acid (OA) or poly(acrylic acid) (PAA) on the non-occupied areas. The AuNPs are removed by electrooxidation to form complementary voids. These voids are able to recognize the AuNPs selectively based on their size. Furthermore, the selectivity could be improved by adsorbing an additional layer of 1-hexadecylamine, which deepened the voids. Interestingly, silver nanoparticles (AgNPs) were also recognized if their size matched that of the template AuNPs. The steps in assembling the NAIMs and the reuptake of the nanoparticles were characterized carefully. The prospects for the analytical use of NAIMs, which are simple, of small dimension, cost-efficient and portable, are in the sensing and separation of nanoobjects.
ERIC Educational Resources Information Center
Flores-Mendoza, Carmen; Widaman, Keith F.; Rindermann, Heiner; Primi, Ricardo; Mansur-Alves, Marcela; Pena, Carla Couto
2013-01-01
Sex differences on the Attention Test (AC), the Raven's Standard Progressive Matrices (SPM), and the Brazilian Cognitive Battery (BPR5), were investigated using four large samples (total N=6780), residing in the states of Minas Gerais and Sao Paulo. The majority of samples used, which were obtained from educational settings, could be considered a…
Modeling the Impact of Alternative Immunization Strategies: Using Matrices as Memory Lanes
Alonso, Wladimir J.; Rabaa, Maia A.; Giglio, Ricardo; Miller, Mark A.; Schuck-Paim, Cynthia
2015-01-01
Existing modeling approaches are divided between a focus on the constitutive (micro) elements of systems or on higher (macro) organization levels. Micro-level models enable consideration of individual histories and interactions, but can be unstable and subject to cumulative errors. Macro-level models focus on average population properties, but may hide relevant heterogeneity at the micro-scale. We present a framework that integrates both approaches through the use of temporally structured matrices that can take large numbers of variables into account. Matrices are composed of several bidimensional (time×age) grids, each representing a state (e.g. physiological, immunological, socio-demographic). Time and age are primary indices linking grids. These matrices preserve the entire history of all population strata and enable the use of historical events, parameters and states dynamically in the modeling process. This framework is applicable across fields, but particularly suitable to simulate the impact of alternative immunization policies. We demonstrate the framework by examining alternative strategies to accelerate measles elimination in 15 developing countries. The model recaptured long-endorsed policies in measles control, showing that where a single routine measles-containing vaccine is employed with low coverage, any improvement in coverage is more effective than a second dose. It also identified an opportunity to save thousands of lives in India at attractively low costs through the implementation of supplementary immunization campaigns. The flexibility of the approach presented enables estimating the effectiveness of different immunization policies in highly complex contexts involving multiple and historical influences from different hierarchical levels. PMID:26509976
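The time×age grid structure described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation; the states, resolution, coverage value, and vaccination age are all assumed for the example.

```python
import numpy as np

# Hypothetical sketch of the "temporally structured matrix" idea: each
# state (e.g. susceptible, vaccinated) is a 2-D (time x age) grid, and
# time and age act as the primary indices linking the grids.
N_TIME, N_AGE = 52, 100   # illustrative resolution only

states = {name: np.zeros((N_TIME, N_AGE))
          for name in ("susceptible", "vaccinated", "infected")}

# Seed an initial cohort of susceptibles at t = 0.
states["susceptible"][0, :] = 1000.0

def step(states, t, coverage=0.8, vaccination_age=1):
    """Advance one time step: age every stratum along the diagonal, then
    move a fraction of susceptibles at the target age into 'vaccinated'.
    Rates here are illustrative placeholders, not fitted parameters."""
    for grid in states.values():
        # History is preserved: row t+1 is written, earlier rows stay intact.
        grid[t + 1, 1:] = grid[t, :-1]
    moved = coverage * states["susceptible"][t + 1, vaccination_age]
    states["susceptible"][t + 1, vaccination_age] -= moved
    states["vaccinated"][t + 1, vaccination_age] += moved

for t in range(N_TIME - 1):
    step(states, t)
```

Because earlier rows are never overwritten, the entire history of every stratum remains available to the model, which is the property the framework exploits.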
Espín, S; García-Fernández, A J; Herzke, D; Shore, R F; van Hattum, B; Martínez-López, E; Coeurdassier, M; Eulaers, I; Fritsch, C; Gómez-Ramírez, P; Jaspers, V L B; Krone, O; Duke, G; Helander, B; Mateo, R; Movalli, P; Sonne, C; van den Brink, N W
2016-05-01
Biomonitoring using birds of prey as sentinel species has been mooted as a way to evaluate the success of European Union directives that are designed to protect people and the environment across Europe from industrial contaminants and pesticides. No such pan-European evaluation currently exists. Coordinating such large-scale monitoring would require harmonisation across multiple countries of the types of samples collected and analysed; matrices vary in the ease with which they can be collected and in the information they provide. We report the first ever pan-European assessment of which raptor samples are collected across Europe and review their suitability for biomonitoring. Currently, some 182 monitoring programmes across 33 European countries collect a variety of raptor samples, and we discuss the relative merits of each for monitoring current priority and emerging compounds. Of the matrices collected, blood and liver are used most extensively for quantifying trends in recent and longer-term contaminant exposure, respectively. These matrices are potentially the most effective for pan-European biomonitoring but are not as widely and frequently collected as others. We found that failed eggs and feathers are the most widely collected samples. Because of this ubiquity, they may provide the best opportunities for wide-scale biomonitoring, although neither is suitable for all compounds. We advocate piloting pan-European monitoring of selected priority compounds using these matrices and developing read-across approaches to accommodate any effects that trophic pathway and species differences in accumulation may have on our ability to track environmental trends in contaminants.
Crosslinkable hydrogels derived from cartilage, meniscus, and tendon tissue.
Visser, Jetze; Levett, Peter A; te Moller, Nikae C R; Besems, Jeremy; Boere, Kristel W M; van Rijen, Mattie H P; de Grauw, Janny C; Dhert, Wouter J A; van Weeren, P René; Malda, Jos
2015-04-01
Decellularized tissues have proven to be versatile matrices for the engineering of tissues and organs. These matrices usually consist of collagens, matrix-specific proteins, and a set of largely undefined growth factors and signaling molecules. Although several decellularized tissues have found their way to clinical applications, their use in the engineering of cartilage tissue has only been explored to a limited extent. We set out to generate hydrogels from several tissue-derived matrices, as hydrogels are the current preferred cell carriers for cartilage repair. Equine cartilage, meniscus, and tendon tissue was harvested, decellularized, enzymatically digested, and functionalized with methacrylamide groups. After photo-cross-linking, these tissue digests were mechanically characterized. Next, gelatin methacrylamide (GelMA) hydrogel was functionalized with these methacrylated tissue digests. Equine chondrocytes and mesenchymal stromal cells (MSCs) (both from three donors) were encapsulated and cultured in vitro up to 6 weeks. Gene expression (COL1A1, COL2A1, ACAN, MMP-3, MMP-13, and MMP-14), cartilage-specific matrix formation, and hydrogel stiffness were analyzed after culture. The cartilage, meniscus, and tendon digests were successfully photo-cross-linked into hydrogels. The addition of the tissue-derived matrices to GelMA affected chondrogenic differentiation of MSCs, although no consequent improvement was demonstrated. For chondrocytes, the tissue-derived matrix gels performed worse compared to GelMA alone. This work demonstrates for the first time that native tissues can be processed into crosslinkable hydrogels for the engineering of tissues. Moreover, the differentiation of encapsulated cells can be influenced in these stable, decellularized matrix hydrogels.
Nonperturbative NN scattering in the ³S₁–³D₁ channels of EFT(π̸)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Ji-Feng, E-mail: jfyang@phy.ecnu.edu.cn
2013-12-15
The closed-form T matrices in the ³S₁–³D₁ channels of EFT(π̸) for NN scattering, with the potentials truncated at order O(Q⁴), are presented with the nonperturbative divergences parametrized in a general manner. The stringent constraints imposed by the closed form of the T matrices are exploited in the underlying-theory perspective and turned into virtues in the implementation of subtractions and the manifestation of power counting rules in nonperturbative regimes, leading us to the concept of the EFT scenario. A number of scenarios of the EFT description of NN scattering are compared with PSA data in terms of the effective range expansion and ³S₁ phase shifts, showing that it is favorable to proceed in a scenario with conventional EFT couplings and sophisticated renormalization in order to have large NN scattering lengths. The informative utilities of fine tuning are demonstrated in several examples and naturally interpreted in the underlying-theory perspective. In addition, some of the approaches adopted in the recent literature are also addressed in the light of the EFT scenario. Highlights: • Closed-form unitary T matrices for NN scattering are obtained in EFT(π̸). • Nonperturbative properties inherent in such closed-form T matrices are explored. • Nonperturbative renormalization is implemented through exploiting these properties. • Unconventional power counting of couplings is shown to be less favored by PSA data. • The ideas about nonperturbative renormalization here might have wider applications.
Velligan, Dawn I.; Roberts, David; Mintz, Jim; Maples, Natalie; Li, Xueying; Medellin, Elisa; Brown, Matt
2015-01-01
Introduction Among individuals with schizophrenia, those who have persistent and clinically significant negative symptoms (PNS) have the poorest functional outcomes and quality of life. The NIMH-MATRICS Consensus Statement indicated that these symptoms represent an unmet therapeutic need for large numbers of individuals with schizophrenia. No psychosocial treatment model addresses the entire constellation of PNS. Method 51 patients with PNS were randomized into one of two groups for a period of 9 months: 1) MOtiVation and Enhancement (MOVE) or 2) Treatment as usual. MOVE is a home based, manual-driven, multi-modal treatment that employs a number of cognitive and behavioral principles to address the broad range of factors contributing to PNS and their functional consequences. Components of MOVE include: Environmental supports to prompt initiation and persistence, in-vivo skills training to ameliorate deficits and encourage interaction, cognitive behavioral techniques to address self-defeating attitudes, in-vivo training in emotional processing to address affective blunting and problems in identifying emotions, and specific techniques to address the deficits in anticipatory pleasure. Patients were assessed at baseline and each 3 months with multiple measures of negative symptoms. Results Repeated measures analyses of variance for mixed models indicated significant Group by Time effects for the Negative Symptom Assessment (NSA; p<.02) and the Clinical Assessment Interview for Negative Symptoms (CAINS; p<.04). Group differences were not significant until 9 months of treatment and were not significant for the Brief Negative Symptom Scale (BNSS). Conclusion Further investigation of a comprehensive treatment for PNS, such as MOVE, is warranted. PMID:25937461
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
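The identification step described above can be sketched as a linear least-squares problem solved with the SVD. This is an illustrative reconstruction under assumed dimensions, not the paper's code: the sensitivity matrix S, the parameter vector, and the synthesized data are all made up for the example.

```python
import numpy as np

# Sketch: relate changes in physical parameters dp to changes in system
# outputs dy through a linear sensitivity matrix S (dy ≈ S @ dp), then
# solve the possibly rank-deficient least-squares problem via the SVD.
rng = np.random.default_rng(1)

n_outputs, n_params = 8, 3
S = rng.normal(size=(n_outputs, n_params))   # hypothetical sensitivity matrix
p_true = np.array([0.5, -1.2, 2.0])          # "true" parameter changes
dy = S @ p_true                              # pseudo-experimental data

# Truncated-SVD pseudoinverse: discard tiny singular values for stability.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
tol = 1e-10 * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)
dp = Vt.T @ (s_inv * (U.T @ dy))             # identified parameter update

print(np.allclose(dp, p_true))               # noiseless data: exact recovery
```

With real test data the residual would not vanish, and the truncation tolerance becomes the regularization knob.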
Colonization of bone matrices by cellular components
NASA Astrophysics Data System (ADS)
Shchelkunova, E. I.; Voropaeva, A. A.; Korel, A. V.; Mayer, D. A.; Podorognaya, V. T.; Kirilova, I. A.
2017-09-01
Practical surgery, traumatology, orthopedics, and oncology require bioengineered constructs suitable for the replacement of large-area bone defects. Only a rigid/elastic matrix containing the recipient's bone cells, capable of mitosis, differentiation, and synthesis of extracellular matrix that supports cell viability, can comply with these requirements. Therefore, the development of techniques to produce structural and functional substitutes, whose three-dimensional structure corresponds to the recipient's damaged tissues, is the main objective of tissue engineering. This is achieved by developing tissue-engineered constructs, represented by cells placed on matrices. Low effectiveness of carrier-matrix colonization with cells and their uneven distribution are among the major problems in culturing cells on various matrices. In vitro studies of the interactions between cells and materials, as well as the development of new techniques for scaffold colonization by cellular components, are required to solve these problems.
Conditioning of the Stable, Discrete-time Lyapunov Operator
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
The Schatten p-norm condition number of the discrete-time Lyapunov operator L_A, defined on matrices P ∈ R^(n×n) by L_A P ≡ P − A P A^T, is studied for stable matrices A ∈ R^(n×n). Bounds are obtained for the norm of L_A and its inverse that depend on the spectrum, singular values, and radius of stability of A. Since the solution P of the discrete-time algebraic Lyapunov equation (DALE) L_A P = Q can be ill-conditioned only when either L_A or Q is ill-conditioned, these bounds are useful in determining whether P admits a low-rank approximation, which is important in the numerical solution of the DALE for large n.
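For small n the operator L_A can be written down explicitly via the Kronecker product and the DALE solved directly; a minimal sketch (our illustration, with an arbitrarily chosen stable A):

```python
import numpy as np

# Sketch: solve the DALE  L_A P = P - A P A^T = Q  by vectorization,
# since vec(P - A P A^T) = (I - kron(A, A)) vec(P) with row-major vec.
# Only feasible for tiny n; the conditioning of the operator matrix
# I - kron(A, A) is exactly what the bounds above concern.
rng = np.random.default_rng(0)
n = 4
B = rng.normal(size=(n, n))
A = 0.5 * B / np.max(np.abs(np.linalg.eigvals(B)))   # spectral radius 0.5

Q = np.eye(n)
L = np.eye(n * n) - np.kron(A, A)
P = np.linalg.solve(L, Q.reshape(-1)).reshape(n, n)

residual = np.linalg.norm(P - A @ P @ A.T - Q)
print(residual)        # near machine precision: P solves the DALE
print(np.linalg.cond(L))   # conditioning of the Lyapunov operator
```

For large n one would never form kron(A, A); the point of the paper's bounds is to reason about this conditioning without doing so.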
NASA Astrophysics Data System (ADS)
Galiatsatos, P. G.; Tennyson, J.
2012-11-01
The most time-consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner-region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared-memory machines (SMM), distributed-memory machines (DMM), the OpenMP directive-based parallel language, the MPI function-based parallel language, the sparse-matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse-matrix format, and finally a parallel sparse matrix-vector product (PSMV). The efficient application of these techniques relies on two important facts: the sparsity of the matrix is large (more than 98% of its entries are zero), and only a small part of the matrix spectrum is needed to obtain converged results.
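The two facts exploited above can be illustrated with SciPy's ARPACK wrapper: extract only a few extremal eigenvalues of a very sparse symmetric matrix instead of performing a full dense diagonalization. The matrix below is a generic sketch, not the R-matrix Hamiltonian.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Build a ~99%-sparse real symmetric matrix and ask ARPACK for only the
# six lowest eigenvalues (which="SA": smallest algebraic).
n = 500
H = sp.random(n, n, density=0.01, format="csr", random_state=42)
H = (H + H.T) * 0.5 + sp.diags(np.arange(n, dtype=float))  # symmetrize

vals = eigsh(H, k=6, which="SA", return_eigenvectors=False)
print(np.sort(vals))
```

The cost scales with the number of nonzeros times the number of requested eigenpairs, which is why sparsity plus a small target spectrum makes the iterative approach so much cheaper than dense LAPACK routines.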
NASA Astrophysics Data System (ADS)
Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico
2018-04-01
Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs parallel computation capabilities to speedup the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signals recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
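The memory savings above rest on a standard property of circulant matrices: they are diagonalized by the DFT, so a matrix-vector product needs only the first column and an FFT. A minimal CPU sketch of that property (our illustration, not the paper's GPU implementation):

```python
import numpy as np

# A circulant matrix C is fully defined by its first column c, and
# C @ x is the circular convolution of c with x, computable in
# O(n log n) via the FFT -- no need to store the n x n matrix.
def circulant_matvec(c, x):
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(3)
n = 512
c = rng.normal(size=n)          # first column defines the whole matrix
x = rng.normal(size=n)

# Check against the explicit dense circulant matrix (column k = roll(c, k)).
C = np.array([np.roll(c, k) for k in range(n)]).T
print(np.allclose(circulant_matvec(c, x), C @ x))   # True
```

The same factorization gives fast solves and adjoint products, which is what the parallel recovery algorithms exploit.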
Fung, Eliza N; Bryan, Peter; Kozhich, Alexander
2016-04-01
LC-MS/MS has been investigated as a means to quantify protein therapeutics in biological matrices. The protein therapeutic is digested by an enzyme to generate surrogate peptide(s) before LC-MS/MS analysis. One challenge is isolating the protein therapeutic in the presence of the large number of endogenous proteins in biological matrices. Immunocapture, in which a capture agent preferentially binds the protein therapeutic over other proteins, is gaining traction. The protein therapeutic is then eluted for digestion and LC-MS/MS analysis. One area of tremendous potential for immunocapture-LC-MS/MS is obtaining quantitative data where ligand-binding assays alone are not sufficient, for example, the quantitation of antidrug-antibody complexes. Herein, we present an overview of recent advances in enzyme digestion and immunocapture applicable to protein quantitation.
Limitations of the Porter-Thomas distribution
NASA Astrophysics Data System (ADS)
Weidenmüller, Hans A.
2017-12-01
Data on the distribution of reduced partial neutron widths and on the distribution of total gamma decay widths disagree with the Porter-Thomas distribution (PTD) for reduced partial widths or with predictions of the statistical model. We recall why the disagreement is important: The PTD is a direct consequence of the orthogonal invariance of the Gaussian Orthogonal Ensemble (GOE) of random matrices. The disagreement is reviewed. Two possible causes for violation of orthogonal invariance of the GOE are discussed, and their consequences explored. The disagreement of the distribution of total gamma decay widths with theoretical predictions cannot be blamed on the statistical model.
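The link between the PTD and orthogonal invariance of the GOE can be illustrated numerically. This is our sketch, not from the paper: the Porter-Thomas distribution is chi-squared with one degree of freedom, arising because GOE eigenvectors are uniformly distributed on the sphere, so their squared components behave as normalized squares of Gaussians.

```python
import numpy as np

# Sample a GOE matrix, diagonalize it, and treat N * (first component)^2
# of each eigenvector as a "reduced width"; under Porter-Thomas these
# follow a chi-squared(1) law with mean 1 and variance 2.
rng = np.random.default_rng(4)
N = 400
M = rng.normal(size=(N, N))
H = (M + M.T) / np.sqrt(2)            # a GOE sample
_, vecs = np.linalg.eigh(H)

w = N * vecs[0, :] ** 2               # normalized reduced widths
print(w.mean(), w.var())              # close to 1 and 2, respectively
```

A deviation of measured width distributions from this chi-squared(1) shape is therefore direct evidence for a violation of orthogonal invariance, which is the point at issue above.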
NASA Astrophysics Data System (ADS)
Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio
2013-05-01
Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
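The unbiased-ensemble setup can be sampled directly; a minimal sketch (our illustration, with assumed dimensions N and M):

```python
import numpy as np

# Draw |psi> uniformly in an (N*M)-dimensional Hilbert space, reshape it
# into an N x M matrix of amplitudes, and obtain the spectrum of the
# reduced density matrix rho_A = Tr_B |psi><psi| = psi psi^dagger.
rng = np.random.default_rng(5)
N, M = 8, 32
psi = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
psi /= np.linalg.norm(psi)                  # normalize the pure state

rho_A = psi @ psi.conj().T                  # partial trace over subsystem B
lam = np.linalg.eigvalsh(rho_A)             # entanglement spectrum

print(lam.sum())                            # eigenvalues sum to 1
purity = np.sum(lam ** 2)                   # between 1/N and 1
```

Averaging such spectra over many draws reproduces the typical eigenvalue density that the Coulomb-gas saddle point computes analytically.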
Markov Chain Analysis of Musical Dice Games
NASA Astrophysics Data System (ADS)
Volchenkov, D.; Dawin, J. R.
2012-07-01
A system for using dice to compose music randomly is known as a musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in musical dice games based on compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and to characterize a composer.
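The first-passage-time statistic used above has a simple closed form; a sketch with a toy 3-state chain standing in for a note-transition matrix (the numbers are made up):

```python
import numpy as np

# For a Markov chain with transition matrix T, the mean first passage
# times m_i to a target state satisfy m = 1 + Q m, where Q is T
# restricted to the non-target states; hence m = (I - Q)^{-1} 1.
T = np.array([[0.10, 0.60, 0.30],
              [0.40, 0.40, 0.20],
              [0.50, 0.25, 0.25]])
target = 2

idx = [i for i in range(T.shape[0]) if i != target]
Q = T[np.ix_(idx, idx)]                 # transitions among non-target states
m = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
mfpt = dict(zip(idx, m))                # mean first passage times to 'target'
print(mfpt)
```

Applied to a note-transition matrix, short passage times to the tonic are what allows tonality to be read off the chain.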
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
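The simplest accelerator/preconditioner pair named above, conjugate gradient with a point-Jacobi preconditioner, can be sketched compactly. This is illustrative Python, not the Fortran interfaces of NSPCG, and the test matrix is an arbitrary SPD example.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxiter=200):
    """Preconditioned CG with a diagonal (point-Jacobi) preconditioner:
    applying M^{-1} is just elementwise scaling by 1/diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(6)
n = 50
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)            # symmetric positive definite
b = rng.normal(size=n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b) < 1e-8)   # True
```

Note that CG itself only needs matrix-vector products, which is why NSPCG can accept user-supplied routines in place of a stored matrix.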
Assessment of composite motif discovery methods.
Klepper, Kjetil; Sandve, Geir K; Abul, Osman; Johansen, Jostein; Drablos, Finn
2008-02-26
Computational discovery of regulatory elements is an important area of bioinformatics research, and more than a hundred motif discovery methods have been published. Traditionally, most of these methods have addressed the problem of single motif discovery: discovering binding motifs for individual transcription factors. In higher organisms, however, transcription factors usually act in combination with nearby bound factors to induce specific regulatory behaviours. Hence, recent focus has shifted from single motifs to the discovery of sets of motifs bound by multiple cooperating transcription factors, so-called composite motifs or cis-regulatory modules. Given the large number and diversity of methods available, independent assessment of methods becomes important. Although there have been several benchmark studies of single motif discovery, no similar studies have previously been conducted concerning composite motif discovery. We have developed a benchmarking framework for composite motif discovery and used it to evaluate the performance of eight published module discovery tools. Benchmark datasets were constructed based on real genomic sequences containing experimentally verified regulatory modules, and the module discovery programs were asked both to predict the locations of these modules and to specify the single motifs involved. To aid the programs in their search, we provided position weight matrices corresponding to the binding motifs of the transcription factors involved. In addition, selections of decoy matrices were mixed with the genuine matrices on one dataset to test the response of the programs to varying levels of noise. Although some of the methods tested tended to score somewhat better than others overall, there were still large variations between individual datasets, and no single method performed consistently better than the rest in all situations. The variation in performance on individual datasets also shows that the new benchmark datasets represent a suitable variety of challenges for most module discovery methods.
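How the provided position weight matrices are typically used can be sketched as a log-likelihood-ratio scan. The 4×4 matrix below is a made-up example with consensus ACGT, not a real transcription-factor motif.

```python
import numpy as np

# Score every window of a DNA sequence against a PWM expressed as
# log2(p_base / background); the best-scoring offset is the predicted
# binding site. Matrix rows: A, C, G, T; columns: motif positions.
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
pwm = np.log2(np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
]) / 0.25)                         # ratio to a uniform background

def best_hit(seq, pwm):
    """Return (best score, offset) over all windows of the motif length."""
    w = pwm.shape[1]
    scores = [sum(pwm[BASES[seq[i + j]], j] for j in range(w))
              for i in range(len(seq) - w + 1)]
    best = int(np.argmax(scores))
    return scores[best], best

score, pos = best_hit("TTACGTTT", pwm)
print(pos)   # the consensus ACGT starts at offset 2
```

Composite motif discovery combines several such scans with constraints on the spacing and order of the individual hits.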
Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1988-01-01
A considerable volume of large computational codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software, primarily because of architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for its conversion to the Cray X-MP vector supercomputer is presented.
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
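A simple algebraic criterion of the kind referred to above can be illustrated with an aggregated comparison matrix. This is a generic sketch, not the paper's construction: the diagonal entries stand for subsystem stability margins, the off-diagonal entries for interconnection gains, and the numbers are hypothetical.

```python
import numpy as np

# Vector-Lyapunov-style aggregation: collect subsystem decay rates
# (negative diagonal) and coupling strengths (nonnegative off-diagonal)
# into a comparison matrix W, then test whether W is Hurwitz, i.e. all
# eigenvalues have negative real parts.
W = np.array([[-2.0,  0.5,  0.3],
              [ 0.4, -3.0,  0.6],
              [ 0.2,  0.1, -1.5]])

hurwitz = np.all(np.linalg.eigvals(W).real < 0)
print(hurwitz)   # True here: the couplings are too weak to destabilize
```

Because the test depends only on the aggregated gains, it remains valid under arbitrary structural perturbations that do not increase those gains, which is the idea behind connective stability.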
Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws
NASA Astrophysics Data System (ADS)
Barré, J.; Bernardin, C.; Chetrite, R.
2018-02-01
We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e., an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a nontrivial structure. We also derive a lower bound on the large deviation function, valid for more general weak solutions, and leave the general upper bound on the large deviation function as a conjecture.
Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias
2008-12-01
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
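The Jacobian-based viewpoint above can be illustrated by estimating the largest Lyapunov exponent of a random recurrent network from products of Jacobians. This is our toy discrete-time model, not the paper's network or learning rule; the gain g and sizes are assumed.

```python
import numpy as np

# Toy network x <- tanh(W x); its Jacobian at each step is
# J_t = diag(1 - x_{t+1}^2) W.  Propagating and renormalizing a tangent
# vector through the J_t yields the largest Lyapunov exponent.
rng = np.random.default_rng(7)
n, g = 100, 2.0                        # g > 1 favors the chaotic regime
W = g * rng.normal(size=(n, n)) / np.sqrt(n)

x = rng.normal(size=n)
v = rng.normal(size=n)
v /= np.linalg.norm(v)

lyap, T = 0.0, 500
for _ in range(200):                   # discard the transient
    x = np.tanh(W @ x)
for _ in range(T):
    x = np.tanh(W @ x)
    v = (1.0 - x ** 2) * (W @ v)       # tangent dynamics: v <- J_t v
    norm = np.linalg.norm(v)
    lyap += np.log(norm)
    v /= norm
lyap /= T
print(lyap)   # positive indicates chaos; values near 0 mark the regime above
```

In the paper's setting, Hebbian learning changes W over time and thereby drives this exponent toward zero, the regime of maximal sensitivity to learned patterns.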
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct the autocorrelation matrix of a time series and analyze it using the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information that is not easily accessible by direct analysis of the autocorrelation function. To draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated against a suitable criterion; in the present study, the criterion is the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity of stock-market returns, remarkable agreement with the fGn is achieved.
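The construction can be sketched in a few lines: estimate the sample autocorrelation function, arrange it into a Toeplitz autocorrelation matrix, and study the eigenvalue spectrum of that matrix. White noise is used below as a stand-in series (the paper benchmarks against fGn, not white noise).

```python
import numpy as np

# Build the autocorrelation (Toeplitz) matrix of a standardized series
# and examine its eigenvalue spectrum, RMT-style.
rng = np.random.default_rng(8)
x = rng.normal(size=5000)
x = (x - x.mean()) / x.std()

n_lags = 50
acf = np.array([np.mean(x[: len(x) - k] * x[k:]) for k in range(n_lags)])

# Toeplitz autocorrelation matrix: entry (i, j) = acf(|i - j|).
i, j = np.indices((n_lags, n_lags))
C = acf[np.abs(i - j)]
eigs = np.linalg.eigvalsh(C)

# For white noise the matrix is close to the identity: spectrum near 1.
print(eigs.min(), eigs.max())
```

Deviations of the empirical spectrum from that of the reference process (fGn in the paper) are what carry the extra information not visible in the raw autocorrelation function.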
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply an accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering-mechanism matrices accounting for odd-bounce, double-bounce, and volume scattering. We estimate the topographic phase from the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi, rather than the purely random volume proposed by Freeman and Durden, is applied to the PolInSAR target decomposition in forest areas to better fit the measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.
Application of H-matrices method to the calculation of the stress field in a viscoelastic medium
NASA Astrophysics Data System (ADS)
Ohtani, M.; Hirahara, K.
2017-12-01
In SW Japan, the Philippine Sea plate subducts from the south, and large earthquakes of around magnitude (M) 8, the Nankai/Tonankai earthquakes, repeatedly occur at the plate boundary along the Nankai Trough. Near the rupture area of these earthquakes, active volcanoes such as Sakurajima line up in the Kyushu region of SW Japan. Volcanoes such as Mt. Fuji are also distributed in the Tokai-Kanto region of SE Japan. The 1707 eruption of Mt. Fuji, called the Hoei eruption, occurred 49 days after one of the series of Nankai/Tonankai earthquakes, the 1707 Hoei earthquake (M8.4). This suggests that the stress field due to an earthquake sometimes helps volcanoes to erupt. When we consider the stress change due to an earthquake, the effect of viscoelastic deformation of the crust is important. FEM is commonly used for modeling such inelastic effects, but it requires a high computational cost of O(N^3), where N is the number of discretized cells of the inelastic medium. Recently, a new method based on BIEM was proposed by Barbot and Fialko (2010). In their method, the calculation of the stress field due to the inelastic strain is replaced by solving the inhomogeneous Navier equation with equivalent body forces representing the inelastic strain. Then, using the stress-strain Green's function in an elastic medium, we can take the inelastic effect into account. In this study, we employ their method to evaluate the stress change at the active volcanoes around the Nankai/Tonankai earthquakes. Their method requires a computational cost and memory storage of O(N^2). We reduce the computation and memory by applying the H-matrices fast computation method, in which a dense matrix is divided into a hierarchical structure of submatrices and each submatrix is approximated as low rank.
When we divide the viscoelastic medium into N = 8,640 or 69,120 uniform cuboid cells and apply the H-matrices method, the required storage for the stress-strain Green's function matrices is reduced to 0.17 or 0.05 times that of the uncompressed original matrices, with sufficient accuracy. Using this method, we show the time development of the stress change at the volcanoes around the Nankai/Tonankai earthquakes, assuming a simple viscous structure.
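The compression step underlying H-matrices can be sketched as follows; the 1/r kernel, the cell layout, and the tolerance are illustrative assumptions, not the study's actual Green's function:

```python
import numpy as np

# Distant-interaction blocks of a smooth kernel are numerically low rank,
# which is what H-matrix compression exploits block by block.
src = np.linspace(0.0, 1.0, 200)       # source cell centers
obs = np.linspace(5.0, 6.0, 200)       # well-separated observation cells
block = 1.0 / np.abs(obs[:, None] - src[None, :])

U, s, Vt = np.linalg.svd(block, full_matrices=False)
rank = int(np.sum(s > 1e-10 * s[0]))   # numerical rank at relative tol 1e-10
Uk, sk, Vtk = U[:, :rank], s[:rank], Vt[:rank]
approx = (Uk * sk) @ Vtk               # low-rank factorization of the block

rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
# Storage for the factors relative to the dense block:
storage_ratio = rank * (block.shape[0] + block.shape[1]) / block.size
```

The same idea, applied to the admissible submatrices of the hierarchical partition, yields the memory reductions quoted above.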
Random walks with long-range steps generated by functions of Laplacian matrices
NASA Astrophysics Data System (ADS)
Riascos, A. P.; Michelitsch, T. M.; Collet, B. A.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2018-04-01
In this paper, we explore different Markovian random walk strategies on networks with transition probabilities between nodes defined in terms of functions of the Laplacian matrix. We generalize random walk strategies with local information in the Laplacian matrix, which describes the connections of a network, to a dynamics determined by functions of this matrix. The resulting processes are nonlocal, allowing transitions of the random walker from one node to nodes beyond its nearest neighbors. We find that only two types of Laplacian functions are admissible, with distinct behaviors for long-range steps in the infinite-network limit: type (i) functions generate Brownian motions, type (ii) functions Lévy flights. For this asymptotic long-range step behavior only the lowest nonvanishing order of the Laplacian function is relevant, namely first order for type (i) functions and fractional order for type (ii) functions. In the first part, we discuss spectral properties of the Laplacian matrix and a series of relations that are preserved by a particular class of functions, which allow one to define random walks on any undirected connected network. Having described these general properties, we explore the characteristics of random walk strategies that emerge in particular cases with functions defined in terms of exponentials, logarithms, and powers of the Laplacian, as well as the relations of these dynamics to nonlocal strategies such as Lévy flights and fractional transport. Finally, we analyze the global capacity of these random walk strategies to explore networks such as lattices and trees and different types of random and complex networks.
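A minimal sketch of such a nonlocal walk, assuming the transition rule p_ij = -g(L)_ij / g(L)_ii (i ≠ j) with g(L) = L^γ on a small ring graph (both the rule's normalization and the graph are illustrative choices):

```python
import numpy as np

n, gamma = 20, 0.5
A = np.zeros((n, n))
for i in range(n):                      # cycle graph C_n
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

w, V = np.linalg.eigh(L)
w = np.clip(w, 0.0, None)               # guard tiny negative round-off
Lg = (V * w**gamma) @ V.T               # matrix function g(L) = L^gamma

# For 0 < gamma < 1 the off-diagonal entries of L^gamma are nonpositive,
# so the following defines a proper stochastic transition matrix:
P = -Lg / np.diag(Lg)[:, None]
np.fill_diagonal(P, 0.0)
```

Because L annihilates the constant vector, every row of P sums to one, and the walker can hop to nodes well beyond its nearest neighbors, the nonlocality discussed above.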
Finite plateau in spectral gap of polychromatic constrained random networks
NASA Astrophysics Data System (ADS)
Avetisov, V.; Gorsky, A.; Nechaev, S.; Valba, O.
2017-12-01
We consider critical behavior in the ensemble of polychromatic Erdős-Rényi networks and regular random graphs, where network vertices are painted in different colors. Links can be randomly removed from and added to the network subject to conservation of the vertex degrees. On these constrained graphs we run a Metropolis procedure that favors connected unicolor triads of nodes. Changing the chemical potential, μ, of such triads, we find, over a wide region of μ, the formation of a finite plateau in the number of intercolor links, which exactly matches a finite plateau in the network algebraic connectivity (the value of the first nonvanishing eigenvalue of the Laplacian matrix, λ2). We claim that at the plateau the spontaneously broken Z2 symmetry is restored by a mechanism of mode collectivization in clusters of different colors. The phenomenon of finite-plateau formation also holds for polychromatic networks with M ≥ 2 colors. The behavior of polychromatic networks is analyzed via the spectral properties of their adjacency and Laplacian matrices.
Renormalized Energy Concentration in Random Matrices
NASA Astrophysics Data System (ADS)
Borodin, Alexei; Serfaty, Sylvia
2013-05-01
We define a "renormalized energy" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of Sandier and Serfaty (From the Ginzburg-Landau model to vortex lattice problems, 2012; 1D log-gases and the renormalized energy, 2013). Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix β-sine processes on the real line (β = 1,2,4), and for the Ginibre point process and the process of zeros of Gaussian analytic functions in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the β = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.
Analysis of cross-correlations between financial markets after the 2008 crisis
NASA Astrophysics Data System (ADS)
Sensoy, A.; Yuksel, S.; Erturk, M.
2013-10-01
We analyze the cross-correlation matrix C of the index returns of the main financial markets after the 2008 crisis using methods of random matrix theory. We test the eigenvalues of C for the universal properties of random matrices and find that the majority of the cross-correlation coefficients arise from randomness. We show that the eigenvector of the largest deviating eigenvalue of C represents the global market itself. We reveal that periods of high volatility in financial markets coincide with high correlations between them, which lowers the risk-diversification potential even of a widely internationally diversified portfolio of stocks. We identify and compare the connection and cluster structure of markets before and after the crisis using minimal spanning and ultrametric hierarchical trees, and find that after the crisis the degree of co-movement of the markets increases. We also highlight the key financial markets of the pre- and post-crisis periods using the main centrality measures and analyze the changes. We repeat the study using rank correlation and compare the differences. Further implications are discussed.
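A toy version of the RMT test, with i.i.d. surrogate returns standing in for real index data, compares the empirical correlation eigenvalues with the Marchenko-Pastur support expected for purely random correlations:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 30, 1500                          # markets and trading days (illustrative)
returns = rng.standard_normal((T, N))    # i.i.d. surrogate return series
C = np.corrcoef(returns, rowvar=False)   # empirical correlation matrix
eigvals = np.linalg.eigvalsh(C)

q = N / T
lam_minus = (1 - np.sqrt(q)) ** 2        # Marchenko-Pastur support edges
lam_plus = (1 + np.sqrt(q)) ** 2
frac_inside = np.mean((eigvals >= lam_minus) & (eigvals <= lam_plus))
```

For real market data, eigenvalues far above lam_plus signal genuine collective modes such as the global-market eigenvector discussed above.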
Nonlocal torque operators in ab initio theory of the Gilbert damping in random ferromagnetic alloys
NASA Astrophysics Data System (ADS)
Turek, I.; Kudrnovský, J.; Drchal, V.
2015-12-01
We present an ab initio theory of the Gilbert damping in substitutionally disordered ferromagnetic alloys. The theory rests on newly introduced nonlocal torques, which replace the traditional local torque operators in the well-known torque-correlation formula and which can be formulated within the atomic-sphere approximation. The formalism is sketched in a simple tight-binding model and worked out in detail in the relativistic tight-binding linear muffin-tin orbital method and the coherent potential approximation (CPA). The resulting nonlocal torques are represented by nonrandom, non-site-diagonal, and spin-independent matrices, which simplifies the configuration averaging. The CPA vertex corrections play a crucial role for the internal consistency of the theory and for its exact equivalence to other first-principles approaches based on random local torques. This equivalence is also illustrated by the calculated Gilbert damping parameters for binary NiFe and FeCo random alloys, for pure iron with a model atomic-level disorder, and for stoichiometric FePt alloys with a varying degree of L10 atomic long-range order.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. 
A large amount of generated output is available over the web.
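The evaluation loop described above can be sketched as follows; the two-class Gaussian model, the tiny k-means implementation, and all parameters are illustrative assumptions rather than the toolbox's actual code:

```python
import numpy as np
from itertools import permutations

# Each class is its mean plus independent noise; points are clustered and
# the clustering error counts points assigned contrary to the generating
# process, scored under the best cluster-label permutation.
rng = np.random.default_rng(2)
means = np.array([[0.0, 0.0], [4.0, 4.0]])
labels_true = np.repeat([0, 1], 50)
X = means[labels_true] + rng.normal(scale=0.5, size=(100, 2))

def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[assign == j].mean(0) if np.any(assign == j)
                            else centers[j] for j in range(k)])
    return assign

labels_hat = kmeans(X, 2)
# Cluster labels are arbitrary, so score under the best label permutation:
errors = min(int(np.sum(labels_hat != np.asarray(perm)[labels_true]))
             for perm in permutations(range(2)))
```

Averaging this error over many generated data sets and seeds is what produces the error tables and graphs mentioned in the abstract.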
NASA Astrophysics Data System (ADS)
Bonzom, Valentin
2016-07-01
We review an approach, called random tensor models, which aims at studying discrete (pseudo-)manifolds in dimension d ≥ 2. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles, which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is not always true anymore when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes. 
While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum field theory.
NASA Astrophysics Data System (ADS)
Li, Dafa
2018-06-01
We construct ℓ-spin-flipping matrices from the coefficient matrices of pure states of n qubits and show that the ℓ-spin-flipping matrices are congruent and unitarily congruent whenever two pure states of n qubits are SLOCC and LU equivalent, respectively. The congruence implies the invariance of the ranks of the ℓ-spin-flipping matrices under SLOCC and thus permits a reduction of the SLOCC classification of n qubits to the calculation of ranks of the ℓ-spin-flipping matrices. The unitary congruence implies the invariance of the singular values of the ℓ-spin-flipping matrices under LU and thus permits a reduction of the LU classification of n qubits to the calculation of singular values of the ℓ-spin-flipping matrices. Furthermore, we show that the invariance of the singular values of the ℓ-spin-flipping matrices Ω1^{(n)} implies the invariance of the concurrence for even n qubits and the invariance of the n-tangle for odd n qubits. Thus, the concurrence and the n-tangle can be used for LU classification, and computing them requires only additions and multiplications of the state coefficients.
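For n = 2 the connection to the concurrence can be illustrated with the standard spin-flip formula C(ψ) = |⟨ψ| σ_y ⊗ σ_y |ψ*⟩|, a textbook special case rather than the paper's general Ω matrices:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
flip = np.kron(sy, sy)                    # two-qubit spin-flip operator

def concurrence(psi):
    # C(psi) = |<psi| (sigma_y (x) sigma_y) |psi*>| for a normalized state.
    return abs(psi.conj() @ flip @ psi.conj())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # maximally entangled
product = np.kron([1.0, 0.0], [1.0, 0.0])           # separable |00>

# Local-unitary invariance: a Hadamard on the first qubit leaves C unchanged.
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
rotated = np.kron(H2, np.eye(2)) @ bell
```

The Bell state gives C = 1, the product state C = 0, and the locally rotated Bell state again C = 1, consistent with the LU invariance discussed above.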
Matrix computations in MACSYMA
NASA Technical Reports Server (NTRS)
Wang, P. S.
1977-01-01
Facilities built into MACSYMA for manipulating matrices with numeric or symbolic entries are described. Computations are done exactly, keeping symbols as symbols. Topics discussed include how to form a matrix and create other matrices by transforming existing matrices within MACSYMA; arithmetic and other computation with matrices; and user control of computational processes through the use of optional variables. Two algorithms designed for sparse matrices are given, and the computing times of several different ways to compute the determinant of a matrix are compared.
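MACSYMA itself is not widely available today; the same exact symbolic workflow can be sketched in SymPy (an assumed modern stand-in, not the paper's system):

```python
import sympy as sp

# Exact symbolic matrix computation: symbols stay symbols, arithmetic is exact.
a, b = sp.symbols("a b")
M = sp.Matrix([[a, b],
               [b, a]])

det = sp.factor(M.det())    # exact symbolic determinant, factored: (a - b)(a + b)
Minv = M.inv()              # symbolic inverse; entries remain exact rationals in a, b
```

As in MACSYMA, the determinant and inverse are computed without any numerical approximation.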
Complex symmetric matrices with strongly stable iterates
NASA Technical Reports Server (NTRS)
Tadmor, E.
1985-01-01
Complex-valued symmetric matrices are studied. A simple expression for the spectral norm of such matrices is obtained, by utilizing a unitarily congruent invariant form. A sharp criterion is provided for identifying those symmetric matrices whose spectral norm is not exceeding one: such strongly stable matrices are usually sought in connection with convergent difference approximations to partial differential equations. As an example, the derived criterion is applied to conclude the strong stability of a Lax-Wendroff scheme.
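A small numerical illustration of the spectral-norm criterion (the particular complex symmetric matrix is an arbitrary choice):

```python
import numpy as np

# Spectral norm = largest singular value; strong stability means ||S||_2 <= 1,
# so all iterates S^n stay uniformly bounded.
S = np.array([[0.5, 0.3j],
              [0.3j, 0.5]])             # complex symmetric: S == S.T

spec_norm = np.linalg.norm(S, 2)        # largest singular value
iterate_norms = [np.linalg.norm(np.linalg.matrix_power(S, n), 2)
                 for n in range(1, 30)]
strongly_stable = spec_norm <= 1.0
```

Here S† S = 0.34 I, so every singular value equals sqrt(0.34) < 1 and the iterates decay, the behavior sought for convergent difference schemes.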
Select-divide-and-conquer method for large-scale configuration interaction
NASA Astrophysics Data System (ADS)
Bunge, Carlos F.; Carbó-Dorca, Ramon
2006-07-01
A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0,S1,S2,…,SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0≡{T0egy,T0etc.}; the CI coefficients in S0 remain always free to vary. S1 accommodates Ks with attributes above T1⩽T0. An eigenproblem of dimension d0+d1 for S0+S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj(j ⩾2) chosen so that corresponding CI matrices fit random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited one) is always above the corresponding exact eigenvalue in S. Threshold values {Tj;j=0,1,2,…,R} regulate accuracy; for large-dimensional S, high accuracy requires S0+S1 to be solved outside RAM. From there on, however, usually a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One μhartree accuracy is achieved for an eigenproblem of order 24×106, involving 1.2×1012 nonzero matrix elements, and 8.4×109 Slater determinants.
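The contraction step can be checked on a toy dense eigenproblem; the dimensions and random Hamiltonian below are illustrative assumptions, not a real CI matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
d0, d1 = 5, 4
n = d0 + d1
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                         # stand-in for the S0+S1 CI matrix

w, V = np.linalg.eigh(H)
v = V[:, 0]                               # converged lowest eigenvector
c = v[d0:] / np.linalg.norm(v[d0:])       # frozen ratios of the last d1 coefficients

# Contract the last d1 rows/columns into a single row/column spanned by c:
Hc = np.zeros((d0 + 1, d0 + 1))
Hc[:d0, :d0] = H[:d0, :d0]
Hc[:d0, d0] = H[:d0, d0:] @ c
Hc[d0, :d0] = Hc[:d0, d0]
Hc[d0, d0] = c @ H[d0:, d0:] @ c

lowest_contracted = np.linalg.eigvalsh(Hc)[0]
```

Because the converged eigenvector lies inside the contracted subspace, the (d0+1)-dimensional problem reproduces the lowest eigenvalue exactly while freezing the internal ratios of the last d1 coefficients, which is the mechanism that keeps the successive eigenproblems small enough for RAM.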
Innovative compact focal plane array for wide field vis and ir orbiting telescopes
NASA Astrophysics Data System (ADS)
Hugot, Emmanuel; Vives, Sébastien; Ferrari, Marc; Gaeremynck, Yann; Jahn, Wilfried
2017-11-01
The future generation of high-angular-resolution space telescopes will require breakthrough technologies to combine large diameters and large focal plane arrays with compactness and lightweight mirrors and structures. Considering the volume allocated by medium-size launchers, short focal lengths are mandatory, implying complex optical relays to obtain diffraction-limited images on large focal planes. In this paper we present preliminary studies to obtain compact focal plane arrays (FPAs) for Earth observation from low Earth orbit at high angular resolution. Based on the principle of image slicers, we present an optical concept to rearrange a 1D FPA into a 2D FPA, allowing the use of 2D detector matrices. This solution is particularly attractive for IR imaging requiring a cryostat, whose volume could be considerably reduced, as could the complexity of the relay optics. Enabling the use of 2D matrices for such an application offers new possibilities, and recent developments on curved FPAs allow optimization without concern for field curvature. This innovative approach also reduces the complexity of the telescope optical combination, specifically for fast telescopes. This paper describes the concept and optical design of an F/5, 1.5 m telescope equipped with such an FPA, its performance, and the impact on the system, with a comparison to an equivalent 1.5 m wide-field Korsch telescope.
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy, where the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
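The independent per-block orthogonal factorization at the heart of such schemes can be sketched for a purely block-diagonal problem (the coupling columns of the true block angular form are omitted for brevity; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
blocks = [rng.standard_normal((40, 5)) for _ in range(3)]   # diagonal blocks
rhs = [rng.standard_normal(40) for _ in range(3)]

x_parts = []
for A_i, b_i in zip(blocks, rhs):        # each iteration is independent -> parallel
    Q, R = np.linalg.qr(A_i)             # orthogonal factorization of one block
    x_parts.append(np.linalg.solve(R, Q.T @ b_i))
x = np.concatenate(x_parts)

# Same answer as solving the assembled block-diagonal problem at once:
A_full = np.zeros((120, 15))
for k, A_i in enumerate(blocks):
    A_full[40 * k:40 * (k + 1), 5 * k:5 * (k + 1)] = A_i
x_full, *_ = np.linalg.lstsq(A_full, np.concatenate(rhs), rcond=None)
```

In the full block angular case, these independent QR steps are followed by a reduced coupled solve for the shared unknowns, which is the backsubstitution phase mentioned above.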
Hassanzadeh, Iman; Tabatabaei, Mohammad
2017-03-28
In this paper, controllability and observability matrices for pseudo upper or lower triangular multi-order fractional systems are derived. It is demonstrated that these systems are controllable and observable if and only if their controllability and observability matrices are full rank; in other words, the rank of these matrices should equal the inner dimension of their corresponding state-space realizations. To reduce the computational complexity, these matrices are converted to simplified matrices with smaller dimensions. Numerical examples are provided to show the usefulness of the mentioned matrices for controllability and observability analysis of such multi-order fractional systems. These examples clarify that the duality concept does not necessarily hold for these special systems. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
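For intuition, the classical integer-order Kalman rank test that the fractional condition parallels can be sketched as follows (the matrices are illustrative):

```python
import numpy as np

# Kalman test: (A, B) is controllable iff [B, AB, ..., A^(n-1)B] has rank n.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
controllable = np.linalg.matrix_rank(ctrb) == n
```

The multi-order fractional case replaces this matrix with the controllability matrix derived in the paper, but the full-rank criterion has the same form.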
On the ground state energy of the delta-function Fermi gas
NASA Astrophysics Data System (ADS)
Tracy, Craig A.; Widom, Harold
2016-10-01
The weak coupling asymptotics to order γ of the ground state energy of the delta-function Fermi gas, derived heuristically in the literature, is here made rigorous. Further asymptotics are in principle computable. The analysis applies to the Gaudin integral equation, a method previously used by one of the authors for the asymptotics of large Toeplitz matrices.
Performance of internal covariance estimators for cosmic shear correlation functions
Friedrich, O.; Seitz, S.; Eifler, T. F.; ...
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the Ω_m-σ_8 plane as measured with internally estimated covariance matrices is on average ≳85% of the volume derived from the true covariance matrix. The uncertainty on the parameter combination Σ_8 ~ σ_8 Ω_m^0.5 derived from internally estimated covariances is ~90% of the true uncertainty.
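A delete-one jackknife covariance estimate can be sketched as follows; synthetic Gaussian data stand in for the measured shear correlation functions:

```python
import numpy as np

rng = np.random.default_rng(4)
samples = rng.standard_normal((200, 3))   # 200 realizations of a 3-bin data vector

n = samples.shape[0]
mean = samples.mean(axis=0)
loo_means = (n * mean - samples) / (n - 1)      # delete-one resampled means
diff = loo_means - loo_means.mean(axis=0)
cov_jack = (n - 1) / n * diff.T @ diff          # jackknife covariance estimate
```

For the sample mean this reproduces the familiar S/n covariance exactly; for correlated survey patches, as in the paper, the bias-variance trade-off of such internal estimators is what must be tested against simulations.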
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple-rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
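The rank-1 Sherman-Morrison baseline that the delayed scheme generalizes can be sketched as follows (the matrix sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 6
M = rng.standard_normal((N, N))           # stand-in for the Slater matrix
Minv = np.linalg.inv(M)
det_old = np.linalg.det(M)

k = 2                                     # electron (row) being moved
new_row = rng.standard_normal(N)          # proposed new orbital values

# Determinant ratio needed for the acceptance probability, O(N) work:
ratio = new_row @ Minv[:, k]

# Accept the move: rank-1 Sherman-Morrison update of the inverse, O(N^2):
u = new_row - M[k]
Minv = Minv - np.outer(Minv[:, k], u @ Minv) / (1.0 + u @ Minv[:, k])
M[k] = new_row
```

The delayed scheme accumulates K such accepted rows and applies them en bloc with matrix-matrix products, trading these rank-1 kernels for higher arithmetic intensity without changing the sampled distribution.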