Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, and information retrieval. Among various multi-label learning methods, matrix completion has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. Under the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite their success, most matrix completion based approaches ignore the smoothness assumption on unlabeled data, i.e., that neighboring instances should share a similar set of labels, and may therefore underexploit the intrinsic structure of the data. In addition, solving the matrix completion problem can be inefficient. To this end, we propose to solve the multi-label learning problem efficiently with an enhanced matrix completion model with manifold regularization, where a graph Laplacian term enforces label smoothness. To speed up the convergence of our model, we develop an efficient iterative algorithm that solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
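In ADMM-based nuclear norm solvers of this kind, the subproblem for the low-rank term reduces to singular-value soft-thresholding. A minimal generic sketch of that step (not the authors' implementation; the matrix sizes and threshold are illustrative):

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau*||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Noisy rank-1 matrix: shrinkage removes the small noise singular values,
# leaving a low-rank estimate -- the core step of each ADMM iteration.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(15))
X = svt(M + 0.01 * rng.standard_normal((20, 15)), tau=0.5)
print(np.linalg.matrix_rank(X))  # → 1
```

The graph Laplacian regularizer would enter the same ADMM loop as an additional quadratic term; this sketch shows only the shared nuclear-norm step.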
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu
2014-01-01
This paper studies the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision, and bioinformatics. It mainly involves matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which we define as the problem of matrix completion from corrupted samplings and model as a convex optimization problem that minimizes a combination of the nuclear norm and the ℓ1-norm. We also put forward an effective augmented Lagrange multiplier algorithm to solve the problem exactly. For mixed Gaussian-impulse noise removal, we regard it as matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is dominant when images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method significantly outperforms the traditional methods not only in the simultaneous removal of Gaussian and impulse noise and the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image. PMID:25248103
Streaming PCA with many missing entries.
DOT National Transportation Integrated Search
2015-12-01
This paper considers the problem of matrix completion when some number of the columns are completely and arbitrarily corrupted, potentially by a malicious adversary. It is well-known that standard algorithms for matrix completion can return arbit...
Tensor completion for estimating missing values in visual data.
Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping
2013-01-01
In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small number of samples and can propagate structure to fill larger missing regions. Our methodology is built on recent studies of matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then building a working algorithm. First, we propose a definition of the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement; it employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution. The FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem. The HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms, and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches.
The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC; between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution and the latter is preferred when a high-accuracy solution is desired.
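The tensor trace norm in this line of work is built from mode-n matrix unfoldings, whose nuclear norms are then combined. A small illustrative sketch of the unfolding operation in generic numpy (not the authors' code):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n fibers of T become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.arange(24, dtype=float).reshape(2, 3, 4)
# A tensor trace norm of this kind is a weighted sum of the nuclear norms
# of the unfoldings along each mode.
norms = [np.linalg.norm(unfold(T, k), ord='nuc') for k in range(3)]
print([unfold(T, k).shape for k in range(3)])  # → [(2, 12), (3, 8), (4, 6)]
```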
Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT
Nguyen, Thu L. N.; Shin, Yoan
2016-01-01
Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue in evaluating the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-dimensional Euclidean distance matrix completion problem with known nodes. The task is to find the sensor locations by recovering the missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxed optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained by our scheme has lower complexity and can perform better when used as an initial guess for the iterative local search of other, higher-precision localization schemes. Simulation results show the effectiveness of our approach. PMID:27213378
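Once the squared-distance matrix has been completed, node coordinates follow from classical multidimensional scaling: double-center the matrix and take the top eigenpairs. A generic sketch on a hypothetical, fully observed 6-node layout (illustrative only; in the scheme above most entries of the matrix would first be recovered by the completion step):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.uniform(0, 10, size=(6, 2))                   # 6 nodes in 2-D
D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # squared distances

n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
G = -0.5 * J @ D2 @ J                                 # Gram matrix of centered coords
w, V = np.linalg.eigh(G)                              # eigenvalues ascending
coords = V[:, -2:] * np.sqrt(w[-2:])                  # top-2 eigenpairs -> 2-D embedding

# The embedding reproduces the distances (up to rotation/translation):
D2_rec = ((coords[:, None] - coords[None, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_rec, atol=1e-8))  # → True
```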
Wang, An; Cao, Yang; Shi, Quan
2018-01-01
In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-06-01
The present study examines 5- to 8-year-old children's relational reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's relational thinking. Developmental differences in problem-solving performance were largely due to deficiencies in engaging the processing strategies that are hypothesized to facilitate problem-solving performance. Feedback designed to highlight the relations between objects within the matrix improved 5- and 6-year-olds' problem-solving performance, as well as their use of appropriate processing strategies. Furthermore, children who engaged the processing strategies early on in the task were more likely to solve subsequent problems in later phases. These findings suggest that encoding relations, integrating rules, completing the model, and generalizing strategies across tasks are critical processing components that underlie relational thinking. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can make an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.
Fast and accurate matrix completion via truncated nuclear norm regularization.
Hu, Yao; Zhang, Debing; Ye, Jieping; Li, Xuelong; He, Xiaofei
2013-09-01
Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of a matrix by the truncated nuclear norm, defined as the nuclear norm minus the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the truncated nuclear norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
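The truncated nuclear norm itself is straightforward to compute from a singular value decomposition; a minimal sketch (illustrative, not the paper's solver):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Nuclear norm minus the r largest singular values, i.e. the sum of the tail."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return s[r:].sum()

X = np.diag([5.0, 3.0, 1.0, 0.5])
print(truncated_nuclear_norm(X, 2))  # → 1.5  (only the small tail is penalized)
```

Because only the tail singular values are penalized, minimizing this quantity leaves the dominant r directions untouched, which is the intuition behind the better rank approximation claimed above.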
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation.
Singular-value decomposition representations are not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments on the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then we introduce a tractable relaxation of the rank function and obtain a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
A Note on Alternating Minimization Algorithm for the Matrix Completion Problem
Gamarnik, David; Misra, Sidhant
2016-06-06
Here, we consider the problem of reconstructing a low-rank matrix from a subset of its entries and analyze two variants of the so-called alternating minimization algorithm, which has been proposed in the past. We establish that when the underlying matrix has rank one, has positive bounded entries, and the graph underlying the revealed entries has diameter which is logarithmic in the size of the matrix, both algorithms succeed in reconstructing the matrix approximately in polynomial time starting from an arbitrary initialization. We further provide simulation results which suggest that the second variant, which is based on message passing type updates, performs significantly better.
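For the rank-one setting analyzed here, the least-squares variant of alternating minimization is a few lines of numpy. A sketch under illustrative dimensions and a 50% sampling rate (not the paper's exact experimental setup):

```python
import numpy as np

rng = np.random.default_rng(2)
u_true = rng.uniform(1, 2, 30)            # positive bounded entries, as in the rank-one setting
v_true = rng.uniform(1, 2, 30)
M = np.outer(u_true, v_true)
mask = rng.random(M.shape) < 0.5          # revealed entries

u, v = np.ones(30), np.ones(30)           # arbitrary initialization
for _ in range(50):
    # Exact least-squares update of u with v fixed, then vice versa,
    # using only the revealed entries.
    u = (mask * M * v).sum(1) / (mask * v**2).sum(1)
    v = (mask * M * u[:, None]).sum(0) / (mask * u[:, None]**2).sum(0)

err = np.abs(np.outer(u, v) - M).max() / M.max()
print(err)
```

With this density of revealed entries the bipartite observation graph is well connected, so the iterates recover the full rank-one matrix, consistent with the reconstruction guarantee stated above.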
Emergency Entry with One Control Torque: Non-Axisymmetric Diagonal Inertia Matrix
NASA Technical Reports Server (NTRS)
Llama, Eduardo Garcia
2011-01-01
In another work, a method was presented, primarily conceived as an emergency backup system, that addressed the problem of a space capsule that needed to execute a safe atmospheric entry from an arbitrary initial attitude and angular rate in the absence of nominal control capability. The proposed concept permits the arrest of a tumbling motion, orientation to the heat-shield-forward position, and the attainment of a ballistic roll rate of a rigid spacecraft with the use of control in one axis only. To show the feasibility of this concept, the technique of single-input single-output (SISO) feedback linearization using the Lie derivative method was employed and the problem was solved for different numbers of jets and for different configurations of the inertia matrix: the axisymmetric inertia matrix (I_xx > I_yy = I_zz), a partially complete inertia matrix with I_xx > I_yy > I_zz, I_xz ≠ 0, and a realistic complete inertia matrix with I_xx > I_yy > I_zz, I_ij ≠ 0. The closed-loop stability of the proposed nonlinear control on the total angle of attack, Theta, was analyzed through the zero dynamics of the internal dynamics for the case where the inertia matrix is axisymmetric (I_xx > I_yy = I_zz). This note focuses on the problem of the diagonal non-axisymmetric inertia matrix (I_xx > I_yy > I_zz), which is halfway between the axisymmetric and the partially complete inertia matrices. In this note, the control law for this type of inertia matrix will be determined and its closed-loop stability will be analyzed using the same methods that were used in the other work. In particular, it will be proven that the control system is stable in closed loop when the actuators provide only a roll torque.
Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.
Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo
2016-08-26
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy through the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.
Protein structure estimation from NMR data by matrix completion.
Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing
2017-09-01
Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
METCAN demonstration manual, version 1.0
NASA Technical Reports Server (NTRS)
Lee, H.-J.; Murthy, P. L. N.
1992-01-01
The various features of the Metal Matrix Composite Analyzer (METCAN) computer program to simulate the high temperature nonlinear behavior of continuous fiber reinforced metal matrix composites are demonstrated. Different problems are used to demonstrate various capabilities of METCAN for both static and cyclic analyses. A complete description of the METCAN output file is also included to help interpret results.
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and iteratively reweighted nuclear norm (IRNN) methods.
Post-Doctoral Fellowship for Merton S. Krause. Final Report.
ERIC Educational Resources Information Center
Jackson, Philip W.
The final quarter of Krause's fellowship year was spent in completing his interviews with political socialization researchers in the eastern United States and his work on methodological problems. Krause also completed a long essay on the nature and implications of the "matrix perspective" for research planning, pursued his study of measurement…
Elementary solutions of coupled model equations in the kinetic theory of gases
NASA Technical Reports Server (NTRS)
Kriese, J. T.; Siewert, C. E.; Chang, T. S.
1974-01-01
The method of elementary solutions is employed to solve two coupled integrodifferential equations sufficient for determining temperature-density effects in a linearized BGK model in the kinetic theory of gases. Full-range completeness and orthogonality theorems are proved for the developed normal modes and the infinite-medium Green's function is constructed as an illustration of the full-range formalism. The appropriate homogeneous matrix Riemann problem is discussed, and half-range completeness and orthogonality theorems are proved for a certain subset of the normal modes. The required existence and uniqueness theorems relevant to the H matrix, basic to the half-range analysis, are proved, and an accurate and efficient computational method is discussed. The half-space temperature-slip problem is solved analytically, and a highly accurate value of the temperature-slip coefficient is reported.
Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
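The matrix-completion workload analyzed above runs SGD on a low-rank factorization, one observed entry at a time; in Hogwild!-style execution these same updates run concurrently without locks. A plain sequential sketch of the update rule (illustrative dimensions and step size, not the paper's experimental configuration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 40, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # exact rank-3 target
obs = np.argwhere(rng.random((n, n)) < 0.6)                    # revealed entry indices

L = 0.1 * rng.standard_normal((n, r))
R = 0.1 * rng.standard_normal((n, r))
lr = 0.02
for _ in range(200):
    rng.shuffle(obs)                         # shuffle rows (entry order) in place
    for i, j in obs:
        e = M[i, j] - L[i] @ R[j]            # residual at a single observed entry
        # One SGD step touches only row i of L and row j of R -- the sparsity
        # that lock-free asynchronous execution exploits.
        L[i], R[j] = L[i] + lr * e * R[j], R[j] + lr * e * L[i]

rmse = np.sqrt(np.mean([(M[i, j] - L[i] @ R[j]) ** 2 for i, j in obs]))
print(rmse)
```

Each step writes to only two factor rows, so concurrent workers rarely collide on sparse problems; that observation is what the martingale-based analysis above makes precise.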
Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems
NASA Technical Reports Server (NTRS)
Murthy, V. R.; Shultz, Louis A.
1994-01-01
The goal of this research is to develop the transfer matrix method to treat nonlinear autonomous boundary value problems with multiple branches. The application is the complete nonlinear aeroelastic analysis of multiple-branched rotor blades. Once the development is complete, it can be incorporated into the existing transfer matrix analyses. There are several difficulties to be overcome in reaching this objective. The conventional transfer matrix method is limited in that it is applicable only to linear branch chain-like structures, but consideration of multiple branch modeling is important for bearingless rotors. Also, hingeless and bearingless rotor blade dynamic characteristics (particularly their aeroelasticity problems) are inherently nonlinear. The nonlinear equations of motion and the multiple-branched boundary value problem are treated together using a direct transfer matrix method. First, the formulation is applied to a nonlinear single-branch blade to validate the nonlinear portion of the formulation. The nonlinear system of equations is iteratively solved using a form of Newton-Raphson iteration scheme developed for differential equations of continuous systems. The formulation is then applied to determine the nonlinear steady state trim and aeroelastic stability of a rotor blade in hover with two branches at the root. A comprehensive computer program is developed and is used to obtain numerical results for the (1) free vibration, (2) nonlinearly deformed steady state, (3) free vibration about the nonlinearly deformed steady state, and (4) aeroelastic stability tasks. The numerical results obtained by the present method agree with results from other methods.
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
ERIC Educational Resources Information Center
Dinsmore, Daniel L.; Baggetta, Peter; Doyle, Stephanie; Loughlin, Sandra M.
2014-01-01
The purpose of this study was to demonstrate that transfer ability (positive and negative) varies depending on the nature of the problems, using the knowledge transfer matrix, as well as being dependent on the individual differences of the learner. A total of 178 participants from the United States and New Zealand completed measures of prior…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, Seth, E-mail: seth.olsen@uq.edu.au
2015-01-28
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint.
The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler’s hydrol blue. The diabatic CASVB representation is shown to vary weakly for “temperatures” corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.
NASA Astrophysics Data System (ADS)
Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.
2017-07-01
Calculation of the matrix-vector multiplication in the real-world problems often involves large matrix with arbitrary size. Therefore, parallelization is needed to speed up the calculation process that usually takes a long time. Graph partitioning techniques that have been discussed in the previous studies cannot be used to complete the parallelized calculation of matrix-vector multiplication with arbitrary size. This is due to the assumption of graph partitioning techniques that can only solve the square and symmetric matrix. Hypergraph partitioning techniques will overcome the shortcomings of the graph partitioning technique. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented by the GPU (graphics processing unit).
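As background to why row-wise partitioning parallelizes well, here is a minimal sequential sparse matrix-vector product in CSR form; each row's dot product is independent of the others, which is exactly the structure that graph/hypergraph partitioning distributes across processors. This sketch is illustrative only and is unrelated to the authors' CUDA implementation.

```python
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a sparse matrix A stored in CSR form.

    indptr[i]:indptr[i+1] delimits row i's nonzeros; each row is an
    independent dot product, so rows can be assigned to different workers."""
    data = np.asarray(data, dtype=float)
    x = np.asarray(x, dtype=float)
    indices = np.asarray(indices)
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

# matrix [[1, 0, 2], [0, 3, 0]] applied to [1, 1, 1]
y = csr_matvec([0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```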
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches involve an identification technique to determine structural characteristics from measurements of the structure's response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach to model correlation and damage location was initiated.
Linear solver performance in elastoplastic problem solution on GPU cluster
NASA Astrophysics Data System (ADS)
Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.
2017-12-01
Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large scale systems need to be solved for each problem. When dealing with fine computational meshes, such as in the simulations of three-dimensional metal matrix composite microvolume deformation, tens and hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The method convergence highly depends on the operator spectrum of a problem stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable for different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
a Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud
NASA Astrophysics Data System (ADS)
Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.
2018-04-01
Aimed at the global registration problem of single-closed-ring multi-station point clouds, a formula for the error of the rotation matrix was constructed according to the definition of error. A global registration algorithm for multi-station point clouds was then derived to minimize this rotation-matrix error, and fast-computing formulas for the transformation matrix were given, together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which could effectively complete the global registration of the point cloud.
Learning to rank image tags with limited training examples.
Songhe Feng; Zheyun Feng; Rong Jin
2015-04-01
With an increasing number of images that are available in social media, image annotation has emerged as an important research topic due to its application in image matching and retrieval. Most studies cast image annotation into a multilabel classification problem. The main shortcoming of this approach is that it requires a large number of training images with clean and complete annotations in order to learn a reliable model for tag prediction. We address this limitation by developing a novel approach that combines the strength of tag ranking with the power of matrix recovery. Instead of having to make a binary decision for each tag, our approach ranks tags in the descending order of their relevance to the given image, significantly simplifying the problem. In addition, the proposed method aggregates the prediction models for different tags into a matrix, and casts tag ranking into a matrix recovery problem. It introduces the matrix trace norm to explicitly control the model complexity, so that a reliable prediction model can be learned for tag ranking even when the tag space is large and the number of training images is limited. Experiments on multiple well-known image data sets demonstrate the effectiveness of the proposed framework for tag ranking compared with the state-of-the-art approaches for image annotation and tag ranking.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing
2012-02-01
The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
Approximate method of variational Bayesian matrix factorization/completion with sparse prior
NASA Astrophysics Data System (ADS)
Kawasumi, Ryota; Takeda, Koujin
2018-05-01
We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume the prior of a sparse matrix is a Laplace distribution by taking matrix sparsity into consideration. Then we use several approximations for the derivation of a matrix factorization/completion solution. By our solution, we also numerically evaluate the performance of a sparse matrix reconstruction in matrix factorization, and completion of a missing matrix element in matrix completion.
Galleske, I; Castellanos, J
2002-05-01
This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, can be calculated by analyzing the local environment of each training pattern; their combination forms the covariance matrix of each training pattern. This automation has two advantages: first, it frees the neural network designer from specifying the complete covariance matrix, and second, it results in a network with better generalization ability than the original model. A variation of the famous two-spiral problem and real-world examples from the UCI Machine Learning Repository show not only a classification rate better than that of the original probabilistic neural network but also that this model can outperform other well-known classification techniques.
Langenbucher, Frieder
2005-01-01
A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and the initial condition in which compartment(s) the drug is present at the beginning. The generalized solution is the time profiles of drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, finally the full time profiles for a specified range of time values.
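The polyexponential solution described above can be sketched in a few lines of numpy rather than a spreadsheet: diagonalizing the rate matrix gives the exponents, and the initial condition fixes the coefficients. The two-compartment rate matrix below is a hypothetical example, not taken from the paper.

```python
import numpy as np

def compartment_profiles(K, x0, times):
    """Amounts in each compartment over time for dx/dt = K x.

    Eigendecomposition K = V diag(w) V^{-1} yields the polyexponential
    solution x(t) = V (c * exp(w t)) with coefficients c = V^{-1} x0."""
    w, V = np.linalg.eig(K)
    c = np.linalg.solve(V, x0)  # coefficients set by the initial condition
    return np.real(np.array([V @ (c * np.exp(w * t)) for t in times]))

# hypothetical two-compartment model: drug leaves compartment 1 at rate k
k = 0.5
K = np.array([[-k, 0.0],
              [ k, 0.0]])  # everything eventually accumulates in compartment 2
profiles = compartment_profiles(K, np.array([1.0, 0.0]), [0.0, 1.0, 10.0])
```

Each row of `profiles` is the vector of compartment amounts at one time point; total mass is conserved because the columns of K sum to zero.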
NASA Astrophysics Data System (ADS)
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE 1-4I PROBLEM. The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the 1-4I problem. Both classes have equal probability of occurrence and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
NASA Astrophysics Data System (ADS)
Pezelier, Baptiste
2018-02-01
In this proceeding, we recall the notion of quantum integrable systems on a lattice and then introduce Sklyanin’s Separation of Variables method. We sum up the main results for the transfer matrix spectral problem for the cyclic representations of the trigonometric 6-vertex reflection algebra associated to the Bazanov-Stroganov Lax operator. These results apply as well to the spectral analysis of the lattice sine-Gordon model with open boundary conditions. The transfer matrix spectrum (both eigenvalues and eigenstates) is completely characterized in terms of the set of solutions to a discrete system of polynomial equations. We state an equivalent characterization as the set of solutions to a Baxter-like T-Q functional equation, allowing us to rewrite the transfer matrix eigenstates in an algebraic Bethe ansatz form.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
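A minimal numpy sketch of the Soft-Impute iteration summarized above: keep the observed entries fixed, fill the missing ones from the current estimate, and re-estimate via a soft-thresholded SVD. Parameter values and names are illustrative, not the authors' implementation (which additionally exploits low-rank SVD structure and warm starts).

```python
import numpy as np

def soft_impute(x_obs, mask, lam, n_iters=300):
    """Nuclear-norm-regularized matrix completion by iterated soft-thresholded SVD.

    x_obs : matrix with arbitrary values where mask is False (they are ignored)
    mask  : boolean array, True at observed entries
    lam   : soft-threshold applied to the singular values"""
    z = np.where(mask, x_obs, 0.0)
    for _ in range(n_iters):
        filled = np.where(mask, x_obs, z)      # observed entries stay fixed
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # soft-threshold singular values
        z = (u * s) @ vt                       # low-rank update
    return z
```

Larger `lam` yields lower-rank (and more biased) solutions; sweeping `lam` from large to small, reusing the previous solution as the starting point, traces out the regularization path.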
A new pre-loaded beam geometric stiffness matrix with full rigid body capabilities
NASA Astrophysics Data System (ADS)
Bosela, P. A.; Fertis, D. G.; Shaker, F. J.
1992-09-01
Space structures, such as the Space Station solar arrays, must be extremely light-weight, flexible structures. Accurate prediction of the natural frequencies and mode shapes is essential for determining the structural adequacy of components, and designing a controls system. The tension pre-load in the 'blanket' of photovoltaic solar collectors, and the free/free boundary conditions of a structure in space, causes serious reservations on the use of standard finite element techniques of solution. In particular, a phenomenon known as 'grounding', or false stiffening, of the stiffness matrix occurs during rigid body rotation. The authors have previously shown that the grounding phenomenon is caused by a lack of rigid body rotational capability, and is typical in beam geometric stiffness matrices formulated by others, including those which contain higher order effects. The cause of the problem was identified as the force imbalance inherent in the formulations. In this paper, the authors develop a beam geometric stiffness matrix for a directed force problem, and show that the resultant global stiffness matrix contains complete rigid body mode capabilities, and performs very well in the diagonalization methodology customarily used in dynamic analysis.
The use of an analytic Hamiltonian matrix for solving the hydrogenic atom
NASA Astrophysics Data System (ADS)
Bhatti, Mohammad
2001-10-01
The non-relativistic Hamiltonian corresponding to the Schrödinger equation is converted into an analytic Hamiltonian matrix using kth-order B-spline functions. The Galerkin method is applied to the solution of the Schrödinger equation for bound states of hydrogen-like systems. The program Mathematica is used to create analytic matrix elements, exact integration is performed over the knot sequence of the B-splines, and the resulting generalized eigenvalue problem is solved on a specified numerical grid. The complete basis set and the energy spectrum are obtained for the Coulomb potential for hydrogenic systems with Z less than 100 with B-splines of order eight. Another application is given to test the Thomas-Reiche-Kuhn sum rule for the hydrogenic systems.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, Shiva Prasad; Pan, Feng
In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes in the residual matrix the sum of the row values, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study the computational complexity of this problem. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
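For very small instances the objective can be checked by exhaustive search over column subsets. The sketch below is illustrative only; it is not the paper's (n - k)-approximation algorithm, and scales exponentially in k.

```python
import numpy as np
from itertools import combinations

def matrix_interdiction_bruteforce(A, k):
    """Exhaustively remove k columns of A to minimize the sum over rows
    of the row maximum in the residual matrix (small instances only)."""
    n_cols = A.shape[1]
    best_val, best_cols = float("inf"), None
    for cols in combinations(range(n_cols), k):
        keep = [j for j in range(n_cols) if j not in cols]
        val = A[:, keep].max(axis=1).sum()  # sum of row values in the residual
        if val < best_val:
            best_val, best_cols = val, cols
    return best_val, best_cols
```

For example, with two rows each dominated by a different column, removing either dominant column still leaves the other row's large entry, which is what makes the problem combinatorially hard.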
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2018-02-01
Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exists considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
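A toy numpy sketch of the idea behind nonlinear latent-variable completion, shrunk to a single hidden layer with hand-coded gradients. The paper's DMF uses a deeper network and its own optimization; all names, sizes, and learning rates here are illustrative assumptions.

```python
import numpy as np

def nonlinear_complete(x, mask, d=2, h=16, lr=0.01, n_iters=2000, seed=0):
    """Jointly optimize latent codes z and decoder weights so that
    tanh(z @ w1) @ w2 matches the observed entries of x; the trained
    decoder then fills in the missing entries."""
    rng = np.random.default_rng(seed)
    n, m = x.shape
    z = 0.1 * rng.standard_normal((n, d))    # one latent code per row
    w1 = 0.1 * rng.standard_normal((d, h))
    w2 = 0.1 * rng.standard_normal((h, m))
    for _ in range(n_iters):
        hid = np.tanh(z @ w1)
        r = mask * (hid @ w2 - x)            # error on observed entries only
        g2 = hid.T @ r                       # gradient w.r.t. w2
        ga = (r @ w2.T) * (1.0 - hid ** 2)   # backprop through tanh
        g1 = z.T @ ga                        # gradient w.r.t. w1
        gz = ga @ w1.T                       # gradient w.r.t. latent codes
        w2 -= lr * g2
        w1 -= lr * g1
        z -= lr * gz
    return np.tanh(z @ w1) @ w2              # full reconstruction
```

The key design point mirrored from the abstract: the latent inputs are optimized together with the network parameters, so recovery of missing entries is just a forward pass through the learned decoder.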
Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer
Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo
2014-01-01
A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The said model conforms to an ellipsoid restriction, the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by a different matrix decomposition method is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (angles between the geomagnetic field and gravitational field are fixed) to estimate R. The Tikhonov method is adopted to solve the problem that rounding errors or other errors may seriously affect the calculation results of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, the simulation experiment indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by a constant intersection angle method, and the signal-to-noise ratio is 50 dB. The actual experiment exhibits that the heading error is further corrected from ±0.8° calibrated by the classical ellipsoid fitting to ±0.3° calibrated by a constant intersection angle method. PMID:24831110
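As a simplified illustration of the fitting step, the sketch below performs a least-squares sphere fit (hard-iron offset and scalar field magnitude only). It is not the paper's full 12-parameter ellipsoid model or its rotation-matrix estimation; it only shows how calibration parameters fall out of a linear least-squares system.

```python
import numpy as np

def sphere_fit(samples):
    """Fit center c and radius r to points satisfying ||x - c||^2 = r^2.

    Expanding gives the linear system  2 c·x + (r^2 - ||c||^2) = ||x||^2,
    solved here by least squares. Rows of `samples` are 3D measurements."""
    X = np.asarray(samples, dtype=float)
    A = np.hstack([2.0 * X, np.ones((len(X), 1))])
    b = (X ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]                                  # hard-iron offset
    radius = np.sqrt(sol[3] + center @ center)        # field magnitude
    return center, radius
```

A full soft-iron calibration would replace the sphere with a general ellipsoid, which is where the rotation-matrix ambiguity discussed in the abstract arises.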
Kalinowski, Jarosław A.; Makal, Anna; Coppens, Philip
2011-01-01
A new method for determination of the orientation matrix of Laue X-ray data is presented. The method is based on matching of the experimental patterns of central reciprocal lattice rows projected on a unit sphere centered on the origin of the reciprocal lattice with the corresponding pattern of a monochromatic data set on the same material. This technique is applied to the complete data set and thus eliminates problems often encountered when single frames with a limited number of peaks are to be used for orientation matrix determination. Application of the method to a series of Laue data sets on organometallic crystals is described. The corresponding program is available under a Mozilla Public License-like open-source license. PMID:22199400
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
High-dimensional statistical inference: From vector to matrix
NASA Astrophysics Data System (ADS)
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, or δ_{tk}^A < √((t-1)/t) for any given constant t ≥ 4/3 guarantees the exact recovery of all k-sparse signals in the noiseless case through constrained ℓ1 minimization, and similarly in affine rank minimization δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, or δ_{tr}^M < √((t-1)/t) ensures the exact reconstruction of all matrices with rank at most r in the noiseless case via constrained nuclear norm minimization. Moreover, for any ε > 0, none of the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t-1)/t) + ε is sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t-1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t-1)/t) are also shown to be sufficient respectively for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications to other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite sample under a variety of configurations. 
The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
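The structured-missingness setting has a clean algebraic core worth sketching: when a subset of full rows and columns of an exactly rank-r matrix is observed and the intersection block A itself has rank r, the unobserved block is determined as D = C A^+ B. A toy illustration with synthetic data (the published SMC method additionally handles approximate low rank and noise):

```python
import numpy as np

rng = np.random.default_rng(1)
r = 2
# A rank-r matrix whose first 4 rows and first 5 columns are observed.
M = rng.standard_normal((10, r)) @ rng.standard_normal((r, 12))
A, B = M[:4, :5], M[:4, 5:]        # observed rows
C, D_true = M[4:, :5], M[4:, 5:]   # C observed, D missing

# For an exactly rank-r matrix with rank(A) = r, the missing block
# is recovered exactly by D = C @ pinv(A) @ B.
D_hat = C @ np.linalg.pinv(A) @ B
print(np.allclose(D_hat, D_true))  # True
```

The identity follows from writing M = UV with U, V of rank r: then C A^+ B = U2 V1 (U1 V1)^+ U1 V2 = U2 V2 = D whenever A inherits the full rank r.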
Sequencing of Dust Filter Production Process Using Design Structure Matrix (DSM)
NASA Astrophysics Data System (ADS)
Sari, R. M.; Matondang, A. R.; Syahputri, K.; Anizar; Siregar, I.; Rizkya, I.; Ursula, C.
2018-01-01
Metal casting companies produce machinery spare parts for manufacturers. One of the products produced is the dust filter, which is used in most palm oil mills. Because the product is so widely used, the company often has problems fulfilling orders for it. One problem is a disordered production process caused by poor job sequencing: important jobs that should be completed first are implemented last, while less important jobs that could be completed later are implemented first. Design Structure Matrix (DSM) is used to analyse and determine priorities in the production process. DSM analysis sorts the production process through dependency sequencing. The resulting dependency sequence shows the process order according to inter-process linkage, considering preceding and succeeding activities. Finally, it identifies the coupled activities: metal smelting, refining, grinding, cutting container castings, removal of metal from the molds, metal casting, coating processes, and manufacture of sand molds.
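The dependency-sequencing step of DSM analysis can be sketched as a partitioning loop: tasks whose predecessors are all scheduled go next, and whatever remains belongs to (or is blocked behind) a coupled cycle, like the coupled casting activities identified above. A minimal Python sketch assuming a binary dependency matrix, not the study's exact procedure:

```python
def dsm_sequence(dsm):
    """Partition a binary DSM (dsm[i][j] = 1 if task i depends on task j).
    Tasks whose dependencies are all already scheduled go next; anything left
    over is part of, or blocked behind, a coupled (cyclic) block that must be
    treated together. A simple sketch of DSM partitioning."""
    n = len(dsm)
    remaining = set(range(n))
    order = []
    while remaining:
        ready = [i for i in remaining
                 if all(dsm[i][j] == 0 or j not in remaining for j in range(n))]
        if not ready:            # no task is free: the rest are mutually coupled
            break
        order.extend(sorted(ready))
        remaining -= set(ready)
    return order, sorted(remaining)
```

For an acyclic DSM the loop returns a complete order; two mutually dependent tasks come back in the coupled set instead.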
Finite element solution for energy conservation using a highly stable explicit integration algorithm
NASA Technical Reports Server (NTRS)
Baker, A. J.; Manhardt, P. D.
1972-01-01
Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.
NASA Astrophysics Data System (ADS)
Polydorides, Nick; Lionheart, William R. B.
2002-12-01
The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional inefficient way of calculating the Jacobian, and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist the development we have developed and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.
Tsuchiyama, Tomoyuki; Katsuhara, Miki; Nakajima, Masahiro
2017-11-17
In the multi-residue analysis of pesticides using GC-MS, the quantitative results are adversely affected by a phenomenon known as the matrix effect. Although the use of matrix-matched standards is considered to be one of the most practical solutions to this problem, complete removal of the matrix effect is difficult in complex food matrices owing to their inconsistency. As a result, residual matrix effects can introduce analytical errors. To compensate for residual matrix effects, we have developed a novel method that employs multiple isotopically labeled internal standards (ILIS). The matrix effects of ILIS and pesticides were evaluated in spiked matrix extracts of various agricultural commodities, and the obtained data were subjected to simple statistical analysis. Based on the similarities between the patterns of variation in the analytical response, a total of 32 isotopically labeled compounds were assigned to 338 pesticides as internal standards. It was found that by utilizing multiple ILIS, residual matrix effects could be effectively compensated. The developed method exhibited superior quantitative performance compared with the common single-internal-standard method. The proposed method is more feasible for regulatory purposes than that using only predetermined correction factors and is considered to be promising for practical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Scarano, Antonio; Barros, Raquel R M; Iezzi, Giovanna; Piattelli, Adriano; Novaes, Arthur B
2009-02-01
The aim of this study was to evaluate clinically, histologically, and ultrastructurally the integration process of the acellular dermal matrix used to increase the band of keratinized tissue while achieving gingival inflammation control. Ten patients exhibiting a mucogingival problem with bands of keratinized tissue
Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei
2017-12-01
As a promising approach to solve the computationally intractable problem, the method based on DNA computing is an emerging research area including mathematics, computer science and molecular biology. The task scheduling problem, as a well-known NP-complete problem, arranges n jobs to m individuals and finds the minimum execution time of last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm to solve the task scheduling problem by basic DNA molecular operations. In turn, we skillfully design flexible length DNA strands to represent elements of the allocation matrix, take appropriate biological experiment operations and get solutions of the task scheduling problem in proper length range with less than O(n²) time complexity. Copyright © 2017. Published by Elsevier B.V.
The Power of Implicit Social Relation in Rating Prediction of Social Recommender Systems
Reafee, Waleed; Salim, Naomie; Khan, Atif
2016-01-01
The explosive growth of social networks in recent times has presented a powerful source of information to be utilized as an extra source for assisting in the social recommendation problems. The social recommendation methods that are based on probabilistic matrix factorization improved the recommendation accuracy and partly solved the cold-start and data sparsity problems. However, these methods only exploited the explicit social relations and almost completely ignored the implicit social relations. In this article, we firstly propose an algorithm to extract the implicit relation in the undirected graphs of social networks by exploiting the link prediction techniques. Furthermore, we propose a new probabilistic matrix factorization method to alleviate the data sparsity problem through incorporating explicit friendship and implicit friendship. We evaluate our proposed approach on two real datasets, Last.Fm and Douban. The experimental results show that our method performs much better than the state-of-the-art approaches, which indicates the importance of incorporating implicit social relations in the recommendation process to address the poor prediction accuracy. PMID:27152663
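The factorization model described above can be sketched in a few lines: alternate gradient steps on user and item factors over the observed ratings, with an extra regularizer pulling the latent vectors of linked users together. The `friends` pairs stand in for either explicit or inferred implicit relations; this is a simplified deterministic sketch, not the paper's probabilistic model:

```python
import numpy as np

def social_mf(R, mask, friends, rank=2, lam=0.1, beta=0.1, lr=0.01, iters=500):
    """Matrix factorization with a social regularizer that pulls friends'
    latent vectors together. `friends` is a list of (i, j) user pairs,
    covering explicit links and implicit links found by link prediction."""
    rng = np.random.default_rng(0)
    n_u, n_i = R.shape
    U = 0.1 * rng.standard_normal((n_u, rank))
    V = 0.1 * rng.standard_normal((n_i, rank))
    for _ in range(iters):
        E = mask * (U @ V.T - R)        # error on observed ratings only
        gU = E @ V + lam * U
        gV = E.T @ U + lam * V
        for i, j in friends:            # smoothness over the social graph
            gU[i] += beta * (U[i] - U[j])
            gU[j] += beta * (U[j] - U[i])
        U -= lr * gU
        V -= lr * gV
    return U, V
```

Predictions for unobserved entries are then read off from `U @ V.T`; the social term is what lets a sparse user borrow signal from neighbors.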
The automated multi-stage substructuring system for NASTRAN
NASA Technical Reports Server (NTRS)
Field, E. I.; Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.
1975-01-01
The substructuring capability developed for eventual installation in Level 16 is now operational in a test version of NASTRAN. Its features are summarized. These include the user-oriented, Case Control type control language, the automated multi-stage matrix processing, the independent direct access data storage facilities, and the static and normal modes solution capabilities. A complete problem analysis sequence is presented with card-by-card description of the user input.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, D.C.
Nuclear Fuel Services sent more than 800 drums of nuclear waste to Oak Ridge National Laboratory, with the majority of the waste packaged into five different waste matrix types. A thorough and complete assay of the waste was performed at both NFS and at ORNL. A detailed comparison of the two assay sets provides valuable insights into problems encountered in typical assay campaigns, particularly as there is, for the most part, excellent agreement between these two campaigns.
An algorithm for the basis of the finite Fourier transform
NASA Technical Reports Server (NTRS)
Santhanam, Thalanayar S.
1995-01-01
The Finite Fourier Transformation matrix (F.F.T.) plays a central role in the formulation of quantum mechanics in a finite dimensional space studied by the author over the past couple of decades. An outstanding problem which still remains open is to find a complete basis for the F.F.T. In this paper we suggest a simple algorithm to find the eigenvectors of the F.F.T.
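Since the unitary finite Fourier transform matrix satisfies F⁴ = I, its eigenvalues are confined to the fourth roots of unity {1, -1, i, -i}; this eigenvalue degeneracy is precisely why a canonical eigenvector basis is hard to pin down. A quick numerical check:

```python
import numpy as np

N = 8
idx = np.arange(N)
# Unitary finite Fourier transform matrix.
F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)

# F^4 = I, so every eigenvalue is a fourth root of unity: +1, -1, +i, -i.
print(np.allclose(np.linalg.matrix_power(F, 4), np.eye(N)))  # True
eigvals = np.linalg.eigvals(F)
print(np.allclose(eigvals ** 4, 1))  # True: all eigenvalues are 4th roots of unity
```

Because the four eigenvalues repeat with multiplicity for N > 4, each eigenspace admits infinitely many orthonormal bases, which is the freedom the algorithm in the paper must resolve.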
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1992-01-01
We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties for the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach should be best applied to problems in which convergence is especially difficult.
Sparse matrix methods research using the CSM testbed software system
NASA Technical Reports Server (NTRS)
Chu, Eleanor; George, J. Alan
1989-01-01
Research is described on sparse matrix techniques for the Computational Structural Mechanics (CSM) Testbed. The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those that are currently available in the CSM Testbed. Thus, one of the first tasks was to become familiar with the structure of the testbed, and to install some or all of the SPARSPAK package in the testbed. A suite of subroutines to extract from the data base the relevant structural and numerical information about the matrix equations was written, and all the demonstration problems distributed with the testbed were successfully solved. These codes were documented, and performance studies comparing the SPARSPAK technology to the methods currently in the testbed were completed. In addition, some preliminary studies were done comparing some recently developed out-of-core techniques with the performance of the testbed processor INV.
NASA Astrophysics Data System (ADS)
Razgulin, A. V.; Sazonova, S. V.
2017-09-01
A novel statement of the Fourier filtering problem based on the use of matrix Fourier filters instead of conventional multiplier filters is considered. The basic properties of the matrix Fourier filtering for the filters in the Hilbert-Schmidt class are established. It is proved that the solutions with a finite energy to the periodic initial boundary value problem for the quasi-linear functional differential diffusion equation with the matrix Fourier filtering Lipschitz continuously depend on the filter. The problem of optimal matrix Fourier filtering is formulated, and its solvability for various classes of matrix Fourier filters is proved. It is proved that the objective functional is differentiable with respect to the matrix Fourier filter, and the convergence of a version of the gradient projection method is also proved.
Inductive matrix completion for predicting gene-disease associations.
Natarajan, Nagarajan; Dhillon, Inderjit S
2014-06-15
Most existing methods for predicting causal disease genes rely on a specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive. Comparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to a one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has a <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that were previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature.
Source code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease. © The Author 2014. Published by Oxford University Press.
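The bilinear model at the heart of inductive matrix completion can be sketched with fully observed synthetic associations: fit W in P ≈ X W Y^T from gene and disease features, then score a disease never seen in training from its feature vector alone. A simplified least-squares sketch (the published method enforces low rank on W and works with partially observed P):

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_dis, fg, fd, r = 50, 20, 6, 5, 3
X = rng.standard_normal((n_genes, fg))   # gene features
Y = rng.standard_normal((n_dis, fd))     # disease features
W_true = rng.standard_normal((fg, r)) @ rng.standard_normal((r, fd))
P = X @ W_true @ Y.T                     # observed gene-disease association scores

# Least-squares fit of the bilinear model P ≈ X W Y^T (ignoring the low-rank
# constraint for simplicity; IMC enforces it by writing W = W1 @ W2.T).
W_hat = np.linalg.pinv(X) @ P @ np.linalg.pinv(Y).T

# Inductive step: score all genes for a *new* disease never seen in training,
# using only its feature vector.
y_new = rng.standard_normal(fd)
scores = X @ W_hat @ y_new
print(np.allclose(W_hat, W_true))  # True: W is recovered when features have full rank
```

The inductive property is visible in the last two lines: nothing about the new disease enters the fit, only its features at query time.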
van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C
1994-01-05
Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others can or cannot be calculated from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate, and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented into a computer program.
(c) 1994 John Wiley & Sons, Inc.
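The role of the redundancy matrix R can be sketched in a few lines of linear algebra. Splitting the conservation relations E x = 0 into measured and unmeasured columns, E_m x_m + E_u x_u = 0, projecting E_m onto the left null space of E_u yields relations among the measured rates alone. The example matrix and rates below are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

# Conservation relations E @ x = 0 over conversion rates x, split into
# measured (x_m) and unmeasured (x_u) rates: E_m @ x_m + E_u @ x_u = 0.
E = np.array([[1.0, -1.0,  2.0, 0.0],
              [0.0,  1.0, -1.0, 1.0]])
measured, unmeasured = [0, 1, 2], [3]
E_m, E_u = E[:, measured], E[:, unmeasured]

# Redundancy matrix: project E_m onto the left null space of E_u.
# Rows of R give balance relations R @ x_m = 0 among the measured rates alone;
# rank(R) counts the independent redundancies usable for balancing and
# gross-error diagnosis.
R = (np.eye(E.shape[0]) - E_u @ np.linalg.pinv(E_u)) @ E_m
print(np.linalg.matrix_rank(R))  # number of independent redundancy relations

# Unmeasured rates are calculable when E_u has full column rank:
x_m = np.array([2.0, 1.0, -0.5])          # hypothetical (already balanced) rates
x_u = -np.linalg.pinv(E_u) @ E_m @ x_m    # back-calculated unmeasured rate
```

With this R, measured rates violating R x_m = 0 signal gross errors, and least-squares balancing subject to R x_m = 0 gives the consistent data set the abstract describes.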
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
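The block angular structure is what makes the factorization parallel: each block row A_i x_i + C_i y ≈ b_i can be reduced independently by a QR factorization, leaving a small stacked problem for the shared coupling variables y. A serial sketch of that scheme (not the Cedar-specific implementation discussed in the paper):

```python
import numpy as np

def block_angular_lstsq(blocks, couplings, rhs):
    """Least squares for a block-angular system: block row i reads
    A_i x_i + C_i y ~ b_i, with y the shared coupling variables.
    Each block is reduced independently by QR (the naturally parallel step);
    a small stacked problem then determines y."""
    Qs, Rs, red_C, red_b = [], [], [], []
    for A, C, b in zip(blocks, couplings, rhs):
        n = A.shape[1]
        Q, R = np.linalg.qr(A, mode='complete')
        Qs.append(Q)
        Rs.append(R[:n])
        red_C.append((Q.T @ C)[n:])   # rows orthogonal to range(A_i)
        red_b.append((Q.T @ b)[n:])
    # Small coupled problem for the shared unknowns y.
    y, *_ = np.linalg.lstsq(np.vstack(red_C), np.concatenate(red_b), rcond=None)
    # Backsubstitution phase: each x_i recovered independently (also parallel).
    xs = []
    for A, C, b, Q, R in zip(blocks, couplings, rhs, Qs, Rs):
        n = A.shape[1]
        xs.append(np.linalg.solve(R, (Q.T @ (b - C @ y))[:n]))
    return xs, y
```

For full-column-rank blocks this reproduces the solution of the assembled least squares problem while never forming the full observation matrix.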
Processing and problems in manufacturing a Ti-modified Nb/sub 3/Sn MJR billet. Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, W.K.; Smathers, D.; Geno, J.D.
1985-06-18
This report is submitted to complete Task II of University of California Order Number 4321405. Task I had Teledyne Wah Chang Albany (TWCA) assemble and process by the Modified Jelly Roll (MJR) method a Ti-modified Nb/sub 3/Sn superconductor billet. This billet was identified as M103 by TWCA. The billet matrix is nominally composed of copper 13.5 wt % tin bronze sheet and niobium 1.2 wt % titanium expanded metal with a volume ratio of three parts bronze to one part niobium alloy. All processing steps and problems encountered in manufacturing billet M103 are described in this report.
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
Maximum scatter difference (MSD) discriminant criterion was a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart--multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
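The scatter-difference idea is easy to state in code: instead of the Rayleigh quotient of Fisher LDA (which needs S_w to be invertible), take the top eigenvectors of the symmetric difference S_b - c·S_w, which requires no inversion and therefore no special handling when S_w is singular. A hedged sketch of the criterion (parameter names are illustrative, not the paper's notation):

```python
import numpy as np

def mmsd_features(X, y, c=1.0, dim=2):
    """Project onto the top eigenvectors of the scatter difference S_b - c*S_w,
    which stays well-defined even when S_w is singular (small-sample-size case).
    A sketch of the criterion, not the paper's full algorithm."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))   # between-class scatter
    Sw = np.zeros((d, d))   # within-class scatter
    for cls in np.unique(y):
        Xc = X[y == cls]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Symmetric eigenproblem: no inversion of S_w, unlike Fisher LDA.
    vals, vecs = np.linalg.eigh(Sb - c * Sw)
    return vecs[:, np.argsort(vals)[::-1][:dim]]   # top-`dim` discriminant vectors
```

Features are then obtained as `X @ mmsd_features(X, y)`; the constant c trades off between-class spread against within-class compactness.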
Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.
Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben
2017-08-02
It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by the number of nonzero entries ($l_0$ norm) and the number of nonzero singular values (rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high order tensor is expected to provide more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high order sparsity measure for tensors is a relatively harder task. To this end, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods beyond the state of the art.
Fast matrix multiplication and its algebraic neighbourhood
NASA Astrophysics Data System (ADS)
Pan, V. Ya.
2017-11-01
Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at a stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion — even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
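Strassen's 1969 scheme replaces the 8 block multiplications of the classical 2 x 2 partition with 7, giving the exponent log₂7 ≈ 2.807; both the recursion and the crossover size below which the classical product wins are visible in a direct implementation:

```python
import numpy as np

def strassen(A, B, cutoff=32):
    """Strassen's 7-multiplication scheme for n x n matrices, n a power of two.
    Below `cutoff` it falls back to the classical product, mirroring the
    crossover point that matters in practice."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The extra additions are why the crossover with the classical algorithm sits well above n = 2, and each recursive level multiplies the count of subproblems by 7 rather than 8.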
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009 ) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
Matrix with Prescribed Eigenvectors
ERIC Educational Resources Information Center
Ahmad, Faiz
2011-01-01
It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
M-matrices with prescribed elementary divisors
NASA Astrophysics Data System (ADS)
Soto, Ricardo L.; Díaz, Roberto C.; Salas, Mario; Rojo, Oscar
2017-09-01
A real matrix A is said to be an M-matrix if it is of the form A = αI - B, where B is a nonnegative matrix with Perron eigenvalue ρ(B), and α ≥ ρ(B). This paper provides sufficient conditions for the existence and construction of an M-matrix A with prescribed elementary divisors, which are the characteristic polynomials of the Jordan blocks of the Jordan canonical form of A. This inverse problem on M-matrices has not been treated until now. We solve the inverse elementary divisors problem for diagonalizable M-matrices and the symmetric generalized doubly stochastic inverse M-matrix problem for lists of real numbers and for lists of complex numbers of the form Λ = {λ_1, a ± bi, …, a ± bi}. The constructive nature of our results allows for the computation of a solution matrix. The paper also discusses an application of M-matrices to a capacity problem in wireless communications.
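The defining condition can be checked numerically; a small sketch (my own illustration) verifying that A = αI - B with α ≥ ρ(B) has its spectrum in the closed right half-plane:

```python
import numpy as np

# M-matrix definition A = alpha*I - B with B >= 0 entrywise and
# alpha >= rho(B), the Perron eigenvalue (spectral radius) of B.
# Every eigenvalue of A then has nonnegative real part.
rng = np.random.default_rng(0)
B = rng.uniform(size=(4, 4))                 # nonnegative matrix
rho = np.abs(np.linalg.eigvals(B)).max()     # Perron eigenvalue of B
A = 1.1 * rho * np.eye(4) - B                # alpha = 1.1 * rho >= rho
assert np.linalg.eigvals(A).real.min() > 0
```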
Attention Problems and Stability of WISC-IV Scores Among Clinically Referred Children.
Green Bartoi, Marla; Issner, Jaclyn Beth; Hetterscheidt, Lesley; January, Alicia M; Kuentzel, Jeffrey Garth; Barnett, Douglas
2015-01-01
We examined the stability of Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) scores among 51 diverse, clinically referred 8- to 16-year-olds (M(age) = 11.24 years, SD = 2.36). Children were referred to and tested at an urban, university-based training clinic; 70% of eligible children completed follow-up testing 12 months to 40 months later (M = 22.05, SD = 5.94). Stability for index scores ranged from .58 (Processing Speed) to .81 (Verbal Comprehension), with a stability of .86 for Full-Scale IQ. Subtest score stability ranged from .35 (Letter-Number Sequencing) to .81 (Vocabulary). Indexes believed to be more susceptible to concentration (Processing Speed and Working Memory) had lower stability. We also examined attention problems as a potential moderating factor of WISC-IV index and subtest score stability. Children with attention problems had significantly lower stability for Digit Span and Matrix Reasoning subtests compared with children without attention problems. These results provide support for the temporal stability of the WISC-IV and also provide some support for the idea that attention problems contribute to children producing less stable IQ estimates when completing the WISC-IV. We hope our report encourages further examination of this hypothesis and its implications.
Structured Matrix Completion with Applications to Genomic Data Integration.
Cai, Tianxi; Cai, T Tony; Zhang, Anru
2016-01-01
Matrix completion has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
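The setting where whole rows and columns are observed admits a simple algebraic identity: if the matrix is exactly low rank and the observed upper-left block carries the full rank, the missing block is determined by the other three. A toy sketch of this identity (my simplification of the SMC setting, not the paper's estimator):

```python
import numpy as np

def recover_block(X11, X12, X21):
    """If [[X11, X12], [X21, X22]] has rank r and rank(X11) = r,
    then X22 = X21 @ pinv(X11) @ X12 exactly."""
    return X21 @ np.linalg.pinv(X11) @ X12

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 8))   # exactly rank 3
X11, X12 = X[:5, :5], X[:5, 5:]      # observed rows and columns
X21, X22 = X[5:, :5], X[5:, 5:]      # X22 plays the missing block
assert np.allclose(recover_block(X11, X12, X21), X22, atol=1e-6)
```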
Nonconvex Model of Material Growth: Mathematical Theory
NASA Astrophysics Data System (ADS)
Ganghoffer, J. F.; Plotnikov, P. I.; Sokolowski, J.
2018-06-01
The model of volumetric material growth is introduced in the framework of finite elasticity. The new results obtained for the model are presented with complete proofs. The state variables include the deformations, temperature and the growth factor matrix function. The existence of global-in-time solutions for the quasistatic deformation boundary value problem, coupled with the energy balance and the evolution of the growth factor, is shown. The mathematical results can be applied to a wide class of growth models in mechanics and biology.
External Standards or Standard Addition? Selecting and Validating a Method of Standardization
NASA Astrophysics Data System (ADS)
Harvey, David T.
2002-05-01
A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience with the difference between external standardization and standard addition.
NASA Astrophysics Data System (ADS)
Hamed, Haikel Ben; Bennacer, Rachid
2008-08-01
This work evaluates, algebraically and numerically, the influence of a disturbance on the spectral values of a diagonalizable matrix. Two approaches are possible. The first uses the perturbation theorem for a matrix depending on a parameter, due to Lidskii, which is primarily based on the Jordan structure of the undisturbed matrix. The second factorizes the matrix system and then numerically computes the roots of the characteristic polynomial of the disturbed matrix. This problem can serve as a standard model in the equations of continuum mechanics. In this work we chose the second approach and, to illustrate the application, the Rayleigh-Bénard problem in a Darcy medium, disturbed by a filtering through-flow. The matrix form of the problem is obtained from a linear stability analysis by a finite element method. We show that the general phenomenon can be decomposed into elementary ones, described respectively by a disturbed matrix and a disturbance. Good agreement between the two methods was observed. To cite this article: H.B. Hamed, R. Bennacer, C. R. Mecanique 336 (2008).
An analysis of spectral envelope-reduction via quadratic assignment problems
NASA Technical Reports Server (NTRS)
George, Alan; Pothen, Alex
1994-01-01
A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate the latter two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds for the envelope size for certain classes of meshes.
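The core spectral step (associate a Laplacian with the matrix and sort the components of its second eigenvector, the Fiedler vector) can be sketched as follows; this is a dense toy illustration, whereas the paper's context is large sparse matrices:

```python
import numpy as np

def spectral_ordering(A):
    """Reorder a symmetric matrix by sorting the Fiedler vector
    (eigenvector of the second-smallest Laplacian eigenvalue) of the
    graph defined by A's nonzero off-diagonal pattern."""
    adj = (A != 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    L = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian
    w, V = np.linalg.eigh(L)             # ascending eigenvalues
    fiedler = V[:, 1]                    # second-smallest eigenpair
    return np.argsort(fiedler)

# A path graph labeled out of order; the spectral ordering recovers
# the consecutive path layout (or its reverse), shrinking the envelope.
A = np.eye(5)
for i, j in [(0, 2), (2, 4), (4, 1), (1, 3)]:
    A[i, j] = A[j, i] = 1.0
perm = spectral_ordering(A)
assert list(perm) in ([0, 2, 4, 1, 3], [3, 1, 4, 2, 0])
```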
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying it; however, computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example, the Gaussian random matrix and the 0-1 random matrix, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: direct search and dual-tree search. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree search method is more efficient than direct search, especially for matrices with comparable numbers of rows and columns.
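The direct-search baseline enumerates column subsets of increasing size until a linearly dependent one is found. A brute-force sketch (feasible only for tiny matrices, consistent with spark computation being NP-hard):

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A, found by
    direct search over subsets of increasing size."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # A set of k columns is dependent iff its rank is below k.
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1  # all columns independent (only possible when n <= m)

# A 3x4 matrix whose last column duplicates the first:
# the smallest dependent set has two columns, so spark = 2.
A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
assert spark(A) == 2
```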
Table-sized matrix model in fractional learning
NASA Astrophysics Data System (ADS)
Soebagyo, J.; Wahyudin; Mulyaning, E. C.
2018-05-01
This article explains a fractional learning model, the Table-Sized Matrix model, in which fractional representation and its operations are symbolized by a matrix. The Table-Sized Matrix, like the area model, is employed to develop problem-solving capabilities. The Table-Sized Matrix model referred to in this article is used to develop an understanding of the fraction concept in elementary school students, which can then be generalized into procedural fluency (algorithms) in solving fractional problems and their operations.
Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns
NASA Technical Reports Server (NTRS)
Shaeffer, John
2008-01-01
Matrix methods for solving integral equations via direct solve LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right hand sides that run in mere days on PC level hardware. This EM solution is accomplished by utilizing the numerical low rank nature of spatially blocked unknowns using the Adaptive Cross Approximation for compressing the rank deficient blocks of the system Z matrix, the L and U factors, the right hand side forcing function and the final current solution. This compressed matrix solution is applied to a frequency domain EM solution of Maxwell's equations using the standard Method of Moments approach. Compressed matrix storage and operations count lead to orders of magnitude reduction in memory and run time.
Curvature controlled wetting in two dimensions
NASA Astrophysics Data System (ADS)
Gil, Tamir; Mikheev, Lev V.
1995-07-01
A complete wetting transition at vanishing curvature of the substrate in two-dimensional circular geometry is studied by the transfer matrix method. We find an exact formal mapping of the partition function of the problem onto that of a (1+1)-dimensional wetting problem in planar geometry. As the radius of the substrate r_0 → ∞, the leading effect of the curvature is adding the Laplace pressure Π_L ~ r_0^(-1) to the pressure balance in the film. At temperatures and pressures under which the wetting is complete in planar geometry, the Laplace pressure suppresses divergence of the mean thickness of the wetting layer l_W, leading to a power law l_W ~ r_0^(1/3). At a critical wetting transition of a planar substrate, curvature adds a relevant field; the corresponding multiscaling forms are readily available. The method allows for the systematic evaluation of corrections to the leading behavior; the next-to-leading term reduces the thickness by an amount proportional to r_0^(-1/3).
A penny shaped crack in a filament-reinforced matrix. 2: The crack problem
NASA Technical Reports Server (NTRS)
Pacella, A. H.; Erdogan, F.
1973-01-01
The elastostatic interaction problem between a penny-shaped crack and a slender inclusion or filament in an elastic matrix was formulated. For a single filament as well as multiple identical filaments located symmetrically around the crack the problem is shown to reduce to a singular integral equation. The solution of the problem is obtained for various geometries and filament-to-matrix stiffness ratios, and the results relating to the angular variation of the stress intensity factor and the maximum filament stress are presented.
The covariance matrix for the solution vector of an equality-constrained least-squares problem
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1976-01-01
Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
NASA Astrophysics Data System (ADS)
Lavery, N.; Taylor, C.
1999-07-01
Multigrid and iterative methods are used to reduce the solution time of the matrix equations which arise from the finite element (FE) discretisation of the time-independent equations of motion of an incompressible fluid in turbulent motion. Incompressible flow is solved by using the method of reduced interpolation for the pressure to satisfy the Brezzi-Babuska condition. The k-l model is used to complete the turbulence closure problem. The non-symmetric iterative matrix methods examined are the methods of least squares conjugate gradient (LSCG), biconjugate gradient (BCG), conjugate gradient squared (CGS), and biconjugate gradient stabilised (BiCGSTAB). The multigrid algorithm applied is based on the FAS algorithm of Brandt, and uses two and three levels of grids with a V-cycling schedule. These methods are all compared to the non-symmetric frontal solver.
Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He
2013-09-01
In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when the terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to explore local optimal solutions. Simulation results show that the robust transceiver design effectively overcomes the loss of bit-error-rate (BER) due to the inclusion of correlated channel uncertainties and residual self-interference.
Time-reversal symmetric resolution of unity without background integrals in open quantum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatano, Naomichi, E-mail: hatano@iis.u-tokyo.ac.jp; Ordonez, Gonzalo, E-mail: gordonez@butler.edu
2014-12-15
We present a new complete set of states for a class of open quantum systems, to be used in expansion of the Green's function and the time-evolution operator. A remarkable feature of the complete set is that it observes time-reversal symmetry in the sense that it contains decaying states (resonant states) and growing states (anti-resonant states) in parallel. We can thereby pinpoint the occurrence of the breaking of time-reversal symmetry at the choice of whether we solve the Schrödinger equation as an initial-condition problem or a terminal-condition problem. Another feature of the complete set is that in the subspace of the central scattering area of the system, it consists of contributions of all states with point spectra but does not contain any background integrals. In computing the time evolution, we can clearly see which point spectrum produces which time dependence. In the whole infinite state space, the complete set does contain an integral, but it is over unperturbed eigenstates of the environmental area of the system and hence can be calculated analytically. We demonstrate the usefulness of the complete set by computing explicitly the survival probability and the escaping probability as well as the dynamics of wave packets. The origin of each term of the matrix elements is clear in our formulation; in particular, the exponential decays are due to the resonance poles.
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
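Chahine's relaxation multiplies each unknown by the ratio of measured to computed signal; with a triangular kernel, as in the limb-viewing geometry, the diagonal elements being nonzero guarantees convergence. A small synthetic sketch of the iteration (my own example, not the paper's atmospheric data):

```python
import numpy as np

# Chahine's multiplicative relaxation for y = K x with a lower-triangular
# kernel K (the limb-viewing case): each unknown is rescaled by the ratio
# of the measured to the currently computed signal.
K = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.5, 1.0, 4.0]])
x_true = np.array([1.0, 2.0, 0.5])
y = K @ x_true                      # synthetic "measurements"

x = np.ones(3)                      # positive first guess
for _ in range(200):
    x *= y / (K @ x)                # relaxation update
assert np.allclose(x, x_true, atol=1e-6)
```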
The Rigid Orthogonal Procrustes Rotation Problem
ERIC Educational Resources Information Center
ten Berge, Jos M. F.
2006-01-01
The problem of rotating a matrix orthogonally to a best least squares fit with another matrix of the same order has a closed-form solution based on a singular value decomposition. The optimal rotation matrix is not necessarily rigid, but may also involve a reflection. In some applications, only rigid rotations are permitted. Gower (1976) has…
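The closed-form solution takes Q = UV^T from the SVD of A^T B; forcing det Q = +1 excludes reflections and yields the rigid rotation. A sketch of this standard construction (function name and example are mine):

```python
import numpy as np

def rigid_procrustes(A, B):
    """Rigid rotation Q (det Q = +1) minimizing ||A Q - B||_F, via the
    SVD of A^T B, with a sign flip to exclude reflections."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))           # +1 rotation, -1 reflection
    D = np.diag([1.0] * (U.shape[1] - 1) + [d])  # flip last direction if needed
    return U @ D @ Vt

# Rotate random points by a known 2-D rotation and recover it.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.random.default_rng(0).normal(size=(10, 2))
B = A @ R
Q = rigid_procrustes(A, B)
assert np.allclose(Q, R)
assert np.isclose(np.linalg.det(Q), 1.0)
```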
Distributed formation control of nonholonomic autonomous vehicle via RBF neural network
NASA Astrophysics Data System (ADS)
Yang, Shichun; Cao, Yaoguang; Peng, Zhaoxia; Wen, Guoguang; Guo, Konghui
2017-03-01
In this paper, an RBF neural network consensus-based distributed control scheme is proposed for nonholonomic autonomous vehicles in a pre-defined formation along a specified reference trajectory. A variable transformation is first designed to convert the formation control problem into a state consensus problem. Then, the complete dynamics of the vehicles, including inertia, Coriolis, friction and unmodeled bounded disturbances, are considered; these render the formation unstable when the distributed kinematic controllers are designed from the kinematics alone, so RBF neural network torque controllers are derived to compensate for them. Sufficient conditions are derived to establish the asymptotic stability of the systems based on algebraic graph theory, matrix theory, and Lyapunov theory. Finally, simulation examples illustrate the effectiveness of the proposed controllers.
Observability during planetary approach navigation
NASA Technical Reports Server (NTRS)
Bishop, Robert H.; Burkhart, P. Daniel; Thurman, Sam W.
1993-01-01
The objective of the research is to develop an analytic technique to predict the relative navigation capability of different Earth-based radio navigation measurements. In particular, the problem is to determine the relative ability of geocentric range and Doppler measurements to detect the effects of the target planet gravitational attraction on the spacecraft during the planetary approach and near-encounter mission phases. A complete solution to the two-dimensional problem has been developed. Relatively simple analytic formulas are obtained for range and Doppler measurements which describe the observability content of the measurement data along the approach trajectories. An observability measure is defined which is based on the observability matrix for nonlinear systems. The results show good agreement between the analytic observability analysis and the computational batch processing method.
Teasdale, Luisa C; Köhler, Frank; Murray, Kevin D; O'Hara, Tim; Moussalli, Adnan
2016-09-01
The qualification of orthology is a significant challenge when developing large, multiloci phylogenetic data sets from assembled transcripts. Transcriptome assemblies have various attributes, such as fragmentation, frameshifts and mis-indexing, which pose problems to automated methods of orthology assessment. Here, we identify a set of orthologous single-copy genes from transcriptome assemblies for the land snails and slugs (Eupulmonata) using a thorough approach to orthology determination involving manual alignment curation, gene tree assessment and sequencing from genomic DNA. We qualified the orthology of 500 nuclear, protein-coding genes from the transcriptome assemblies of 21 eupulmonate species to produce the most complete phylogenetic data matrix for a major molluscan lineage to date, both in terms of taxon and character completeness. Exon capture targeting 490 of the 500 genes (those with at least one exon >120 bp) from 22 species of Australian Camaenidae successfully captured sequences of 2825 exons (representing all targeted genes), with only a 3.7% reduction in the data matrix due to the presence of putative paralogs or pseudogenes. The automated pipeline Agalma retrieved the majority of the manually qualified 500 single-copy gene set and identified a further 375 putative single-copy genes, although it failed to account for fragmented transcripts resulting in lower data matrix completeness when considering the original 500 genes. This could potentially explain the minor inconsistencies we observed in the supported topologies for the 21 eupulmonate species between the manually curated and 'Agalma-equivalent' data set (sharing 458 genes). Overall, our study confirms the utility of the 500 gene set to resolve phylogenetic relationships at a range of evolutionary depths and highlights the importance of addressing fragmentation at the homolog alignment stage for probe design. © 2016 John Wiley & Sons Ltd.
Online Graph Completion: Multivariate Signal Recovery in Computer Vision.
Kim, Won Hwa; Jalal, Mona; Hwang, Seongjae; Johnson, Sterling C; Singh, Vikas
2017-07-01
The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or alternatively, complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph and describe how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.
Fushiki, Tadayoshi
2009-07-01
The correlation matrix is a fundamental statistic that is used in many fields. For example, GroupLens, a collaborative filtering system, uses the correlation between users for predictive purposes. Since the correlation is a natural similarity measure between users, the correlation matrix may be used in the Gram matrix in kernel methods. However, the estimated correlation matrix sometimes has a serious defect: although the correlation matrix is originally positive semidefinite, the estimated one may not be positive semidefinite when not all ratings are observed. To obtain a positive semidefinite correlation matrix, the nearest correlation matrix problem has recently been studied in the fields of numerical analysis and optimization. However, statistical properties are not explicitly used in such studies. To obtain a positive semidefinite correlation matrix, we assume the approximate model. By using the model, an estimate is obtained as the optimal point of an optimization problem formulated with information on the variances of the estimated correlation coefficients. The problem is solved by a convex quadratic semidefinite program. A penalized likelihood approach is also examined. The MovieLens data set is used to test our approach.
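The paper solves a convex quadratic semidefinite program; a simpler way to see the nearest-correlation-matrix problem is Higham-style alternating projections between the positive semidefinite cone and the set of unit-diagonal symmetric matrices (shown here for illustration, not the paper's method):

```python
import numpy as np

def nearest_correlation(A, n_iter=200):
    """Dykstra-corrected alternating projections onto (1) the PSD cone
    and (2) symmetric matrices with unit diagonal (Higham's scheme)."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(n_iter):
        R = Y - dS                              # apply Dykstra correction
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = (V * np.clip(w, 0, None)) @ V.T     # project onto PSD cone
        dS = X - R                              # store new correction
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)                # restore unit diagonal
    return Y

# Higham's classic example: unit diagonal, but indefinite (det < 0).
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
X = nearest_correlation(A)
assert np.allclose(np.diag(X), 1.0)
assert np.linalg.eigvalsh(X).min() > -1e-6
```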
ERIC Educational Resources Information Center
Parmar, Rene S.; Cawley, John F.
1994-01-01
Matrix organization can be used to construct math word problems for children with mild disabilities. Matrix organization specifies the characteristics of problems, such as problem theme or setting, operations, level of computation complexity, reading vocabulary level, and need for classification. A sample scope and sequence and 16 sample word…
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
NASA Astrophysics Data System (ADS)
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
Users manual for the Variable dimension Automatic Synthesis Program (VASP)
NASA Technical Reports Server (NTRS)
White, J. S.; Lee, H. Q.
1971-01-01
A dictionary and some example problems for the Variable dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and pseudo inverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in the conversational mode on the Ames 360/67 computer.
Generalized Choi states and 2-distillability of quantum states
NASA Astrophysics Data System (ADS)
Chen, Lin; Tang, Wai-Shing; Yang, Yu
2018-05-01
We investigate the distillability of bipartite quantum states in terms of positive and completely positive maps. We construct the so-called generalized Choi states and show that they are distillable when they have a negative partial transpose. We convert the distillability problem of 2-copy n×n Werner states into the determination of the positivity of a Hermitian matrix. We obtain several sufficient conditions under which the positivity holds. Further, we investigate the case n = 3 by the classification of 2×3×3 pure states.
New Results in N = 2 Theories from Non-perturbative String
NASA Astrophysics Data System (ADS)
Bonelli, Giulio; Grassi, Alba; Tanzini, Alessandro
2018-03-01
We describe the magnetic phase of SU(N) $\\mathcal{N}=2$ Super Yang-Mills theories in the self-dual Omega background in terms of a new class of multi-cut matrix models. These arise from a non-perturbative completion of topological strings in the dual four dimensional limit which engineers the gauge theory in the strongly coupled magnetic frame. The corresponding spectral determinants provide natural candidates for the tau functions of isomonodromy problems for flat spectral connections associated to the Seiberg-Witten geometry.
A monolithic Lagrangian approach for fluid-structure interaction problems
NASA Astrophysics Data System (ADS)
Ryzhakov, P. B.; Rossi, R.; Idelsohn, S. R.; Oñate, E.
2010-11-01
The current work presents a monolithic method for the solution of fluid-structure interaction problems involving flexible structures and free-surface flows. The technique presented is based upon the utilization of a Lagrangian description for both the fluid and the structure. A linear displacement-pressure interpolation pair is used for the fluid whereas the structure utilizes a standard displacement-based formulation. A slight fluid compressibility is assumed, which allows relating the mechanical pressure to the local volume variation. The method described features a global pressure condensation which in turn enables the definition of a purely displacement-based linear system of equations. A matrix-free technique is used for the solution of such a linear system, leading to an efficient implementation. The result is a robust method which allows dealing with FSI problems involving arbitrary variations in the shape of the fluid domain. The method is completely free of spurious added-mass effects.
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
To date, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
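The column-wise construction can be sketched directly: each column of the approximate inverse solves a small, independent least-squares problem over a chosen sparsity pattern. Here the pattern of A itself is used, an assumption for illustration; the sketch is dense for simplicity:

```python
import numpy as np

def approx_inverse(A):
    """Right approximate inverse M of A: column j of M minimizes
    ||A m_j - e_j||_2 over the sparsity pattern of A's column j.
    Each column is an independent small least-squares problem."""
    n = A.shape[0]
    M = np.zeros_like(A)
    for j in range(n):
        support = np.nonzero(A[:, j])[0]        # allowed nonzeros of m_j
        e = np.zeros(n)
        e[j] = 1.0
        coef, *_ = np.linalg.lstsq(A[:, support], e, rcond=None)
        M[support, j] = coef
    return M

# Diagonally dominant tridiagonal test matrix: A @ M is close to I,
# so M is an effective preconditioner applied by matrix-vector product.
A = 4.0 * np.eye(6) + np.eye(6, k=1) + np.eye(6, k=-1)
M = approx_inverse(A)
assert np.linalg.norm(np.eye(6) - A @ M) < 0.5
```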
Donoho, David L; Gavish, Matan; Montanari, Andrea
2013-05-21
Let X_0 be an unknown M-by-N matrix. In matrix recovery, one takes n < MN linear measurements y_1, ..., y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M-by-N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solve the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 <= i <= n, where ||.||_* denotes the nuclear norm, namely the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction delta(n,M,N) = n/(MN), rank fraction rho = rank(X_0)/min{M,N}, and aspect ratio beta = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve delta*(rho) = delta*(rho; beta) exists such that, if delta > delta*(rho), NNM typically succeeds for large M, N, whereas if delta < delta*(rho), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M-by-N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y - X||_F^2/2 + lambda ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(rho; beta) = lim_{M,N -> infinity} inf_lambda sup_{rank(X) <= rho.M} MSE(X, X_lambda), where M/N -> beta. We report extensive experiments showing that the phase transition delta*(rho) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(rho) = M(rho; beta) in the second problem, matrix denoising in Gaussian noise: delta*(rho) = M(rho), for any rank fraction 0 < rho < 1 (at each common aspect ratio beta). Our experiments considered matrices belonging to two constraint classes: real M-by-N matrices of various ranks and aspect ratios, and real symmetric positive-semidefinite N-by-N matrices of various ranks.
Fast Algorithms for Structured Least Squares and Total Least Squares Problems
Kalsi, Anoop; O’Leary, Dianne P.
2006-01-01
We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z_1 and Z_2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z_1 is close to unitary and Z_2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices. PMID:27274922
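As a small illustration of exploiting this kind of structure, SciPy's `solve_toeplitz` (a Levinson-recursion solver, not the generator-based Cholesky factorization developed in the paper) solves a square Toeplitz system in O(n^2) time instead of the O(n^3) of a generic dense solve:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# A Toeplitz matrix is defined by its first column c and first row r.
c = np.array([4.0, 1.0, 0.5, 0.25])   # first column
r = np.array([4.0, 0.8, 0.3, 0.1])    # first row
b = np.ones(4)

x_fast = solve_toeplitz((c, r), b)            # O(n^2) structured solve
x_ref = np.linalg.solve(toeplitz(c, r), b)    # generic O(n^3) dense solve
```

Both routines return the same solution; only the cost model differs, which is the point of structured factorizations like those in the paper.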
ERIC Educational Resources Information Center
Digital Equipment Corp., Maynard, MA.
The curriculum materials and computer programs in this booklet introduce the idea of a matrix. They go on to discuss matrix operations of addition, subtraction, multiplication by a scalar, and matrix multiplication. The last section covers several contemporary applications of matrix multiplication, including problems of communication…
Finite-time mixed outer synchronization of complex networks with coupling time-varying delay.
He, Ping; Ma, Shu-Hua; Fan, Tao
2012-12-01
This article is concerned with the problem of finite-time mixed outer synchronization (FMOS) of complex networks with coupling time-varying delay. FMOS is a recently developed generalized synchronization concept in which different state variables of the corresponding nodes can evolve into finite-time complete synchronization, finite-time anti-synchronization, and even finite-time amplitude death simultaneously for an appropriate choice of the controller gain matrix. Some novel stability criteria for the synchronization between drive and response complex networks with coupling time-varying delay are derived using Lyapunov stability theory and linear matrix inequalities, and a simple linear state feedback synchronization controller is designed as a result. Numerical simulations for two coupled networks of modified Chua's circuits are then provided to demonstrate the effectiveness and feasibility of the proposed control and synchronization schemes, and the results are compared with previous schemes for accuracy.
Strongly contracted canonical transformation theory
NASA Astrophysics Data System (ADS)
Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic
2010-01-01
Canonical transformation (CT) theory describes dynamic correlation in multireference systems with large active spaces. Here we discuss CT theory's intruder state problem and why our previous approach of overlap matrix truncation becomes infeasible for sufficiently large active spaces. We propose the use of strongly and weakly contracted excitation operators as alternatives for dealing with intruder states in CT theory. The performance of these operators is evaluated for the H2O, N2, and NiO molecules, with comparisons made to complete active space second order perturbation theory and Davidson-corrected multireference configuration interaction theory. Finally, using a combination of strongly contracted CT theory and orbital-optimized density matrix renormalization group theory, we evaluate the singlet-triplet gap of free base porphin using an active space containing all 24 out-of-plane 2p orbitals. Modeling dynamic correlation with an active space of this size is currently only possible using CT theory.
Determination of germanium by AAS in chloride-containing matrices.
Anwari, M A; Abbasi, H U; Volkan, M; Ataman, O Y
1996-06-01
Interference effects of NaCl on the ET-AAS determination of Ge have been studied. The use of several matrix modifiers to alleviate this problem, such as Ni and Zn perchlorates and nitrates, nitric acid, and ammonium nitrate, is reported. The stabilizing effect of Zn and Ni perchlorates allows the use of high pretreatment temperatures: NaCl is thermally volatilized from the atomizer by employing pretreatment temperatures higher than 1500 degrees C, resulting in improved sensitivity. Germanium levels in zinc plant slag samples have been determined and compared to those obtained for the same samples spiked with NaCl, with platform and wall atomization, using nickel perchlorate as a matrix modifier. The results were compared with those from a hydride generation system equipped with a liquid nitrogen trap. The recoveries for germanium are almost complete, amounting to 99% for the original slag samples and 80% for the spiked samples containing 15% (w/w) NaCl.
A Chebyshev matrix method for spatial modes of the Orr-Sommerfeld equation
NASA Technical Reports Server (NTRS)
Danabasoglu, G.; Biringen, S.
1989-01-01
The Chebyshev matrix collocation method is applied to obtain the spatial modes of the Orr-Sommerfeld equation for Poiseuille flow and the Blasius boundary layer. The problem is linearized by the companion matrix technique, with the semi-infinite domain handled by a mapping transformation. The method can be easily adapted to problems with different boundary conditions requiring different transformations.
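A Chebyshev collocation differentiation matrix of the kind underlying such solvers can be built in a few lines. This is the standard textbook construction on [-1, 1]; the Orr-Sommerfeld operator assembly and domain mapping themselves are not reproduced here.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points x and differentiation matrix D on [-1, 1]
    (standard construction: off-diagonal (c_i/c_j)/(x_i - x_j), diagonal set
    so that each row of D annihilates the constant function)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

D, x = cheb(16)
# spectral accuracy: differentiating exp(x) reproduces exp(x) almost exactly
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
```

Higher derivatives (the fourth derivative appears in the Orr-Sommerfeld equation) are obtained from powers of D, subject to boundary-condition modifications.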
Estimating the Inertia Matrix of a Spacecraft
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Keim, Jason; Shields, Joel
2007-01-01
A paper presents a method of utilizing flight data aboard a spacecraft that includes reaction wheels for attitude control to estimate the inertia matrix of the spacecraft. The required data are digitized samples of (1) the spacecraft attitude in an inertial reference frame as measured, for example, by use of a star tracker and (2) speeds of rotation of the reaction wheels, the moments of inertia of which are deemed to be known. Starting from the classical equations for conservation of angular momentum of a rigid body, the inertia-matrix-estimation problem is formulated as a constrained least-squares minimization problem with explicit bounds on the inertia matrix incorporated as linear matrix inequalities. The explicit bounds reflect physical bounds on the inertia matrix and reduce the volume of data that must be processed to obtain a solution. The resulting minimization problem is a semidefinite optimization problem that can be solved efficiently, with guaranteed convergence to the global optimum, by use of readily available algorithms. In a test case involving a model attitude platform rotating on an air bearing, it is shown that, relative to a prior method, the present method produces better estimates from fewer data.
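The least-squares core of such a formulation can be sketched as follows: writing J w = Phi(w) theta for the six unique entries theta of the symmetric inertia matrix, conservation of total angular momentum between sample k and a reference sample gives equations linear in theta. The sketch below uses synthetic attitude/wheel data and plain unconstrained least squares, not the paper's semidefinite program with linear-matrix-inequality bounds.

```python
import numpy as np

def phi(w):
    """J @ w == phi(w) @ theta with theta = [Jxx, Jyy, Jzz, Jxy, Jxz, Jyz]."""
    x, y, z = w
    return np.array([[x, 0, 0, y, z, 0],
                     [0, y, 0, x, 0, z],
                     [0, 0, z, 0, x, y]])

def random_rotation(rng):
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.linalg.det(q))   # force det = +1 (odd dimension)

rng = np.random.default_rng(1)
J_true = np.array([[10.0, 1.0, 0.5],
                   [1.0, 8.0, 0.3],
                   [0.5, 0.3, 6.0]])       # inertia to be recovered
H = np.array([5.0, -2.0, 3.0])             # constant inertial angular momentum
R0, h0 = np.eye(3), np.zeros(3)            # reference attitude / wheel momentum
w0 = np.linalg.solve(J_true, R0.T @ H - h0)

rows, rhs = [], []
for _ in range(10):                        # simulated attitude + wheel samples
    R = random_rotation(rng)
    h = rng.standard_normal(3)             # known wheel angular momentum
    w = np.linalg.solve(J_true, R.T @ H - h)   # body rate consistent with H
    # conservation: R (J w + h) = R0 (J w0 + h0), linear in theta
    rows.append(R @ phi(w) - R0 @ phi(w0))
    rhs.append(R0 @ h0 - R @ h)

theta, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
J_est = np.array([[theta[0], theta[3], theta[4]],
                  [theta[3], theta[1], theta[5]],
                  [theta[4], theta[5], theta[2]]])
```

With noise-free synthetic data the system is exactly consistent, so least squares recovers the inertia matrix to machine precision; the paper's contribution is handling real, bounded, noisy data within a semidefinite program.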
Saliency Detection via Absorbing Markov Chain With Learnt Transition Probability.
Lihe Zhang; Jianwu Ai; Bowen Jiang; Huchuan Lu; Xiukui Li
2018-02-01
In this paper, we propose a bottom-up saliency model based on an absorbing Markov chain (AMC). First, a sparsely connected graph is constructed to capture the local context information of each node. All image boundary nodes are treated as the absorbing nodes and all other nodes as the transient nodes of the absorbing Markov chain. Then, the expected number of steps taken from each transient node before absorption can be used to represent the saliency value of this node. The absorbed time depends on the weights along the path and their spatial coordinates, which are completely encoded in the transition probability matrix. Considering the importance of this matrix, we adopt different hierarchies of deep features extracted from fully convolutional networks and learn a transition probability matrix, called the learnt transition probability matrix. Although this significantly improves performance, salient objects are still not uniformly highlighted. To solve this problem, an angular embedding technique is investigated to refine the saliency results. Based on pairwise local orderings, which are produced by the saliency maps of AMC and boundary maps, we rearrange the global orderings (saliency values) of all nodes. Extensive experiments demonstrate that the proposed algorithm outperforms the state-of-the-art methods on six publicly available benchmark data sets.
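The absorbed-time computation at the heart of AMC-based saliency reduces to the fundamental matrix of the chain: with Q the transient-to-transient block of the transition matrix, the expected steps before absorption are t = (I - Q)^{-1} 1. A toy sketch with hand-made transition probabilities (not probabilities learnt from deep features):

```python
import numpy as np

# Transition matrix over 5 nodes: nodes 0-2 transient, nodes 3-4 absorbing.
P = np.array([[0.1, 0.4, 0.2, 0.2, 0.1],
              [0.3, 0.1, 0.3, 0.1, 0.2],
              [0.2, 0.3, 0.1, 0.2, 0.2],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

Q = P[:3, :3]                        # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visit counts
t = N @ np.ones(3)                   # expected steps before absorption per node
```

In the saliency model, a node far (in the learnt transition metric) from the boundary absorbing nodes has a large absorbed time t and is therefore scored as salient.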
Optimal Tikhonov Regularization in Finite-Frequency Tomography
NASA Astrophysics Data System (ADS)
Fang, Y.; Yao, Z.; Zhou, Y.
2017-12-01
The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
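The SVD framework referred to above makes the Tikhonov (damped least-squares) solution and the model resolution matrix explicit: the singular values are filtered by factors s^2/(s^2 + lambda^2). A minimal sketch with a random synthetic sensitivity matrix (illustrative sizes and damping, not the tomographic setup of the study):

```python
import numpy as np

def tikhonov_svd(G, d, lam):
    """Damped least squares  min ||G m - d||^2 + lam^2 ||m||^2  via SVD.
    Filter factors s_i^2 / (s_i^2 + lam^2) damp poorly constrained modes."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    m = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ d))
    # model resolution matrix R = V diag(s^2/(s^2+lam^2)) V^T
    R = Vt.T @ np.diag(s**2 / (s**2 + lam**2)) @ Vt
    return m, R

rng = np.random.default_rng(0)
G = rng.standard_normal((30, 10))          # synthetic sensitivity matrix
m_true = rng.standard_normal(10)
d = G @ m_true + 0.01 * rng.standard_normal(30)
m, R = tikhonov_svd(G, d, lam=0.1)
```

The trace of R (effective number of resolved parameters) shrinks as lambda grows, which is the quantity traded off against data misfit in the regularization analysis.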
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick
2017-12-01
In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
Phase diagram of matrix compressed sensing
NASA Astrophysics Data System (ADS)
Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka
2016-12-01
In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.
Extension of modified power method to two-dimensional problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan 44919; Lee, Hyunsuk
2016-09-01
In this study, the generalized modified power method was extended to two-dimensional problems. A direct application of the method to two-dimensional problems was shown to be unstable when the number of requested eigenmodes is larger than a certain problem-dependent number. The root cause of this instability has been identified as the degeneracy of the transfer matrix. In order to resolve this instability, the number of sub-regions for the transfer matrix was increased to be larger than the number of requested eigenmodes, and a new transfer matrix was introduced accordingly which can be calculated by the least squares method. The stability of the new method has been successfully demonstrated with a neutron diffusion eigenvalue problem and the 2D C5G7 benchmark problem.
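For context, the ordinary power method extracts one dominant eigenmode at a time; obtaining several modes requires some deflation or, as in the paper, a transfer-matrix construction over sub-regions. The sketch below is plain power iteration with deflation for a symmetric operator, shown only to illustrate the multi-mode goal; it is not the modified power method of the paper.

```python
import numpy as np

def power_modes(A, k, iters=500, seed=0):
    """Extract the k dominant eigenpairs of a symmetric matrix by power
    iteration, deflating (projecting out) previously converged modes."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    vals, vecs = [], []
    for _ in range(k):
        x = rng.standard_normal(n)
        for _ in range(iters):
            for v in vecs:             # deflate converged modes
                x -= (v @ x) * v
            x = A @ x
            x /= np.linalg.norm(x)
        vals.append(x @ A @ x)         # Rayleigh quotient
        vecs.append(x)
    return np.array(vals), np.array(vecs)

A = np.diag([5.0, 3.0, 1.0])           # toy operator with known spectrum
vals, vecs = power_modes(A, 2)
```

Deflation relies on the orthogonality of eigenvectors of a symmetric operator; the paper's transfer-matrix scheme instead targets fission-source iteration, where such orthogonality is not directly available.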
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
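A common non-Bayesian baseline for the same overfitting problem is linear shrinkage of the sample covariance toward a scaled identity target. The sketch below uses an arbitrary shrinkage weight (not the paper's hierarchical model) to show the variance-reduction effect when variables outnumber samples:

```python
import numpy as np

def shrink_cov(X, alpha):
    """Shrinkage estimator: convex combination of the sample covariance S
    and a scaled identity target (trace(S)/p) * I.  A simple stand-in for
    more principled regularization such as hierarchical-Bayes estimation."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1 - alpha) * S + alpha * mu * np.eye(p)

rng = np.random.default_rng(0)
p, n = 50, 20                       # more variables than samples ("large p")
X = rng.standard_normal((n, p))     # true covariance is the identity
S = np.cov(X, rowvar=False)         # rank-deficient, high-variance estimate
Sh = shrink_cov(X, alpha=0.5)
```

With n < p the raw sample covariance is singular, while the shrunken estimate is full rank and closer to the truth in Frobenius norm.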
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models keep growing to represent real production systems accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem by using a probability transition matrix mechanism. To handle the multi-objective problem, we use Pareto optimality (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set has a higher probability of finding the optimal solution; moreover, the MPSO solution set is closer to the optimal solution.
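The makespan objective that such a PSO evaluates for each particle follows the standard completion-time recursion for a permutation flow shop: C[i][j] = max(C[i-1][j], C[i][j-1]) + processing time. A small sketch (single machines per stage rather than the paper's hybrid flow shop, and brute-force enumeration in place of PSO, with made-up processing times):

```python
import itertools
import numpy as np

def makespan(p, order):
    """Completion-time recursion for a permutation flow shop:
    C[i, j] = max(C[i-1, j], C[i, j-1]) + p[job, machine]."""
    m = p.shape[1]
    C = np.zeros((len(order), m))
    for i, job in enumerate(order):
        for j in range(m):
            prev = max(C[i - 1, j] if i else 0.0, C[i, j - 1] if j else 0.0)
            C[i, j] = prev + p[job, j]
    return C[-1, -1]

p = np.array([[3.0, 2.0, 4.0],    # processing times: 4 jobs x 3 machines
              [1.0, 4.0, 3.0],
              [2.0, 2.0, 2.0],
              [4.0, 3.0, 1.0]])

# brute force over the 4! job orders; a metaheuristic replaces this search
best = min(itertools.permutations(range(4)), key=lambda o: makespan(p, o))
```

A discrete PSO explores this same permutation space, using a probability transition matrix to turn continuous particle updates into job-order moves.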
Petranovich, Christine L; Walz, Nicolay Chertkoff; Staat, Mary Allen; Chiu, Chung-Yiu Peter; Wade, Shari L
2015-01-01
The aim of this study was to investigate the association of neurocognitive functioning with internalizing and externalizing problems and school and social competence in children adopted internationally. Participants included girls between the ages of 6-12 years who were internationally adopted from China (n = 32) or Eastern Europe (n = 25) and a control group of never-adopted girls (n = 25). Children completed the Vocabulary and Matrix Reasoning subtests from the Wechsler Abbreviated Scale of Intelligence and the Score! and Sky Search subtests from the Test of Everyday Attention for Children. Parents completed the Child Behavior Checklist and the Home and Community Social Behavior Scales. Compared to the controls, the Eastern European group evidenced significantly more problems with externalizing behaviors and school and social competence and poorer performance on measures of verbal intelligence, perceptual reasoning, and auditory attention. More internalizing problems were reported in the Chinese group compared to the controls. Using generalized linear regression, interaction terms were examined to determine whether the associations of neurocognitive functioning with behavior varied across groups. Eastern European group status was associated with more externalizing problems and poorer school and social competence, irrespective of neurocognitive test performance. In the Chinese group, poorer auditory attention was associated with more problems with social competence. Neurocognitive functioning may be related to behavior in children adopted internationally. Knowledge about neurocognitive functioning may further our understanding of the impact of early institutionalization on post-adoption behavior.
Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.
Lu, Gui-Fu; Zheng, Wenming
2013-10-01
Dimensionality reduction has become an important data preprocessing step in many applications. Linear discriminant analysis (LDA) is one of the most well-known dimensionality reduction methods. However, the classical LDA cannot be used directly in the small sample size (SSS) problem, where the within-class scatter matrix is singular. In the past, many generalized LDA methods have been proposed to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance. The existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA. Our proposed implementation, which is the most efficient to date, is equivalent to the existing implementations of CLDA in theory. Since CLDA is an extension of NLDA, our implementation of CLDA also provides a fast implementation of NLDA. Experiments on some real-world data sets demonstrate the effectiveness of our proposed new CLDA and NLDA algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary "Scalar" type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
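The direct/iterative split can be illustrated concisely in SciPy (standing in for the C++ Trilinos packages Amesos2 and Belos, whose APIs are not reproduced here): a sparse LU factorization and a Krylov iteration solving the same system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A symmetric positive-definite tridiagonal test system.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Direct: factor once, solve exactly (the Amesos2 role).
x_direct = spla.spsolve(A, b)

# Iterative: conjugate gradients, matrix only needed via mat-vecs
# (the Belos role; CG applies since A is SPD).
x_iter, info = spla.cg(A, b, maxiter=10000)
```

Direct methods pay an up-front factorization cost but are robust; iterative methods need only matrix-vector products, which is what lets a library like Belos decouple algorithms from the linear-algebra implementation.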
Workshop report on large-scale matrix diagonalization methods in chemistry theory institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S.
The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally-dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint on the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of …
Covariance expressions for eigenvalue and eigenvector problems
NASA Astrophysics Data System (ADS)
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
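For a simple eigenvalue of a symmetric matrix, the Jacobian with respect to the matrix entries takes a compact outer-product form v v^T, and the forward-finite-difference validation mentioned above is easy to reproduce. This sketch covers only the symmetric case; the thesis's general expressions also require left eigenvectors for nonsymmetric matrices.

```python
import numpy as np

def eig_jacobian_sym(A, k):
    """For a simple eigenvalue lam_k of symmetric A, the Jacobian of lam_k
    with respect to the entries of A is the outer product v v^T."""
    lam, V = np.linalg.eigh(A)     # ascending eigenvalues
    v = V[:, k]
    return lam[k], np.outer(v, v)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])
lam, J = eig_jacobian_sym(A, 2)    # largest eigenvalue

# forward-finite-difference validation, perturbing the single entry (0, 1)
eps = 1e-7
Ap = A.copy()
Ap[0, 1] += eps
lam_p = np.max(np.linalg.eigvals(Ap).real)
fd = (lam_p - lam) / eps           # should approximate J[0, 1]
```

The same comparison, repeated entry by entry, is the numerical validation strategy described in the abstract.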
NASA Astrophysics Data System (ADS)
Anoukou, K.; Pastor, F.; Dufrenoy, P.; Kondo, D.
2016-06-01
The present two-part study aims at investigating the specific effects of a Mohr-Coulomb matrix on the strength of ductile porous materials by using a kinematic limit analysis approach. While in Part II static and kinematic bounds are numerically derived and used for validation purposes, the present Part I focuses on the theoretical formulation of a macroscopic strength criterion for porous Mohr-Coulomb materials. To this end, we consider a hollow sphere model with a rigid perfectly plastic Mohr-Coulomb matrix, subjected to axisymmetric uniform strain rate boundary conditions. Taking advantage of an appropriate family of three-parameter trial velocity fields accounting for the specific plastic deformation mechanisms of the Mohr-Coulomb matrix, we then provide a solution of the constrained minimization problem required for the determination of the macroscopic dissipation function. The macroscopic strength criterion is then obtained by means of the Lagrangian method combined with Karush-Kuhn-Tucker conditions. After a careful analysis and discussion of the plastic admissibility condition associated with the Mohr-Coulomb criterion, the above procedure leads to a parametric closed-form expression of the macroscopic strength criterion. The latter explicitly shows a dependence on the three stress invariants. In the special case of a friction angle equal to zero, the established criterion reduces to recently available results for porous Tresca materials. Finally, the effects of both matrix friction angle and porosity are briefly illustrated and, for completeness, the macroscopic plastic flow rule and the void evolution law are fully furnished.
NASA Technical Reports Server (NTRS)
Herb, G. T.
1973-01-01
Two areas of a laser range finder for a Mars roving vehicle are investigated: (1) laser scanning systems, and (2) range finder methods and implementation. Several ways of rapidly scanning a laser are studied. Two digital deflectors and a matrix of laser diodes are found to be acceptable. A complete range finder scanning system of high accuracy is proposed. The problem of incident laser spot distortion on the terrain is discussed. The instrumentation for a phase comparison, modulated laser range finder is developed and sections of it are tested.
Computing sparse derivatives and consecutive zeros problem
NASA Astrophysics Data System (ADS)
Chandra, B. V. Ravi; Hossain, Shahadat
2013-02-01
We describe a substitution-based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that on the test problems the performance of nearest-neighbor-type heuristic algorithms is highly encouraging.
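The compression step via graph coloring can be sketched with a greedy partition of columns into structurally orthogonal groups (a simple coloring heuristic; the paper's substitution-based reordering of the compressed pattern is not reproduced):

```python
import numpy as np

def greedy_column_coloring(pattern):
    """Greedy partition of columns into structurally orthogonal groups:
    two columns may share a group only if no row has a nonzero in both.
    Each group's columns can then be compressed into one seed direction
    for algorithmic differentiation."""
    n = pattern.shape[1]
    colors = -np.ones(n, dtype=int)
    groups = []                                   # occupied rows per group
    for j in range(n):
        col_rows = set(np.nonzero(pattern[:, j])[0])
        for c, occupied in enumerate(groups):
            if not (col_rows & occupied):         # structurally orthogonal
                colors[j] = c
                occupied |= col_rows
                break
        else:
            colors[j] = len(groups)
            groups.append(set(col_rows))
    return colors

# tridiagonal Jacobian pattern: three groups suffice regardless of n
n = 9
P = (np.eye(n, dtype=bool) | np.eye(n, k=1, dtype=bool)
     | np.eye(n, k=-1, dtype=bool))
colors = greedy_column_coloring(P)
```

With three colors, a tridiagonal Jacobian of any size is recovered from just three compressed derivative evaluations instead of n.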
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem.
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
A review of some problems in global-local stress analysis
NASA Technical Reports Server (NTRS)
Nelson, Richard B.
1989-01-01
The various types of local-global finite-element problems point out the need to develop a new generation of software. First, this new software needs to have a complete analysis capability, encompassing linear and nonlinear analysis of 1-, 2-, and 3-dimensional finite-element models, as well as mixed dimensional models. The software must be capable of treating static and dynamic (vibration and transient response) problems, including the stability effects of initial stress, and the software should be able to treat both elastic and elasto-plastic materials. The software should carry a set of optional diagnostics to assist the program user during model generation in order to help avoid obvious structural modeling errors. In addition, the program software should be well documented so the user has a complete technical reference for each type of element contained in the program library, including information on such topics as the type of numerical integration, use of underintegration, and inclusion of incompatible modes, etc. Some packaged information should also be available to assist the user in building mixed-dimensional models. An important advancement in finite-element software should be in the development of program modularity, so that the user can select from a menu various basic operations in matrix structural analysis.
New algorithms to compute the nearness symmetric solution of the matrix equation.
Peng, Zhen-Yun; Fang, Yang-Zhi; Xiao, Xian-Wei; Du, Dan-Dan
2016-01-01
In this paper we consider the nearness symmetric solution of the matrix equation AXB = C to a given matrix [Formula: see text] in the sense of the Frobenius norm. By discussing an equivalent form of the considered problem, we derive necessary and sufficient conditions for the matrix [Formula: see text] to be a solution of the considered problem. Based on the idea of the alternating variable minimization with multiplier method, we propose two iterative methods to compute the solution of the considered problem and analyze the global convergence of the proposed algorithms. Numerical results illustrate that the proposed methods are more effective than the two existing methods proposed in Peng et al. (Appl Math Comput 160:763-777, 2005) and Peng (Int J Comput Math 87:1820-1830, 2010).
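The least-squares core of this problem can be sketched directly with a dense baseline (this is not the authors' alternating-multiplier iteration): vectorizing AXB = C gives (B^T kron A) vec(X) = vec(C), and the symmetry of X can be enforced with a duplication map before solving in the least-squares sense.

```python
import numpy as np

def nearest_symmetric_solution(A, B, C):
    """Least-squares symmetric X minimizing ||A X B - C||_F.

    Uses the column-major identity vec(A X B) = (B^T kron A) vec(X) and
    restricts vec(X) to symmetric matrices via a duplication map D, so the
    unknowns are the n(n+1)/2 lower-triangular entries of X.
    """
    n = A.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i + 1)]
    D = np.zeros((n * n, len(pairs)))
    for k, (i, j) in enumerate(pairs):
        D[j * n + i, k] = 1.0          # entry (i, j) in column-major vec
        if i != j:
            D[i * n + j, k] = 1.0      # mirror entry (j, i)
    M = np.kron(B.T, A) @ D
    s, *_ = np.linalg.lstsq(M, C.flatten(order="F"), rcond=None)
    return (D @ s).reshape(n, n, order="F")
```

For large n the explicit Kronecker product is impractical, which is exactly what motivates iterative schemes such as the ones proposed in the paper.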
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.
ERIC Educational Resources Information Center
Steinberg, Esther R.; And Others
This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…
An improved V-Lambda solution of the matrix Riccati equation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Markley, F. Landis
1988-01-01
The authors present an improved algorithm for computing the V-Lambda solution of the matrix Riccati equation. The improvement, a reduction of the computational load, results from the orthogonality of the eigenvector matrix that has to be solved for. The orthogonality constraint reduces the number of independent parameters defining the matrix from n-squared to n(n - 1)/2. The authors show how to specify the parameters, how to solve for them, and how to form the needed eigenvector matrix from them. In the search for suitable parameters, the analogy between the present problem and the problem of attitude determination is exploited, resulting in the choice of Rodrigues parameters.
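The parameter count in the abstract (n-squared down to n(n - 1)/2) can be illustrated with the Cayley transform, the same construction that underlies Rodrigues parameters in the n = 3 attitude case; this generic sketch is only an analogue of the paper's specific parameterization.

```python
import numpy as np

def cayley_orthogonal(params, n):
    """Build an n x n orthogonal matrix from n(n-1)/2 free parameters via
    the Cayley transform R = (I - S)^(-1) (I + S), with S skew-symmetric.
    For n = 3 the three parameters are the classical Rodrigues parameters."""
    S = np.zeros((n, n))
    S[np.triu_indices(n, k=1)] = params   # fill strict upper triangle
    S = S - S.T                           # make S skew-symmetric
    I = np.eye(n)
    return np.linalg.solve(I - S, I + S)
```

Since (I - S) and (I + S) commute, R^T R = I holds exactly; I - S is always invertible because a real skew-symmetric matrix has purely imaginary eigenvalues.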
Borisov, Roman S; Polovkov, Nikolai Yu; Zhilyaev, Dmitry I; Zaikin, Vladimir G
2013-01-30
Herein we describe a strong matrix effect observed in the matrix-assisted laser desorption/ionization time-of-flight (MALDI-ToF) mass spectra of silylated glycerol alkoxylates and manifested in the loss of the silyl groups in the presence of carboxyl-containing matrices. Commercially available glycerol alkoxylates containing three end OH groups as well as three matrices - 2,5-dihydroxybenzoic acid (DHB), 3-indoleacrylic acid (IAA) and 1,8,9-anthracenetriol (dithranol) - were chosen for the investigation. N,O-Bis(trimethylsilyl)trifluoroacetamide containing 1% trimethylchlorosilane, acetic anhydride and a formylation mixture (formic acid/acetyl chloride) were used for derivatization. Initial oligomers and derivatized products were analyzed by MALDI-ToF mass spectrometry (MS) on an Autoflex II instrument, equipped with a nitrogen laser (λ 337 nm), in positive ion reflectron mode. Only [M + Na]+ ions were observed for underivatized polymers and for completely derivatized polymers in the presence of DHB and dithranol, respectively. In the case of IAA the mass spectra revealed sets of peaks for underivatized, partially derivatized, and completely derivatized oligomers. No similar 'matrix effect' was observed in the case of acylated glycerol alkoxylates (acyl = formyl, acetyl): only peaks for completely derivatized oligomers were obtained in all matrices: DHB, IAA and dithranol. Using 1,9-nonanediol, we showed that the 'matrix effect' was due to trans-silylation of carboxyl-containing matrices (DHB and IAA) during co-crystallization of silylated oligomers and matrices. The obtained results show that matrix molecules can participate as reactive species in MALDI-ToF-MS experiments. The matrix should be carefully chosen when a derivatization approach is applied, because the analysis of spectra of completely derivatized products is particularly desirable in the quantitative determination of functional end-groups. Copyright © 2012 John Wiley & Sons, Ltd.
Matrix Theory of Small Oscillations
ERIC Educational Resources Information Center
Chavda, L. K.
1978-01-01
A complete matrix formulation of the theory of small oscillations is presented. Simple analytic solutions involving matrix functions are found which clearly exhibit the transients, the damping factors, the Breit-Wigner form for resonances, etc. (BB)
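A minimal instance of the matrix formulation: for two equal masses joined to walls and to each other by three equal springs, the normal modes solve the generalized eigenproblem K v = ω² M v (a standard textbook example chosen here for illustration, not taken from the article).

```python
import numpy as np
from scipy.linalg import eigh

# Two equal masses m coupled by three equal springs k: M q'' + K q = 0.
m, k = 1.0, 1.0
M = np.diag([m, m])
K = np.array([[2 * k, -k],
              [-k, 2 * k]])
w2, modes = eigh(K, M)     # generalized symmetric eigenproblem K v = w^2 M v
freqs = np.sqrt(w2)        # angular frequencies: k/m and 3k/m modes
```

The lower mode (both masses moving in phase) has ω² = k/m; the higher mode (out of phase) has ω² = 3k/m.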
Estimating Depolarization with the Jones Matrix Quality Factor
NASA Astrophysics Data System (ADS)
Hilfiker, James N.; Hale, Jeffrey S.; Herzinger, Craig M.; Tiwald, Tom; Hong, Nina; Schöche, Stefan; Arwin, Hans
2017-11-01
Mueller matrix (MM) measurements offer the ability to quantify the depolarization capability of a sample. Depolarization can be estimated using terms such as the depolarization index or the average degree of polarization. However, these calculations require measurement of the complete MM. We propose an alternate depolarization metric, termed the Jones matrix quality factor, QJM, which does not require the complete MM. This metric provides a measure of how close, in a least-squares sense, a Jones matrix can be found to the measured Mueller matrix. We demonstrate and compare the use of QJM to other traditional calculations of depolarization for both isotropic and anisotropic depolarizing samples; including non-uniform coatings, anisotropic crystal substrates, and beetle cuticles that exhibit both depolarization and circular diattenuation.
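For contrast with the proposed Q_JM metric (whose least-squares Jones fit is not reproduced here), the depolarization index mentioned in the abstract has a simple closed form once the full 4x4 Mueller matrix is available.

```python
import numpy as np

def depolarization_index(M):
    """Gil-Bernabeu depolarization index of a 4x4 Mueller matrix:
    equals 1 for a non-depolarizing (Jones-realizable) matrix and
    0 for an ideal depolarizer."""
    M = np.asarray(M, dtype=float)
    return np.sqrt((np.sum(M ** 2) - M[0, 0] ** 2) / (3.0 * M[0, 0] ** 2))

identity_mm = np.eye(4)                       # non-depolarizing sample
ideal_depolarizer = np.diag([1.0, 0.0, 0.0, 0.0])
```

The abstract's point is that this index needs every element of the Mueller matrix, whereas Q_JM is designed to work from incomplete measurements.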
Investigation and Implementation of Matrix Permanent Algorithms for Identity Resolution
2014-12-01
calculation of the permanent of a matrix whose dimension is a function of target count [21]. However, the optimal approach for computing the permanent is...presently unclear. The primary objective of this project was to determine the optimal computing strategy(-ies) for the matrix permanent in tactical and...solving various combinatorial problems (see [16] for details and applications to a wide variety of problems) and thus can be applied to compute a
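One concrete candidate strategy for the permanent is Ryser's inclusion-exclusion formula, which reduces the cost from O(n!·n) brute force to O(2^n·n²) as written below; which strategy the report ultimately recommends is not stated in this excerpt.

```python
from itertools import combinations

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula: per(A) = sum over column subsets S of
    (-1)^(n-|S|) * prod_i sum_{j in S} a_ij."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** (n - r) * prod
    return total
```

A Gray-code ordering of the subsets brings the cost down a further factor of n, but the plain form above is easier to check against small cases (the permanent of the all-ones n x n matrix is n!).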
Conversion of a Rhotrix to a "Coupled Matrix"
ERIC Educational Resources Information Center
Sani, B.
2008-01-01
In this note, a method of converting a rhotrix to a special form of matrix termed a "coupled matrix" is proposed. The special matrix can be used to solve various problems involving n x n and (n - 1) x (n - 1) matrices simultaneously.
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
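For a toy pair of objectives over permutations (a simple stand-in for the paper's matrix-permutation criteria, not the authors' heuristic), exhaustive enumeration makes the full Pareto efficient set, supported and unsupported points alike, explicit for small n.

```python
from itertools import permutations

def path_cost(C, order):
    """Total cost of visiting objects in the given order under matrix C."""
    return sum(C[order[i]][order[i + 1]] for i in range(len(order) - 1))

def pareto_front(C1, C2):
    """Enumerate all n! orderings and keep the non-dominated objective
    pairs under simultaneous minimization. Unlike a weighted-sum scan,
    this retains unsupported Pareto efficient points as well."""
    n = len(C1)
    pts = {(path_cost(C1, p), path_cost(C2, p))
           for p in permutations(range(n))}
    return sorted(p for p in pts
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in pts))
```

A weighted-sum approach only reaches points on the lower-left convex hull of this set, which is precisely the limitation the abstract illustrates.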
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
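A sketch of the two normalizations at a single frequency line: ordinary coherences read directly off the cross-spectral density matrix, and an SVD-based split of total power that is one plausible reading of the "fractional coherence" idea (the precise definition in the paper may differ).

```python
import numpy as np

def ordinary_coherence(G):
    """Ordinary coherence functions |G_xy|^2 / (G_xx G_yy) from a
    cross-spectral density matrix G at one frequency line."""
    d = np.real(np.diag(G))
    return np.abs(G) ** 2 / np.outer(d, d)

def virtual_coherences(G):
    """Singular values of the CSD matrix normalized by their sum: the
    fraction of total power attributable to each independent virtual
    source (an assumed stand-in for the paper's fractional coherence)."""
    s = np.linalg.svd(G, compute_uv=False)
    return s / np.sum(s)
```

For a rank-one CSD matrix (a single underlying source) every ordinary coherence is 1 and all power sits in the first virtual source.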
NASA Astrophysics Data System (ADS)
Chen, Kewei; Zhan, Hongbin
2018-06-01
The reactive solute transport in a single fracture bounded by upper and lower matrix blocks is a classical problem that captures the dominant factors affecting transport behavior beyond the pore scale. A parallel fracture-matrix system, which considers the interaction among multiple parallel fractures, is an extension of the single fracture-matrix system. The existing analytical or semi-analytical solutions for solute transport in a parallel fracture-matrix system simplify the problem to various degrees, such as neglecting the transverse dispersion in the fracture and/or the longitudinal diffusion in the matrix. The difficulty of solving the full two-dimensional (2-D) problem lies in the calculation of the mass exchange between the fracture and the matrix. In this study, we propose an innovative Green's function approach to address the 2-D reactive solute transport in a parallel fracture-matrix system. The flux at the interface is calculated numerically. It is found that the transverse dispersion in the fracture can be safely neglected due to the small scale of the fracture aperture. However, neglecting the longitudinal matrix diffusion overestimates the concentration profile near the solute entrance face and underestimates it at the far side. The error caused by neglecting the longitudinal matrix diffusion decreases with increasing Peclet number, and the longitudinal matrix diffusion has no obvious influence on the concentration profile in the long term. The developed model is applied to a dense non-aqueous-phase-liquid (DNAPL) contamination field case in the New Haven Arkose of Connecticut, USA, to estimate trichloroethylene (TCE) behavior over 40 years. The ratio of TCE mass stored in the matrix to the injected TCE mass rises above 90% in less than 10 years.
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
A penny-shaped crack in a filament-reinforced matrix. I - The filament model. II - The crack problem
NASA Technical Reports Server (NTRS)
Erdogan, F.; Pacella, A. H.
1974-01-01
The study deals with the elastostatic problem of a penny-shaped crack in an elastic matrix which is reinforced by filaments or fibers perpendicular to the plane of the crack. An elastic filament model is first developed, followed by consideration of the application of the model to the penny-shaped crack problem in which the filaments of finite length are asymmetrically distributed around the crack. Since the primary interest is in the application of the results to studies relating to the fracture of fiber or filament-reinforced composites and reinforced concrete, the main emphasis of the study is on the evaluation of the stress intensity factor along the periphery of the crack, the stresses in the filaments or fibers, and the interface shear between the matrix and the filaments or fibers. Using the filament model developed, the elastostatic interaction problem between a penny-shaped crack and a slender inclusion or filament in an elastic matrix is formulated.
NASA Astrophysics Data System (ADS)
Kanaun, S.; Markov, A.
2017-06-01
An efficient numerical method for the solution of static problems of elasticity for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions) is developed. A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shapes is considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used; the method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz properties, and the Fast Fourier Transform technique can be used for calculating matrix-vector products with such matrices.
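The Toeplitz/FFT remark can be made concrete: a Toeplitz matrix embeds in a circulant matrix of twice the size, so its matrix-vector product costs O(n log n) instead of O(n²). This is a generic sketch of the technique, not the paper's elasticity code.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r
    (r[0] must equal c[0]) by x, via circulant embedding and the FFT.

    The circulant first column is [c, 0, r[n-1], ..., r[1]]; the top-left
    n x n block of that circulant is exactly the Toeplitz matrix."""
    n = len(c)
    col = np.concatenate([c, [0.0], r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))  # circular convolution
    return np.real(y[:n])
```

For the regular grids in the paper, every matrix-vector product inside an iterative solver can use this trick, which is what makes large discretized systems tractable.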
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of these methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
Matrix and Tensor Completion on a Human Activity Recognition Framework.
Savvaki, Sofia; Tsagkatakis, Grigorios; Panousopoulou, Athanasia; Tsakalides, Panagiotis
2017-11-01
Sensor-based activity recognition is encountered in innumerable applications of the arena of pervasive healthcare and plays a crucial role in biomedical research. Nonetheless, the frequent situation of unobserved measurements impairs the ability of machine learning algorithms to efficiently extract context from raw streams of data. In this paper, we study the problem of accurate estimation of missing multimodal inertial data and we propose a classification framework that considers the reconstruction of subsampled data during the test phase. We introduce the concept of forming the available data streams into low-rank two-dimensional (2-D) and 3-D Hankel structures, and we exploit data redundancies using sophisticated imputation techniques, namely matrix and tensor completion. Moreover, we examine the impact of reconstruction on the classification performance by experimenting with several state-of-the-art classifiers. The system is evaluated with respect to different data structuring scenarios, the volume of data available for reconstruction, and various levels of missing values per device. Finally, the tradeoff between subsampling accuracy and energy conservation in wearable platforms is examined. Our analysis relies on two public datasets containing inertial data, which extend to numerous activities, multiple sensing parameters, and body locations. The results highlight that robust classification accuracy can be achieved through recovery, even for extremely subsampled data streams.
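Forming a stream into a Hankel matrix is straightforward; the point is that structured streams (e.g. sampled exponentials or smooth signals) give low-rank Hankel matrices, which is the redundancy that matrix and tensor completion exploit. A minimal sketch of the structuring step only:

```python
import numpy as np

def hankel_matrix(stream, window):
    """Arrange a 1-D sensor stream of length n into a (window x k)
    Hankel matrix H with H[i, j] = stream[i + j], k = n - window + 1.
    Anti-diagonals are constant by construction."""
    n = len(stream)
    k = n - window + 1
    return np.array([stream[i:i + k] for i in range(window)])
```

A sampled geometric sequence gives a rank-one Hankel matrix, the extreme case of the low-rank structure that makes imputation of missing entries possible.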
Digital Copy of the Pulkovo Plate Collection
NASA Astrophysics Data System (ADS)
Kanaev, I.; Kanaeva, N.; Poliakow, E.; Pugatch, T.
This report is devoted to the problem of preserving the Pulkovo plate collection. In total, more than 50 thousand astronegatives are stored at the observatory, the earliest of which date back to 1893. The risk of emulsion deterioration grows with time. Since 1996, the digitization and recording of plate images to electronic media (HDD, CD) have been carried out at the observatory, and the database ECSIP (Electronic Collection of the Star Images of the Pulkovo) has been created. It holds both complete and extracted (separate-area) images of astronegatives. The plates as a whole are scanned on a photoscanner at a relatively coarse optical resolution of 600-2400 dpi, while the matrices with the separate images are digitized on the precision measuring machine "Fantasy" at high resolution (6000-25400 dpi). The ECSIP database can accept and store different types of matrix-structured data, including CCD frames. The ECSIP software includes systems for visualization, processing, and manipulation of the images, as well as programs for position and photometric measurements. To date, more than 40% of the complete and 10% of the extracted images have been digitized and recorded in the ECSIP database. The project was carried out with financial support from the Ministry of Science of the Russian Federation, grant 01-54 "The coordinate-measuring astrographic machine 'Fantasy'".
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
NASA Astrophysics Data System (ADS)
Adiga, Shreemathi; Saraswathi, A.; Praveen Prakash, A.
2018-04-01
This paper presents an interlinking approach of new Triangular Fuzzy Cognitive Maps (TrFCM) and the Combined Effective Time Dependent (CETD) matrix to rank the problems of transgenders. Section 1 begins with an introduction that briefly describes the scope of TrFCM and the CETD matrix. Section 2 models the causes of the problems faced by transgenders using the TrFCM method and performs the calculations on the data collected among transgenders. Section 3 discusses the main causes of the problems of transgenders. Section 4 describes Charles Spearman's coefficient of rank correlation method, interlinking the TrFCM method and the CETD matrix. Section 5 presents the results of the study.
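Spearman's coefficient of rank correlation, used in Section 4 to compare the two rankings, has the classical closed form rho = 1 - 6*sum(d_i^2)/(n(n^2 - 1)) when there are no ties:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation coefficient for two sequences with
    no tied values: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Identical orderings give rho = 1 and fully reversed orderings give rho = -1, which is the agreement scale used when comparing the TrFCM and CETD rankings.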
Solving large sparse eigenvalue problems on supercomputers
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef
1988-01-01
An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix by vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
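The "matrix only through matrix-by-vector products" property is exactly what modern sparse eigensolvers exploit. A small sketch with SciPy's ARPACK-based eigsh (an implicitly restarted Lanczos method) on a 1-D Laplacian; this is an illustration of the algorithmic idea, not the paper's CRAY implementation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D discrete Laplacian as a large sparse symmetric matrix in CSR form.
n = 500
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# eigsh touches A only through sparse matrix-vector products, so the
# storage and per-iteration cost scale with the number of nonzeros.
largest = eigsh(A, k=1, which="LM", return_eigenvectors=False)
```

This matrix has the known eigenvalues 2 - 2*cos(k*pi/(n+1)), which gives an exact check on the converged Ritz value.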
Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.
Hu, Yue; Allen, Genevera I
2015-12-01
Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We will demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.
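The ADMM splitting pattern behind the Local-Aggregate Model can be illustrated on the ordinary lasso, a simplified stand-in for the paper's aggregating penalty: the smooth fit term and the nonsmooth penalty are separated by a consensus constraint x = z and handled in alternating closed-form updates.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=300):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                       # scaled dual variable
    H = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(H, Atb + rho * (z - u))   # ridge-type x-update
        z = soft_threshold(x + u, lam / rho)          # shrinkage z-update
        u = u + x - z                                 # dual ascent
    return z
```

In the paper's setting the analogous z-update couples locations through the aggregating penalty, but each location's x-update stays local, which is what permits the distributed implementation.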
VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM
NASA Technical Reports Server (NTRS)
White, J. S.
1994-01-01
VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.
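The matrix Riccati step that VASP's subprograms implement for optimal feedback gains is available today in SciPy. A hypothetical double-integrator LQR example (chosen for illustration, not taken from the VASP documentation):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator x1' = x2, x2' = u, with quadratic cost weights Q, R.
# The optimal gain comes from the continuous algebraic Riccati equation
# A'P + PA - P B R^(-1) B' P + Q = 0, then u = -K x with K = R^(-1) B' P.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal state-feedback gain
```

For this system the Riccati equation can be solved by hand, giving K = [1, sqrt(3)], a convenient check on the numerical solution.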
Numerical methods on some structured matrix algebra problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1996-06-01
This proposal concerned the design, analysis, and implementation of serial and parallel algorithms for certain structured matrix algebra problems. It emphasized large order problems and so focused on methods that can be implemented efficiently on distributed-memory MIMD multiprocessors. Such machines supply the computing power and extensive memory demanded by the large order problems. We proposed to examine three classes of matrix algebra problems: the symmetric and nonsymmetric eigenvalue problems (especially the tridiagonal cases) and the solution of linear systems with specially structured coefficient matrices. As all of these are of practical interest, a major goal of this work was to translate our research in linear algebra into useful tools for use by the computational scientists interested in these and related applications. Thus, in addition to software specific to the linear algebra problems, we proposed to produce a programming paradigm and library to aid in the design and implementation of programs for distributed-memory MIMD computers. We now report on our progress on each of the problems and on the programming tools.
NASA Technical Reports Server (NTRS)
Dendy, J. E., Jr.
1981-01-01
The black box multigrid (BOXMG) code, which needs only a specification of the matrix problem for application of the multigrid method, was investigated. It is contended that a major problem with the multigrid method is that each new grid configuration requires a major programming effort to develop a code that specifically handles that grid configuration. The SOR and ICCG methods, by contrast, require only a specification of the matrix problem, no matter what the grid configuration. It is concluded that BOXMG does everything else necessary to set up the auxiliary coarser problems to achieve a multigrid solution.
Interval-valued distributed preference relation and its application to group decision making
Liu, Yin; Xue, Min; Chang, Wenjun; Yang, Shanlin
2018-01-01
As an important way to help express the preference relation between alternatives, distributed preference relation (DPR) can represent the preferred, non-preferred, indifferent, and uncertain degrees of one alternative over another simultaneously. DPR, however, is unavailable in some situations where a decision maker cannot provide the precise degrees of one alternative over another due to lack of knowledge, experience, and data. In this paper, to address this issue, we propose interval-valued DPR (IDPR) and present its properties of validity and normalization. Through constructing two optimization models, an IDPR matrix is transformed into a score matrix to facilitate the comparison between any two alternatives. The properties of the score matrix are analyzed. To guarantee the rationality of the comparisons between alternatives derived from the score matrix, the additive consistency of the score matrix is developed. On this basis, IDPRs are applied to model and solve the multiple criteria group decision making (MCGDM) problem. In particular, the relationship between the parameters for the consistency of the score matrix associated with each decision maker and those for the consistency of the score matrix associated with the group of decision makers is analyzed. A manager selection problem is investigated to demonstrate the application of IDPRs to MCGDM problems. PMID:29889871
Modelling polarization dependent absorption: The vectorial Lambert-Beer law
NASA Astrophysics Data System (ADS)
Franssens, G.
2014-07-01
The scalar Lambert-Beer law, describing the absorption of unpolarized light travelling through a linear non-scattering medium, is simple, well-known, and mathematically trivial. However, when we take the polarization of light into account and consider a medium with polarization dependent absorption, we now need a Vectorial Lambert-Beer Law (VLBL) to quantify this interaction. Such a generalization of the scalar Lambert-Beer law appears not to be readily available. A careful study of this topic reveals that it is not a trivial problem. We will see that the VLBL is not and cannot be a straightforward vectorized version of its scalar counterpart. The aim of the work is to present the general form of the VLBL and to explain how it arises. A reasonable starting point to derive the VLBL is the Vectorial Radiative Transfer Equation (VRTE), which models the absorption and scattering of (partially) polarized light travelling through a linear medium. When we turn off scattering, the VRTE becomes an infinitesimal model for the VLBL holding in the medium. By integrating this equation, we expect to find the VLBL. Surprisingly, this is not the end of the story. It turns out that light propagation through a medium with polarization-dependent absorption is mathematically not that trivial. The trickiness behind the VLBL can be understood in the following terms. The matrix in the VLBL, relating any input Stokes vector to the corresponding output Stokes vector, must necessarily be a Mueller matrix. The subset of invertible Mueller matrices forms a Lie group. It is known that this Lie group contains the ortho-chronous Lorentz group as a subgroup. The group manifold of this subgroup has a (well-known) non-trivial topology. Consequently, the manifold of the Lie group of Mueller matrices also has (at least the same, but likely a more general) non-trivial topology (the full extent of which is not yet known). 
The type of non-trivial topology possessed by the manifold of (invertible) Mueller matrices, which stems from the orthochronous Lorentz group, already implies (by a theorem from Lie group theory) that the infinitesimal VRTE model for the VLBL is not guaranteed to produce the correct finite model (i.e., the VLBL itself) upon integration. What happens is that the non-trivial topology acts as an obstruction that prevents the (matrix) exponential function from reaching the correct Mueller matrix (for the medium at hand), because it is too far away from the identity matrix. This means that, for certain media, the VLBL obtained by integrating the VRTE may be different from the VLBL that one would actually measure. Basically, we have here an example of a physical problem that cannot be completely described by a differential equation! The following more concrete example further illustrates the problem. Imagine a slab of matter showing polarization-dependent absorption but negligible scattering, and consider its Mueller matrix for forward-propagating plane waves. Will the measured Mueller matrix of such a slab always have positive determinant? There is no apparent mathematical or physical reason why this (or any) Mueller matrix must have positive determinant. On the other hand, our VRTE model with scattering turned off will always generate a Mueller matrix with positive determinant. This particular example also presents a nice challenge and opportunity for the experimenter: demonstrate the existence of a medium of the envisioned type having a Mueller matrix with non-positive determinant! Lie group theory not only explains when and why we cannot trust a differential equation, but also offers a way out of such a situation if it arises. Applied to our problem, Lie group theory in addition yields the general form of the VLBL. More details will be given in the presentation.
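The determinant claim is easy to check numerically. With scattering turned off, the infinitesimal model integrates to a transfer matrix of the form exp(-KL) for some real 4x4 differential matrix K, and Jacobi's formula det(exp(A)) = exp(tr A) forces every such matrix to have a strictly positive determinant, whatever K is. A small sketch (generic random generators, not a physical absorption model):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
for _ in range(200):
    K = rng.normal(size=(4, 4))   # arbitrary real 4x4 "absorption" generator
    M = expm(-K)                   # finite transfer (Mueller-type) matrix
    # Jacobi's formula: det(exp(A)) = exp(trace(A)) > 0 for every real A
    assert np.isclose(np.linalg.det(M), np.exp(-np.trace(K)), rtol=1e-6)
    assert np.linalg.det(M) > 0
```

So any medium whose measured Mueller matrix had a non-positive determinant could not arise from integrating the scattering-free VRTE, which is exactly the experimental test proposed in the abstract.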
The KS Method in Light of Generalized Euler Parameters.
1980-01-01
...motion for the restricted two-body problem is transformed via the Kustaanheimo-Stiefel (KS) transformation method into a dynamical equation in the... Kustaanheimo-Stiefel transformation method (KS) in the two-body problem. Many papers have appeared in which specific problems or applications have... TRANSFORMATION MATRIX: P. Kustaanheimo and E. Stiefel proposed a regularization method by introducing a 4 x 4 transformation matrix and four-component...
A Problem-Centered Approach to Canonical Matrix Forms
ERIC Educational Resources Information Center
Sylvestre, Jeremy
2014-01-01
This article outlines a problem-centered approach to the topic of canonical matrix forms in a second linear algebra course. In this approach, abstract theory, including such topics as eigenvalues, generalized eigenspaces, invariant subspaces, independent subspaces, nilpotency, and cyclic spaces, is developed in response to the patterns discovered…
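As a concrete instance of the canonical forms such a course builds toward, the Jordan decomposition A = PJP^(-1) can be computed symbolically; the following sketch (assuming sympy) uses a matrix whose repeated eigenvalue has a deficient eigenspace:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
P, J = A.jordan_form()            # returns P, J with A = P*J*P^{-1}
assert sp.simplify(P * J * P.inv()) == A
# the eigenvalue 2 has algebraic multiplicity 2 but only a 1-dimensional
# eigenspace, so J must contain a 2x2 Jordan block, not a diagonal pair
assert 1 in (J[0, 1], J[1, 2])
```

This ties together several of the topics listed above: the generalized eigenspace for eigenvalue 2 is 2-dimensional even though the ordinary eigenspace is not.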
Matrix differentiation formulas
NASA Technical Reports Server (NTRS)
Usikov, D. A.; Tkhabisimov, D. K.
1983-01-01
A compact differentiation technique (without using indexes) is developed for scalar functions that depend on complex matrix arguments which are combined by operations of complex conjugation, transposition, addition, multiplication, matrix inversion and taking the direct product. The differentiation apparatus is developed in order to simplify the solution of extremum problems of scalar functions of matrix arguments.
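As a small illustration of the kind of formula such an apparatus produces (standard identities, not the authors' notation), the derivatives d tr(AX)/dX = A^T and d tr(X^T A X)/dX = (A + A^T)X can be verified against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
X = rng.normal(size=(4, 4))

def numeric_grad(f, X, eps=1e-6):
    """Entrywise central-difference gradient of a scalar matrix function."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

# d/dX tr(A X) = A^T
G1 = numeric_grad(lambda Y: np.trace(A @ Y), X)
assert np.allclose(G1, A.T, atol=1e-5)

# d/dX tr(X^T A X) = (A + A^T) X
G2 = numeric_grad(lambda Y: np.trace(Y.T @ A @ Y), X)
assert np.allclose(G2, (A + A.T) @ X, atol=1e-5)
```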
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve the robustness of parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation; these are often difficult linear systems to solve by iterative methods. 4. We also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations when the matrix is highly indefinite; the strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA, which was made publicly available; it was the first such library to offer complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We released a new version (version 3) of our parallel solver, pARMS. As part of this we tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the problem of evaluating f(A)v, which arises in statistical sampling. 11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix, which arises in statistical applications as well as in many applications in physics; we explored probing methods as well as domain-decomposition-type methods. 12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain-decomposition approach. 13. We explored new ways of preconditioning linear systems based on low-rank approximations.
Breaking Megrelishvili protocol using matrix diagonalization
NASA Astrophysics Data System (ADS)
Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio
2018-03-01
In this article we conduct a theoretical security analysis of the Megrelishvili protocol, a linear-algebra-based key agreement scheme between two participants. We study the computational complexity of the Megrelishvili vector-matrix problem (MVMP), a mathematical problem that strongly relates to the security of the protocol. In particular, we investigate asymptotic upper bounds for the running time and memory requirement of the MVMP when the public matrix is diagonalizable. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all previously existing algorithms. We also find an important counterintuitive result: the use of a primitive matrix in the Megrelishvili protocol makes the protocol more vulnerable to attacks.
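The role of diagonalization can be sketched generically. Megrelishvili-style schemes involve vector-matrix powers of the form vM^n; once an eigendecomposition M = PDP^(-1) is in hand, M^n = PD^nP^(-1) reduces matrix powers to scalar powers of the eigenvalues, which is the structural fact a diagonalization attack exploits. A toy numerical check (illustrative only, not the protocol or the attack):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))
v = rng.normal(size=5)
n = 10

w_naive = v.copy()
for _ in range(n):                       # n vector-matrix products
    w_naive = w_naive @ M

d, P = np.linalg.eig(M)                  # M = P diag(d) P^{-1}
w_diag = ((v @ P) * d**n) @ np.linalg.inv(P)
w_diag = w_diag.real                     # the exact result is real

assert np.allclose(w_naive, w_diag, rtol=1e-8, atol=1e-8)
```

After the one-time eigendecomposition, each additional power costs only elementwise scalar exponentiation, instead of a fresh matrix product per step.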
AP-MALDI Mass Spectrometry Imaging of Gangliosides Using 2,6-Dihydroxyacetophenone
NASA Astrophysics Data System (ADS)
Jackson, Shelley N.; Muller, Ludovic; Roux, Aurelie; Oktem, Berk; Moskovets, Eugene; Doroshenko, Vladimir M.; Woods, Amina S.
2018-03-01
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) is widely used as a unique tool to record the distribution of a large range of biomolecules in tissues. The 2,6-dihydroxyacetophenone (DHA) matrix has been shown to provide efficient ionization of lipids, especially gangliosides. The major drawback of DHA for MS imaging is that it sublimes under vacuum (low pressure) over the extended time necessary to complete high-spatial- and high-mass-resolution MSI studies of whole organs. To overcome the problem of sublimation, we used an atmospheric pressure (AP)-MALDI source to obtain high-spatial-resolution images of lipids in the brain using a high-mass-resolution mass spectrometer. Additionally, the advantages of atmospheric pressure and DHA for imaging gangliosides are highlighted. The imaging of [M-H]- and [M-H2O-H]- mass peaks for GD1 gangliosides showed different distributions, most likely reflecting the distinct spatial distributions of the GD1a and GD1b species in the brain.
Infinite capacity multi-server queue with second optional service channel
NASA Astrophysics Data System (ADS)
Ke, Jau-Chuan; Wu, Chia-Huang; Pearn, Wen Lea
2013-02-01
This paper deals with an infinite-capacity multi-server queueing system with a second optional service (SOS) channel. The inter-arrival times of arriving customers and the service times of both the first essential service (FES) and the SOS channel are exponentially distributed. A customer may leave the system after the FES with probability (1-θ), or may immediately require an SOS at the completion of the FES with probability θ (0 <= θ <= 1). Formulae for computing the rate matrix and the stationary probabilities are derived by means of a matrix-analytical approach. A cost model is developed to determine the optimal values of the number of servers and the two service rates simultaneously, at the minimal total expected cost per unit time. A quasi-Newton method is employed to deal with the optimization problem. Under optimal operating conditions, numerical results are provided in which several system performance measures are calculated based on assumed numerical values of the system parameters.
Reactive solute transport in an asymmetrical fracture-rock matrix system
NASA Astrophysics Data System (ADS)
Zhou, Renjie; Zhan, Hongbin
2018-02-01
The understanding of reactive solute transport in a single fracture-rock matrix system is the foundation for studying transport behavior in complex fractured porous media. When transport properties are asymmetrically distributed in the adjacent rock matrixes, reactive solute transport has to be considered as a coupled three-domain problem, which is more complex than the symmetric case with identical transport properties in the adjacent rock matrixes. This study deals with the transport problem in a single fracture-rock matrix system with an asymmetrical distribution of transport properties in the rock matrixes. Mathematical models are developed for such a problem under the first-type and the third-type boundary conditions to analyze the spatio-temporal concentration and mass distribution in the fracture and rock matrix, with the help of the Laplace transform technique and the de Hoog numerical inverse Laplace algorithm. The newly acquired solutions are tested extensively against previous analytical and numerical solutions and are proven to be robust and accurate. Furthermore, a water flushing phase is imposed on the left boundary of the system after a certain time. The diffusive mass exchange along the fracture/rock matrix interfaces and the relative masses stored in each of the three domains (fracture, upper rock matrix, and lower rock matrix) after the water flushing provide insight into transport with an asymmetric distribution of transport properties. This study has the following findings: 1) Asymmetric distribution of transport properties exerts greater control on solute transport in the rock matrixes, while transport in the fracture is only mildly influenced. 2) The mass stored in the fracture responds quickly to water flushing, while the mass stored in the rock matrix is much less sensitive to it. 3) The diffusive mass exchange during the water flushing phase has similar patterns under symmetric and asymmetric cases.
4) The characteristic distance, at which the diffusive exchange between the fracture and the rock matrix vanishes during the water flushing phase, is closely associated with the dispersive process in the fracture.
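The numerical inversion step can be illustrated with an off-the-shelf implementation: the mpmath library (assumed here as a stand-in for the authors' own code) ships a de Hoog-type numerical inverse Laplace transform. For a transform with a known closed-form inverse:

```python
import mpmath as mp

F = lambda s: 1 / (s + 1)                 # Laplace transform of f(t) = exp(-t)
for t in [0.5, 1.0, 2.0]:
    ft = mp.invertlaplace(F, t, method='dehoog')
    assert abs(ft - mp.exp(-t)) < 1e-6    # matches the analytic inverse exp(-t)
```

In the fracture-transport setting, the role of F is played by the Laplace-domain concentration solution, which generally has no closed-form inverse; that is precisely why a numerical algorithm such as de Hoog's is needed.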
Micromechanical modeling of damage growth in titanium based metal-matrix composites
NASA Technical Reports Server (NTRS)
Sherwood, James A.; Quimby, Howard M.
1994-01-01
The thermomechanical behavior of continuous-fiber reinforced titanium-based metal-matrix composites (MMC) is studied using the finite element method. A thermoviscoplastic unified state variable constitutive theory is employed to capture inelastic and strain-rate-sensitive behavior in the Timetal-21s matrix. The SCS-6 fibers are modeled as thermoelastic. The effects of residual stresses generated during the consolidation process on the tensile response of the composites are investigated. Unidirectional and cross-ply geometries are considered. Differences between the tensile responses of composites with perfectly bonded and completely debonded fiber/matrix interfaces are discussed. Model simulations for the completely debonded-interface condition are shown to correlate well with experimental results.
Transferring elements of a density matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allahverdyan, Armen E.; Hovhannisyan, Karen V.; Yerevan State University, A. Manoogian Street 1, Yerevan
2010-01-15
We study restrictions imposed by quantum mechanics on the process of matrix-element transfer. This problem is at the core of quantum measurements and state transfer. Given two systems A and B with initial density matrices λ and r, respectively, we consider interactions that lead to transferring certain matrix elements of an unknown λ into those of the final state r̃ of B. We find that this process eliminates the memory of the transferred (or certain other) matrix elements from the final state of A. If one diagonal matrix element is transferred, r̃_aa = λ_aa, the memory of each nondiagonal element λ_ab (a ≠ b) is completely eliminated from the final density operator of A. Consider the following three quantities: Re λ_ab, Im λ_ab, and λ_aa − λ_bb (the real and imaginary parts of a nondiagonal element and the corresponding difference between diagonal elements). Transferring one of them, e.g., Re r̃_ab = Re λ_ab, erases the memory of the two others from the final state of A. Generalization of these setups to a finite-accuracy transfer brings in a trade-off between the accuracy and the amount of preserved memory. This trade-off is expressed via system-independent uncertainty relations that account for local aspects of the accuracy-disturbance trade-off in quantum measurements. Thus, the general aspect of state disturbance in quantum measurements is the elimination of memory of non-diagonal elements, rather than diagonalization.
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for multidisciplinary optimization and identification of nonlinear models of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix, as in the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System), which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
Hollaus, K; Magele, C; Merwa, R; Scharfetter, H
2004-02-01
Magnetic induction tomography of biological tissue is used to reconstruct changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem, a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix that maps changes of the conductivity distribution onto changes of the voltage induced in a receiver coil, is needed. Using finite differences to determine the entries of the sensitivity matrix is not feasible because of the high computational cost of the underlying eddy current problem; therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric tetrahedral edge elements of second order. To test the method, various simulations were carried out and discussed.
Parallel scalability of Hartree-Fock calculations
NASA Astrophysics Data System (ADS)
Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.
2015-03-01
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
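The purification idea referenced here can be sketched with the classic McWeeny iteration D <- 3D^2 - 2D^3, which drives the eigenvalues of a suitably scaled starting matrix to 0 or 1 using only matrix products, so the density matrix (the projector onto the occupied subspace) is obtained without any eigendecomposition. Everything below (the model "Fock" matrix, its spectral bounds, and the gap at zero) is fabricated for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_occ = 8, 3

# model Fock matrix with a clear gap: 3 occupied levels below 0, 5 above
e = np.concatenate([rng.uniform(-2, -1, n_occ), rng.uniform(1, 2, n - n_occ)])
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
F = Q @ np.diag(e) @ Q.T

# initial guess: map the spectrum into [0, 1] with occupied levels above 1/2
emin, emax = -2.5, 2.5                  # assumed spectral bounds; 0 lies in the gap
D = 0.5 * np.eye(n) - F / (emax - emin)

for _ in range(30):                     # McWeeny iteration: D <- 3D^2 - 2D^3
    D = 3 * D @ D - 2 * D @ D @ D

assert np.allclose(D @ D, D, atol=1e-8)        # idempotent: a projector
assert np.isclose(np.trace(D), n_occ)          # correct occupation count
assert np.allclose(D @ F, F @ D, atol=1e-8)    # commutes with F
```

The cost per step is a handful of dense (or, in linear-scaling codes, sparse) matrix multiplications, which is why the abstract notes that purification becomes bandwidth-bound rather than compute-bound at scale.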
Efficient solution of the simplified P N equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
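Of the eigenvalue solvers compared, the two simplest can be sketched in a few lines on a generic symmetric matrix (a stand-in, not the SPN operator): plain power iteration converges at a rate set by the ratio of the two largest-magnitude eigenvalues, and a spectral shift B = A + σI is the standard way to improve that ratio.

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.normal(size=(31, 31)))
evals = np.arange(-15, 16).astype(float)   # -15 and +15 tie in magnitude
A = Q @ np.diag(evals) @ Q.T

def power_iteration(B, iters=2000):
    x = rng.normal(size=B.shape[0])
    for _ in range(iters):
        x = B @ x
        x /= np.linalg.norm(x)
    return x @ B @ x                       # Rayleigh-quotient estimate

# plain power iteration cannot separate the eigenvalues -15 and +15,
# which have equal magnitude; the shift B = A + sigma*I breaks the tie
sigma = 20.0
lam = power_iteration(A + sigma * np.eye(31)) - sigma
assert np.isclose(lam, 15.0, atol=1e-8)
```

Shift-and-invert and Davidson-type methods take this idea further by replacing the shifted multiply with a (preconditioned) shifted solve, which is where the reported 30-40x speedups come from.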
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
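The partitioned idea can be sketched one level deep: a symmetric positive definite matrix is inverted from its 2x2 block partition via the Schur complement, so only the two smaller diagonal blocks ever need factorizing. The recursive algorithm applies the same step to those blocks in turn; this sketch is a generic illustration, not the SOLVE code.

```python
import numpy as np

def partitioned_inverse(M, k):
    """Invert a symmetric positive definite M from a 2x2 block partition.

    Only the k x k block and its (n-k) x (n-k) Schur complement are
    inverted, so the full inverse is assembled from smaller pieces."""
    A, B = M[:k, :k], M[:k, k:]
    C = M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = C - B.T @ Ainv @ B                  # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    TL = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv
    TR = -Ainv @ B @ Sinv
    return np.block([[TL, TR], [TR.T, Sinv]])

rng = np.random.default_rng(5)
X = rng.normal(size=(8, 8))
M = X @ X.T + 8 * np.eye(8)                 # symmetric positive definite
Minv = partitioned_inverse(M, 3)
assert np.allclose(Minv, np.linalg.inv(M), atol=1e-10)
```

Applied recursively with blocks sized to fit in core, this is how a 1500 x 1500 normal-equations matrix can be inverted while holding only a small fraction of it in memory at a time.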
ERIC Educational Resources Information Center
Van Deun, Katrijn; Heiser, Willem J.; Delbeke, Luc
2007-01-01
A multidimensional unfolding technique that is not prone to degenerate solutions and is based on multidimensional scaling of a complete data matrix is proposed: distance information about the unfolding data and about the distances both among judges and among objects is included in the complete matrix. The latter information is derived from the…
ERIC Educational Resources Information Center
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-01-01
The present study examines 5- to 8-year-old children's relation reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's…
Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.
ERIC Educational Resources Information Center
Knol, Dirk L.; ten Berge, Jos M. F.
1989-01-01
An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…
Matrix Management in DoD: An Annotated Bibliography
1984-04-01
Performing organization: ACSC/EDCC, Maxwell AFB AL 36112. ...completes their message that matrix organization is the likely format of the multiprogram Program Office. The text's discussion of matrix is... manager, and functional specialist are of vital importance to the effective operation of the matrix.... Matrix management will not achieve its...
Development of a Problem-Based Learning Matrix for Data Collection
ERIC Educational Resources Information Center
Sipes, Shannon M.
2017-01-01
Few of the papers published in journals and conference proceedings on problem-based learning (PBL) are empirical studies, and most of these use self-report as the measure of PBL (Beddoes, Jesiek, & Borrego, 2010). The current study provides a theoretically derived matrix for coding and classifying PBL that was objectively applied to official…
Discrepancy Analysis and Continuity Matrix: Tools for Measuring the Impact of Inservice Training.
ERIC Educational Resources Information Center
Kite, R. Hayman
Within an inservice training program there is a functional interdependent relationship among problems, causes, and solutions. During a sequence of eight steps to ascertain program impact, a "continuity matrix", a management technique that assists in dealing with the problem/solution paradox, is created. A successful training program must: (1) aim…
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich; Stahl, Stephanie
2008-01-01
Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally-optimal solutions for matrices up to size 30x30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally-optimal solutions, but computation…
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response vector that has a sparse support on the spatio-temporal plane. We use convex-relaxation-based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
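The matrix shrinkage operator mentioned here is, in its usual formulation, singular-value soft-thresholding, the proximal operator of the nuclear norm. A generic sketch on synthetic data (not the radar covariance pipeline): small singular values, which carry mostly noise, are driven to zero, leaving a low-rank estimate.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(6)
L = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 40))   # rank-3 "clutter"
noisy = L + 0.05 * rng.normal(size=(40, 40))
Lhat = svt(noisy, tau=2.0)

assert np.linalg.matrix_rank(Lhat) == 3                    # noise directions removed
assert np.linalg.norm(Lhat - L) / np.linalg.norm(L) < 0.2  # signal mostly preserved
```

The threshold tau trades bias against rank: every surviving singular value is reduced by tau, which is the price of suppressing the noise tail.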
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, yielding a new, more concise sparse model in which both the scale of the localization problem and the noise level are reduced; the new model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and, when the number of snapshots is large, it also reduces the time complexity and noise level of the localization problem by using the SVD of the observation matrix, as is proved in this paper.
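The subspace step can be illustrated with plain narrowband MUSIC on a hypothetical half-wavelength uniform line array (free-space steering vectors, uncorrelated sources). Matched-field processing replaces these steering vectors with modeled acoustic fields, and CS-MUSIC addresses the correlated-source and few-snapshot cases that this plain version handles poorly.

```python
import numpy as np

rng = np.random.default_rng(7)
m, snapshots = 12, 200
true_deg = np.array([-20.0, 35.0])

def steering(theta_rad):
    # half-wavelength uniform line array response
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta_rad))

A = np.stack([steering(t) for t in np.deg2rad(true_deg)], axis=1)
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
N = 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
X = A @ S + N                            # observation matrix

U, s, _ = np.linalg.svd(X)               # SVD splits signal / noise subspaces
En = U[:, 2:]                            # noise subspace (source count assumed known)

grid = np.deg2rad(np.arange(-90.0, 90.0, 0.25))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# pick the two largest local maxima of the pseudospectrum
loc = [i for i in range(1, len(grid) - 1) if spec[i - 1] < spec[i] > spec[i + 1]]
top2 = sorted(sorted(loc, key=lambda i: spec[i])[-2:])
est_deg = np.rad2deg(grid[top2])
assert np.allclose(est_deg, true_deg, atol=1.0)
```

The pseudospectrum peaks where a candidate steering vector is orthogonal to the noise subspace; with coherent sources the signal subspace collapses and these peaks merge, which motivates the CS-based modification in the paper.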
Fracture of a Brittle-Particle Ductile Matrix Composite with Applications to a Coating System
NASA Astrophysics Data System (ADS)
Bianculli, Steven J.
In material systems consisting of hard second-phase particles in a ductile matrix, failure initiating from cracking of the second-phase particles is an important failure mechanism. This dissertation applies the principles of fracture mechanics to this problem, first from the standpoint of fracture of the particles, and then the onset of crack propagation from fractured particles. The research was inspired by observation of the failure mechanism of a commercial zinc-based anti-corrosion coating, and the analysis was initially approached as a coatings problem. As the work progressed it became evident that the failure mechanism was relevant to a broad range of composite material systems, and the approach was generalized to consider failure of a system consisting of ellipsoidal second-phase particles in a ductile matrix. The starting point for the analysis is the classical Eshelby problem, which considered stress transfer from the matrix to an ellipsoidal inclusion. The particle fracture problem is approached by considering cracks within particles and how they are affected by the particle/matrix interface, the difference in properties between the particle and matrix, and particle shape. These effects are mapped out for a wide range of material combinations. The trends developed show that, although the particle fracture problem is very complex, the potential for fracture can, for certain ranges of particle shape, be assessed easily on the basis of the Eshelby stress alone. Additionally, the evaluation of cracks near the curved particle/matrix interface adds to the existing body of work on cracks approaching bi-material interfaces in layered material systems. The onset of crack propagation from fractured particles is then considered as a function of particle shape and the mismatch in material properties between the particle and matrix. This behavior is mapped out for a wide range of material combinations.
The final section of this dissertation qualitatively considers an approach to determine critical particle sizes, below which crack propagation will not occur for a coating system that exhibited stable cracks in an interfacial layer between the coating and substrate.
Computing singularities of perturbation series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kvaal, Simen; Jarlebring, Elias; Michiels, Wim
2011-03-15
Many properties of current ab initio approaches to the quantum many-body problem, both perturbational and otherwise, are related to the singularity structure of the Rayleigh-Schroedinger perturbation series. A numerical procedure is presented that in principle computes the complete set of singularities, including the dominant singularity which limits the radius of convergence. The method approximates the singularities as eigenvalues of a certain generalized eigenvalue equation which is solved using iterative techniques. It relies on computation of the action of the Hamiltonian matrix on a vector and does not rely on the terms in the perturbation series. The method can be useful for studying perturbation series of typical systems of moderate size, for fundamental development of resummation schemes, and for understanding the structure of singularities for typical systems. Some illustrative model problems are studied, including a helium-like model with δ-function interactions for which Moeller-Plesset perturbation theory is considered and the radius of convergence found.
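The abstract above centers on the dominant singularity that limits the radius of convergence. The method it describes deliberately avoids using the series terms, but the connection can be illustrated on a toy series with a coefficient-based ratio estimate; the geometric series below is an assumption for illustration only, not a system from the paper.

```python
import numpy as np

# Toy series with a known dominant singularity: 1/(1 - x/2) = sum_n (x/2)^n,
# so the radius of convergence is exactly 2.
c = np.array([0.5 ** n for n in range(40)])

# Domb-Sykes-style ratio estimate of the radius of convergence from
# successive coefficients |c_n / c_{n+1}|.
ratios = np.abs(c[:-1] / c[1:])
radius_estimate = ratios[-1]
```

For series whose coefficients are not available, as in the abstract, the singularities must instead be extracted from the operator itself, which is exactly what the generalized eigenvalue formulation accomplishes.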
Evaluating the interpersonal content of the MMPI-2-RF Interpersonal Scales.
Ayearst, Lindsay E; Sellbom, Martin; Trobst, Krista K; Bagby, R Michael
2013-01-01
Convergence between the MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) interpersonal scales and 2 interpersonal circumplex (IPC) measures was examined. University students (N = 405) completed the MMPI-2 and 2 IPC measures, the Interpersonal Adjectives Scales Revised Big Five Version (IASR-B5; Trapnell & Wiggins, 1990) and the Inventory of Interpersonal Problems Circumplex (IIP-C; Horowitz, Alden, Wiggins, & Pincus, 2000). Internal consistency was adequate for 3 of the 6 scales investigated. The majority of scales were located in their hypothesized locations, although the magnitude of the correlations was somewhat weaker than anticipated, partly owing to range restriction from the use of a healthy sample. The expected pattern of correlations that defines a circular matrix was demonstrated, lending support to the convergent and discriminant validity of the MMPI-2-RF interpersonal scales with respect to the assessment of interpersonal traits and problems.
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
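The key step described above, estimating a low-dimensional temporal subspace from a few fully sampled k-space locations and applying the projection as a plain matrix multiplication, can be sketched on synthetic data. The sizes, rank, and data below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, rank = 50, 200, 3   # time points, k-space locations, subspace dimension

# Synthetic data: every k-space location's temporal signal lies in the same
# rank-3 temporal subspace (the low-rank assumption behind the method).
U = np.linalg.qr(rng.standard_normal((T, rank)))[0]
X = U @ rng.standard_normal((rank, K))

# Estimate the subspace from a small number of fully sampled locations...
calib = X[:, :20]
U_est = np.linalg.svd(calib, full_matrices=False)[0][:, :rank]

# ...and apply the subspace projection as a single matrix multiplication,
# replacing the singular value thresholding of generic low-rank completion.
P = U_est @ U_est.T
projection_error = np.linalg.norm(P @ X - X)
```

Because the projector is fixed after calibration, each iteration costs one matrix product rather than an SVD, which is the computational saving the abstract highlights.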
NASA Technical Reports Server (NTRS)
Lee, Jong-Won; Allen, David H.
1993-01-01
The uniaxial response of a continuous fiber elastic-perfectly plastic composite is modeled herein as a two-element composite cylinder. An axisymmetric analytical micromechanics solution is obtained for the rate-independent elastic-plastic response of the two-element composite cylinder subjected to tensile loading in the fiber direction for the case wherein the core fiber is assumed to be a transversely isotropic elastic-plastic material obeying the Tsai-Hill yield criterion, with yielding simulating fiber failure. The matrix is assumed to be an isotropic elastic-plastic material obeying the Tresca yield criterion. It is found that there are three different circumstances that depend on the fiber and matrix properties: fiber yield, followed by matrix yielding; complete matrix yield, followed by fiber yielding; and partial matrix yield, followed by fiber yielding, followed by complete matrix yield. The order in which these phenomena occur is shown to have a pronounced effect on the predicted uniaxial effective composite response.
ISS method for coordination control of nonlinear dynamical agents under directed topology.
Wang, Xiangke; Qin, Jiahu; Yu, Changbin
2014-10-01
The problems of coordination of multiagent systems with second-order locally Lipschitz continuous nonlinear dynamics under directed interaction topology are investigated in this paper. A completely nonlinear input-to-state stability (ISS)-based framework, drawing on ISS methods with the aid of results from graph theory, matrix theory, and the ISS cyclic-small-gain theorem, is proposed for the coordination problem under directed topology, which can effectively tackle the technical challenges caused by locally Lipschitz continuous dynamics. Two coordination problems, i.e., flocking with a virtual leader and containment control, are considered. For both problems, it is assumed that only a portion of the agents can obtain information from the leader(s). For the first problem, the proposed strategy is shown to be effective in driving a group of nonlinear dynamical agents to reach the prespecified geometric pattern under the condition that at least one agent in each strongly connected component of the information-interconnection digraph with zero in-degree has access to the state information of the virtual leader; the strategy proposed for the second problem guarantees that the nonlinear dynamical agents move into the convex hull spanned by the positions of the multiple leaders under the condition that for each agent there exists at least one leader that has a directed path to this agent.
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
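The central idea, replacing a size-N problem by a projected problem of dimension m, can be sketched with a basic Arnoldi process followed by a small least-squares solve (the GMRES idea). The test matrix below is a synthetic, well-conditioned assumption, not an example from the report.

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V of the Krylov subspace span{b, Ab, ..., A^(m-1) b},
    with the projected Hessenberg matrix H satisfying A V_m = V_(m+1) H."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, m = 100, 30
A = np.diag(np.linspace(1.0, 2.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Solve the small (m+1) x m projected least-squares problem instead of the
# full size-n system, then map the solution back through the basis V.
V, H = arnoldi(A, b, m)
e1 = np.zeros(m + 1)
e1[0] = np.linalg.norm(b)
y = np.linalg.lstsq(H, e1, rcond=None)[0]
x = V[:, :m] @ y
residual = np.linalg.norm(A @ x - b)
```

The matrix equations mentioned in the abstract (Sylvester, Lyapunov) are handled by the same projection principle, with the small problem solved in the Krylov basis.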
Generating probabilistic Boolean networks from a prescribed transition probability matrix.
Ching, W-K; Chen, X; Tsing, N-K
2009-11-01
Probabilistic Boolean networks (PBNs) have received much attention in modeling genetic regulatory networks. A PBN can be regarded as a Markov chain process and is characterised by a transition probability matrix. In this study, the authors propose efficient algorithms for constructing a PBN when its transition probability matrix is given. The complexities of the algorithms are also analysed. This is an interesting inverse problem in network inference using steady-state data. The problem is important as most microarray data sets are assumed to be obtained from sampling the steady-state.
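The inverse problem described above amounts to writing the given transition probability matrix as a convex combination of deterministic (0/1) Boolean-network transition matrices. The greedy extraction below is a simplified sketch of that decomposition, not necessarily the authors' algorithm.

```python
import numpy as np

def decompose_pbn(P, tol=1e-12):
    """Greedily write a column-stochastic matrix P as sum_k c_k A_k, where each
    A_k is a 0/1 matrix with a single 1 per column, i.e. the transition matrix
    of one deterministic Boolean network."""
    P = P.astype(float).copy()
    terms = []
    while P.max() > tol:
        rows = P.argmax(axis=0)                      # dominant next state per column
        c = min(P[rows[j], j] for j in range(P.shape[1]))
        A = np.zeros_like(P)
        A[rows, np.arange(P.shape[1])] = 1.0
        terms.append((c, A))
        P = P - c * A                                # peel off this deterministic network
    return terms

# Toy 2-state transition probability matrix (columns sum to 1).
P = np.array([[0.7, 0.2],
              [0.3, 0.8]])
terms = decompose_pbn(P)
weights = [c for c, _ in terms]
recon = sum(c * A for c, A in terms)
```

Since every extraction step removes the same mass from each column, the weights sum to 1 and the selection probabilities of the constituent Boolean networks are recovered directly.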
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Tongsong, E-mail: jiangtongsong@sina.com; Department of Mathematics, Heze University, Heze, Shandong 274015; Jiang, Ziwu
In the study of the relation between complexified classical and non-Hermitian quantum mechanics, physicists found that there are links to quaternionic and split quaternionic mechanics, and this leads to the possibility of employing algebraic techniques of split quaternions to tackle some problems in complexified classical and quantum mechanics. This paper, by means of real representation of a split quaternion matrix, studies the problem of diagonalization of a split quaternion matrix and gives algebraic techniques for diagonalization of split quaternion matrices in split quaternionic mechanics.
Gaussian entanglement revisited
NASA Astrophysics Data System (ADS)
Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo
2018-02-01
We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.
Covariance, correlation matrix, and the multiscale community structure of networks.
Shen, Hua-Wei; Cheng, Xue-Qi; Fang, Bin-Xing
2010-07-01
Empirical studies show that real world networks often exhibit multiple scales of topological description. However, it is still an open problem how to identify the intrinsic multiple scales of a network. In this paper, we consider detecting the multiscale community structure of a network from the perspective of dimension reduction. According to this perspective, a covariance matrix of the network is defined to uncover the multiscale community structure through translation and rotation transformations. It is proved that the covariance matrix is the unbiased version of the well-known modularity matrix. We then point out that the translation and rotation transformations fail to deal with heterogeneous networks, which are very common in nature and society. To address this problem, a correlation matrix is proposed by introducing a rescaling transformation into the covariance matrix. Extensive tests on real world and artificial networks demonstrate that the correlation matrix significantly outperforms the covariance matrix (equivalently, the modularity matrix) in identifying the multiscale community structure of a network. This work provides a novel perspective on the identification of community structure, and thus various dimension reduction methods might be used for this task. Through introducing the correlation matrix, we further conclude that the rescaling transformation, together with the translation and rotation transformations, is crucial to identifying the multiscale community structure of a network.
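The modularity matrix B = A - k kᵀ / (2m) referenced above can be illustrated on the standard toy example of two cliques joined by a single edge, where the leading eigenvector of B splits the communities by sign. This is the classical modularity construction, shown here as context; it is not the covariance or correlation variant introduced by the paper.

```python
import numpy as np

# Two 4-node cliques joined by the single edge (3, 4).
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

k = A.sum(axis=1)              # degree vector
m = A.sum() / 2                # number of edges
B = A - np.outer(k, k) / (2 * m)   # modularity matrix

# The eigenvector of the largest eigenvalue assigns communities by sign.
v = np.linalg.eigh(B)[1][:, -1]
labels = v > 0
```

The covariance and correlation matrices of the paper play the same role as B here, with the rescaling transformation correcting for degree heterogeneity.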
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap of the GRCM and extreme learning machine (ELM), a vector-based classifier for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.
Preconditioned conjugate residual methods for the solution of spectral equations
NASA Technical Reports Server (NTRS)
Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.
1986-01-01
Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equation is a very ill-conditioned and full matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from the application of either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and comparison with other solution procedures for spectral equations is presented.
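A plain (unpreconditioned) conjugate residual iteration on a symmetric positive definite system conveys the core of the method described above; the 1-D discrete Laplacian below is an assumed stand-in for a spectral operator, and the paper's inexact finite-difference preconditioner is not included.

```python
import numpy as np

def conjugate_residual(A, b, iters=50):
    """Plain conjugate residual iteration for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    Ap = A @ p
    Ar = A @ r
    rAr = r @ Ar
    for _ in range(iters):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < 1e-10:     # converged; avoid dividing by ~0
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap               # update A p without an extra product
        rAr = rAr_new
    return x

# 1-D Laplacian: the kind of SPD operator a finite-difference preconditioner targets.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_residual(A, b, iters=n)
residual = np.linalg.norm(A @ x - b)
```

In the preconditioned setting of the abstract, each iteration would additionally solve a cheap finite-difference system, clustering the spectrum of the full spectral matrix.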
NASA Astrophysics Data System (ADS)
Tanemura, M.; Chida, Y.
2016-09-01
Many control system design problems are expressed as the minimization of a performance index under BMI conditions. However, a minimization problem expressed as LMIs can be solved easily because of the convex property of LMIs. Therefore, many researchers have studied transforming a variety of control design problems into convex minimization problems expressed as LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes design problems of state-feedback gains for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem, y = Mx, where the vector of observations y is related to the unknown source term x through the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_{R,B} (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by an application of the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
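For fixed R and B, the regularized objective min_{x} (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x has a closed-form minimizer, which the sketch below evaluates on synthetic data. The full method infers R and B variationally; treating them as known here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_src = 30, 10
M = rng.standard_normal((n_obs, n_src))        # synthetic stand-in for the SRS matrix
x_true = rng.standard_normal(n_src)
y = M @ x_true + 0.01 * rng.standard_normal(n_obs)

R = np.eye(n_obs)          # measurement-error covariance (assumed known here)
B = 10.0 * np.eye(n_src)   # prior covariance of the source term

# Minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x:
#   x_hat = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y
Rinv = np.linalg.inv(R)
x_hat = np.linalg.solve(M.T @ Rinv @ M + np.linalg.inv(B), M.T @ Rinv @ y)
rel_error = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Choosing B as a scalar multiple of the identity reduces this expression to ordinary Tikhonov regularization, matching the special case named in the abstract.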
Li, Zeng; Fei, Hao; Wang, Zhen; Zhu, Tianyi
2017-09-01
Full-thickness and large-area defects of articular cartilage are unable to completely repair themselves and require surgical intervention, including microfracture, autologous or allogeneic osteochondral grafts, and autologous chondrocyte implantation. A large proportion of regenerative cartilage exists as fibrocartilage, which is unable to withstand impacts in the same way as native hyaline cartilage, owing to excess synthesis of type I collagen in the matrix. The present study demonstrated that low-dose halofuginone (HF), a plant alkaloid isolated from Dichroa febrifuga, may inhibit the synthesis of type I collagen without influencing type II collagen in the extracellular matrix of chondrocytes. In addition, HF was revealed to inhibit the phosphorylation of mothers against decapentaplegic homolog (Smad)2/3 and to promote Smad7 expression, thereby decreasing type I collagen synthesis. Results from the present study indicated that HF treatment suppressed the synthesis of type I collagen by inhibiting the transforming growth factor-β signaling pathway in chondrocytes. These results may provide an alternative solution to the problems associated with fibrocartilage, converting fibrocartilage into hyaline cartilage at the mid-early stages of cartilage regeneration. HF may additionally be used to improve monolayer expansion or 3D cultures of seed cells for the tissue engineering of cartilage.
Analysis of cocaine/crack biomarkers in meconium by LC-MS.
D'Avila, Felipe Bianchini; Ferreira, Pâmela C Lukasewicz; Salazar, Fernanda Rodrigues; Pereira, Andrea Garcia; Santos, Maíra Kerpel Dos; Pechansky, Flavio; Limberger, Renata Pereira; Fröehlich, Pedro Eduardo
2016-02-15
Fetal exposure to illicit drugs is a worldwide problem, since many addicted women do not stop using drugs during pregnancy. Cocaine consumed in powdered (snorted or injected) or smoked (crack cocaine) form is harmful to the baby, and its side effects are not completely known. Meconium, the first stool of a newborn, is a precious matrix, usually discarded, that may contain amounts of substances consumed in the last two trimesters of pregnancy. By analyzing this biological matrix it is possible to detect the unaltered molecule of cocaine (COC) or its metabolite benzoylecgonine (BZE) and the pyrolytic products anhydroecgonine methyl ester (AEME) and anhydroecgonine (AEC). A liquid chromatography mass spectrometry (LC-MS) method was validated for meconium samples after solvent extraction, followed by direct injection of 10 μL. Linearity covered a concentration range of 15 to 500 ng/mg with a lower limit of quantification (LLOQ) of 15 ng/mg for all analytes. The matrix effect was evaluated and showed adequate results. Detection of illicit substance use can be crucial for the baby, since this knowledge can help provide medical care as fast as possible. The method proved to be simple and fast, and was applied to 17 real meconium samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
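The report's Gauss-Seidel scheme is not reproduced here, but the idea of extrapolating from the previous behavior of iterants to speed up a characteristic-value iteration can be sketched with power iteration accelerated by Aitken Δ² extrapolation on the Rayleigh-quotient sequence; the 2x2 test matrix is an assumption for illustration.

```python
import numpy as np

def power_aitken(A, iters=30):
    """Power iteration with Aitken delta-squared extrapolation of the
    Rayleigh-quotient sequence toward the dominant eigenvalue."""
    x = np.ones(A.shape[0])
    lams = []
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
        lams.append(x @ A @ x)
    l0, l1, l2 = lams[-3], lams[-2], lams[-1]
    denom = l2 - 2 * l1 + l0
    # If the sequence has already converged, the correction is 0/0; skip it.
    return l2 if abs(denom) < 1e-14 else l2 - (l2 - l1) ** 2 / denom

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam = power_aitken(A)   # dominant eigenvalue is (5 + sqrt(5)) / 2
```

As in the report, bounding how large an extrapolation step may be tolerated is what keeps such acceleration stable on harder problems.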
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.
2015-12-01
We present the modularized software package ASKI, which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model: one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward-code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way.
Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.
Density matrix reconstruction of a large angular momentum
NASA Astrophysics Data System (ADS)
Klose, Gerd
2001-10-01
A complete description of the quantum state of a physical system is the fundamental knowledge necessary to statistically predict the outcome of measurements. Turning this statement around, Wolfgang Pauli raised the question as early as 1933 of whether an unknown quantum state could be uniquely determined by appropriate measurements, a problem that has gained new relevance in recent years. In order to harness the prospects of quantum computing, secure communication, teleportation, and the like, the development of techniques to accurately control and measure quantum states has now become a matter of practical as well as fundamental interest. However, there is no general answer to Pauli's very basic question, and quantum state reconstruction algorithms have been developed and experimentally demonstrated only for a few systems so far. This thesis presents a novel experimental method to measure the unknown and generally mixed quantum state for an angular momentum of arbitrary magnitude. The (2F + 1) x (2F + 1) density matrix describing the quantum state is hereby completely determined from a set of Stern-Gerlach measurements with (4F + 1) different orientations of the quantization axis. This protocol is implemented for laser-cooled Cesium atoms in the 6S1/2 (F = 4) hyperfine ground state manifold, and is applied to a number of test states prepared by optical pumping and Larmor precession. A comparison of the input and the measured states shows successful reconstructions with fidelities of about 0.95.
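For the smallest case F = 1/2, the protocol's (4F + 1) = 3 measurement axes correspond to the expectation values of the three Pauli operators, and the 2x2 density matrix follows exactly from the Bloch decomposition. The state below is an assumed test state for illustration; the thesis itself works with the much larger F = 4 manifold of Cesium.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# An assumed mixed test state (Hermitian, trace 1, positive definite).
rho_true = np.array([[0.7, 0.2 + 0.1j],
                     [0.2 - 0.1j, 0.3]])

# Stern-Gerlach measurements along x, y, z yield the Bloch vector components
# r_i = Tr(rho sigma_i); the density matrix is then rho = (I + r . sigma) / 2.
r = [np.real(np.trace(rho_true @ s)) for s in (sx, sy, sz)]
rho_rec = 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)
reconstruction_err = np.linalg.norm(rho_rec - rho_true)
```

For larger F the same linear-inversion principle applies, but the (2F + 1)² - 1 real parameters require the full set of (4F + 1) axis orientations described in the thesis.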
Large-scale production and isolation of Candida biofilm extracellular matrix.
Zarnowski, Robert; Sanchez, Hiram; Andes, David R
2016-12-01
The extracellular matrix of biofilm is unique to the biofilm lifestyle, and it has key roles in community survival. A complete understanding of the biochemical nature of the matrix is integral to the understanding of the roles of matrix components. This knowledge is a first step toward the development of novel therapeutics and diagnostics to address persistent biofilm infections. Many of the assay methods needed for refined matrix composition analysis require milligram amounts of material that is separated from the cellular components of these complex communities. The protocol described here explains the large-scale production and isolation of the Candida biofilm extracellular matrix. To our knowledge, the proposed procedure is the only currently available approach in the field that yields milligram amounts of biofilm matrix. This procedure first requires biofilms to be seeded in large-surface-area roller bottles, followed by cell adhesion and biofilm maturation during continuous movement of the medium across the surface of the rotating bottle. The formed matrix is then separated from the entire biomass using sonication, which efficiently removes the matrix without perturbing the fungal cell wall. Subsequent filtration, dialysis and lyophilization steps result in a purified matrix product sufficient for biochemical, structural and functional assays. The overall protocol takes ∼11 d to complete. This protocol has been used for Candida species, but, using the troubleshooting guide provided, it could be adapted for other fungi or bacteria.
NASA Astrophysics Data System (ADS)
Fatollahi, Amir H.; Khorrami, Mohammad; Shariati, Ahmad; Aghamohammadi, Amir
2011-04-01
A complete classification is given for one-dimensional chains with nearest-neighbor interactions having two states in each site, for which a matrix product ground state exists. The Hamiltonians and their corresponding matrix product ground states are explicitly obtained.
Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E
2005-05-15
We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-complete, and therefore an iterative search algorithm with O(n^3) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity, starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
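The effect of permuting the pairwise distance matrix can be demonstrated on synthetic data with an elongated (one-dimensional) structure. The greedy nearest-neighbor ordering below is a crude stand-in for SPIN's iterative permutation search, used only to show how a good ordering concentrates small distances near the diagonal; the data and ordering heuristic are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Points along an elongated curve, presented in shuffled order.
t = np.linspace(0, 1, 60)
pts = np.column_stack([t, 0.1 * np.sin(2 * np.pi * t)])
shuffled = pts[rng.permutation(len(pts))]

# Full pairwise distance matrix of the shuffled points.
D = np.linalg.norm(shuffled[:, None] - shuffled[None, :], axis=-1)

# Greedy nearest-neighbor ordering (a simple substitute for SPIN's search).
order = [0]
remaining = set(range(1, len(pts)))
while remaining:
    nxt = min(remaining, key=lambda j: D[order[-1], j])
    order.append(nxt)
    remaining.remove(nxt)

# After permutation, consecutive points are close: in pseudocolor, the small
# distances form a band along the diagonal, revealing the elongated shape.
D_perm = D[np.ix_(order, order)]
near_diag = np.median(np.diag(D_perm, 1))
```

The banded structure of D_perm is exactly the visual signature of an elongated object that the abstract associates with hidden continuous variables.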
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra directly using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
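The motivation for circular (von Mises) modeling of hue can be shown with a minimal example: ordinary averaging fails for angles near the wrap-around point, while the circular mean handles them correctly. This only illustrates the circular statistics such models build on; the paper's full EM-fitted von Mises mixture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hue-like angles concentrated near 3.0 rad, i.e. close to the pi wrap-around;
# numpy returns von Mises samples on (-pi, pi], so many samples wrap to ~ -3.1.
angles = rng.vonmises(mu=3.0, kappa=8.0, size=5000)

# Naive linear averaging mixes +3.1 and -3.1 and lands far from the truth.
naive_mean = angles.mean()

# The circular mean via the resultant vector is insensitive to wrap-around.
circ_mean = np.angle(np.mean(np.exp(1j * angles)))
```

In the stain-separation setting this is why hue clusters straddling the angular origin must be modeled on the circle rather than on the line.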
MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion
NASA Astrophysics Data System (ADS)
Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong
This paper investigates the simulation of a gradient-based recurrent neural network for the online solution of the matrix-inversion problem. Several important techniques are employed to simulate such a neural system. 1) The Kronecker product of matrices is introduced to transform a matrix differential equation (MDE) into a vector differential equation (VDE); finally, a standard ordinary differential equation (ODE) is obtained. 2) The MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
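The MDE-to-VDE idea can be reproduced with any standard ODE integrator. A minimal sketch using SciPy's `solve_ivp` in place of MATLAB's `ode45`, with a linear activation and an arbitrary gain γ chosen for illustration (reshaping here plays the role of the Kronecker-product vectorization):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
gamma = 10.0          # learning gain of the neural dynamics
n = A.shape[0]

def rhs(t, x):
    # gradient flow dX/dt = -gamma * A.T @ (A @ X - I), flattened from a
    # matrix ODE to a vector ODE so a standard integrator can handle it
    X = x.reshape(n, n)
    dX = -gamma * A.T @ (A @ X - np.eye(n))
    return dX.ravel()

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(n * n), rtol=1e-10, atol=1e-12)
X_final = sol.y[:, -1].reshape(n, n)   # state converges to inv(A)
```

Since A^T A is positive definite, the flow has a single globally attracting equilibrium at X = A^{-1}, so the final state approximates the inverse to the integrator's tolerance.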
Kaufmann, Anton; Butcher, Patrick
2006-01-01
Liquid chromatography coupled to orthogonal acceleration time-of-flight mass spectrometry (LC/TOF) provides an attractive alternative to liquid chromatography coupled to triple quadrupole mass spectrometry (LC/MS/MS) in the field of multiresidue analysis. The sensitivity and selectivity of LC/TOF approach those of LC/MS/MS. TOF provides accurate mass information and a significantly higher mass resolution than quadrupole analyzers. The available mass resolution of commercial TOF instruments, ranging from 10 000 to 18 000 full width at half maximum (FWHM), is not, however, sufficient to completely exclude the problem of isobaric interferences (co-elution of analyte ions with matrix compounds of very similar mass). Due to the required data storage capacity, TOF raw data is commonly centroided before being electronically stored. However, centroiding can lead to a loss of data quality. The co-elution of a low-intensity analyte peak with an isobaric, high-intensity matrix compound can cause problems. Some centroiding algorithms might not be capable of deconvoluting such partially merged signals, leading to incorrect centroids. Co-elution of isobaric compounds has been deliberately simulated by injecting diluted binary mixtures of isobaric model substances at various relative intensities. Depending on the mass differences between the two isobaric compounds and the resolution provided by the TOF instrument, significant deviations in exact mass measurements and signal intensities were observed. The extraction of a reconstructed ion chromatogram based on very narrow mass windows can even result in the complete loss of the analyte signal. Guidelines have been proposed to avoid such problems. The use of sub-2 μm HPLC packing materials is recommended to improve chromatographic resolution and to reduce the risk of co-elution. The width of the extraction mass windows for reconstructed ion chromatograms should be defined according to the resolution of the TOF instrument.
Alternative approaches include the spiking of the sample with appropriate analyte concentrations. Furthermore, enhanced software, capable of deconvoluting partially merged mass peaks, may become available. Copyright (c) 2006 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Raymond, C.; Smrekar, S.; Millbury, C.
2004-01-01
This viewgraph presentation reviews a Bayesian approach to the inversion of gravity and magnetic data, with specific application to the Ismenius Area of Mars. Many inverse problems encountered in geophysics and planetary science are well known to be non-unique (e.g., the inversion of gravity data for the density structure of a body). In hopes of reducing the non-uniqueness of solutions, there has been interest in the joint analysis of data. An example is the joint inversion of gravity and magnetic data, with the assumption that the same physical anomalies generate both the observed magnetic and gravitational anomalies. In this talk, we formulate the joint analysis of different types of data in a Bayesian framework and apply the formalism to the inference of the density and remanent magnetization structure for a local region in the Ismenius area of Mars. The Bayesian approach allows prior information or constraints on the solutions to be incorporated in the inversion, with the "best" solutions those whose forward predictions most closely match the data while remaining consistent with assumed constraints. The application of this framework to the inversion of gravity and magnetic data on Mars reveals two typical challenges: the forward predictions of the data have a linear dependence on some of the quantities of interest and a non-linear dependence on others (termed the "linear" and "non-linear" variables, respectively). For observations with Gaussian noise, a Bayesian approach to inversion for "linear" variables reduces to a linear filtering problem, with an explicitly computable "error" matrix. However, for models whose forward predictions have non-linear dependencies, inference is no longer given by such a simple linear problem, and moreover, the uncertainty in the solution is no longer completely specified by a computable "error matrix".
It is therefore important to develop methods for sampling from the full Bayesian posterior to provide a complete and statistically consistent picture of model uncertainty, and what has been learned from observations. We will discuss advanced numerical techniques, including Monte Carlo Markov
Hybrid state vector methods for structural dynamic and aeroelastic boundary value problems
NASA Technical Reports Server (NTRS)
Lehman, L. L.
1982-01-01
A computational technique is developed that is suitable for performing preliminary design aeroelastic and structural dynamic analyses of large aspect ratio lifting surfaces. The method proves to be quite general and can be adapted to solving various two point boundary value problems. The solution method, which is applicable to both fixed and rotating wing configurations, is based upon a formulation of the structural equilibrium equations in terms of a hybrid state vector containing generalized force and displacement variables. A mixed variational formulation is presented that conveniently yields a useful form for these state vector differential equations. Solutions to these equations are obtained by employing an integrating matrix method. The application of an integrating matrix provides a discretization of the differential equations that only requires solutions of standard linear matrix systems. It is demonstrated that matrix partitioning can be used to reduce the order of the required solutions. Results are presented for several example problems in structural dynamics and aeroelasticity to verify the technique and to demonstrate its use. These problems examine various types of loading and boundary conditions and include aeroelastic analyses of lifting surfaces constructed from anisotropic composite materials.
Effective matrix-free preconditioning for the augmented immersed interface method
NASA Astrophysics Data System (ADS)
Xia, Jianlin; Li, Zhilin; Ye, Xin
2015-12-01
We present effective and efficient matrix-free preconditioning techniques for the augmented immersed interface method (AIIM). AIIM has been developed recently and is shown to be very effective for interface problems and problems on irregular domains. GMRES is often used to solve for the augmented variable(s) associated with a Schur complement A in AIIM that is defined along the interface or the irregular boundary. The efficiency of AIIM relies on how quickly the system for A can be solved. For some applications, there are substantial difficulties involved, such as the slow convergence of GMRES (particularly for free boundary and moving interface problems), and the inconvenience in finding a preconditioner (due to the situation that only the products of A and vectors are available). Here, we propose matrix-free structured preconditioning techniques for AIIM via adaptive randomized sampling, using only the products of A and vectors to construct a hierarchically semiseparable matrix approximation to A. Several improvements over existing schemes are shown so as to enhance the efficiency and also avoid potential instability. The significance of the preconditioners includes: (1) they do not require the entries of A or the multiplication of A^T with vectors; (2) constructing the preconditioners needs only O(log N) matrix-vector products and O(N) storage, where N is the size of A; (3) applying the preconditioners needs only O(N) flops; (4) they are very flexible and do not require any a priori knowledge of the structure of A. The preconditioners are observed to significantly accelerate the convergence of GMRES, with heuristic justifications of their effectiveness. Comprehensive tests on several important applications are provided, such as Navier-Stokes equations on irregular domains with traction boundary conditions, interface problems in incompressible flows, mixed boundary problems, and free boundary problems.
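The matrix-free setting described here, where only products of A with vectors are available, maps directly onto SciPy's `LinearOperator` interface for Krylov solvers. A minimal sketch with a toy operator (this shows only the GMRES side; the paper's hierarchically semiseparable preconditioner construction is far more involved):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
diag = np.linspace(1.0, 5.0, n)   # a well-conditioned toy operator

def matvec(v):
    # only the product A @ v is available, as for the AIIM Schur complement;
    # the entries of A are never formed explicitly
    return diag * v + 0.01 * np.roll(v, 1)

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = gmres(A, b)              # info == 0 indicates convergence
```

With a good preconditioner (passed via the `M` argument of `gmres`), the iteration count drops sharply; constructing such an `M` from matrix-vector products alone is exactly the problem the paper addresses.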
The preconditioning techniques are also useful for several other problems and methods.
Asada, Takumi; Yoshihara, Naoki; Ochiai, Yasushi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru
2018-04-25
Water-soluble polymers with high viscosity are frequently used in the design of sustained-release formulations of poorly water-soluble drugs to enable complete release of the drug in the gastrointestinal tract. Tablets containing matrix granules with a water-soluble polymer are preferred because tablets are easier to handle and the multiple drug-release units of the matrix granules decrease the influence of the physiological environment on the drug. However, matrix granules with a particle size of over 800 μm sometimes cause a content uniformity problem in the tableting process because of the large particle size. An effective method of manufacturing controlled-release matrix granules with a smaller particle size is desired. The aim of this study was to develop tablets containing matrix granules with a smaller size and good controlled-release properties, using phenytoin as a model poorly water-soluble drug. We adapted the recently developed hollow spherical granule granulation technology, using water-soluble polymers with different viscosities. The prepared granules had an average particle size of 300 μm and a sharp particle size distribution (relative width: 0.52-0.64). The values for the particle strength of the granules were 1.86-1.97 N/mm², and the dissolution profiles of the granules were not affected by the tableting process. The dissolution profiles and the blood concentration levels of drug released from the granules depended on the viscosity of the polymer contained in the granules. We succeeded in developing the desired controlled-release granules, and this study should be valuable in the development of sustained-release formulations of poorly water-soluble drugs. Copyright © 2018 Elsevier B.V. All rights reserved.
Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.
Gong, Ping; Kolios, Michael C; Xu, Yuan
2016-09-01
Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. The pseudoinverse (PI) is usually used instead to obtain a more stable inversion. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: the complete PI (CPI), where all the singular values are kept, and the truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in overall enveloped beamformed image quality between the CPI and TPI is negligible. Thus, DE-STA is demonstrated to be a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
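The CPI/TPI distinction amounts to how many singular values are kept when forming the SVD-based pseudoinverse. A generic sketch (the coding-matrix specifics are in the paper; the `drop` parameter, counting discarded smallest singular values, is an illustrative convention):

```python
import numpy as np

def pseudoinverse(M, drop=0):
    # SVD-based pseudoinverse: drop=0 keeps all singular values (the
    # complete PI, CPI); drop=1 discards the smallest one (truncated
    # PI, TPI), trading a little bias for stability under noise
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = len(s) - drop
    return Vt[:k].T @ ((1.0 / s[:k])[:, None] * U[:, :k].T)
```

For a well-conditioned matrix, `pseudoinverse(M, 0)` coincides with the ordinary inverse; truncation matters precisely when the smallest singular value is small relative to the noise level.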
Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.
Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong
2016-12-01
Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite the broad, successful applications, existing LRTC methods may become very slow or even not applicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, and has a much lower computational complexity. In our solution, first, the equivalence relation between the trace norm of a low-rank tensor and that of its core tensor is induced. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller-scale matrix TNM problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve our problems. Our CTNM formulation needs only O((R^N + NRI) log(√(I^N))) observations to reliably recover an Nth-order I×I×…×I tensor of n-rank (r, r, …, r), compared with the O(rI^(N-1)) observations required by those tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than them, and is orders of magnitude faster.
Artistic image analysis using graph-based learning approaches.
Carneiro, Gustavo
2013-08-01
We introduce a new methodology for the problem of artistic image analysis, which among other tasks, involves the automatic identification of visual classes present in an artwork. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing the similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation, which is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to more efficient inference and training procedures. This experiment is run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we show the inference and training running times, and quantitative comparisons with respect to several retrieval and annotation performance measures.
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
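The spectral step itself, building a Laplacian from the sparsity pattern and sorting by the eigenvector of the second-smallest eigenvalue (the Fiedler vector), is compact enough to sketch. For brevity this assumes a dense symmetric pattern; the paper of course works with sparse storage and compares against GPS and RCM:

```python
import numpy as np

def spectral_order(A):
    # associate a graph Laplacian with the sparsity pattern of A, then
    # order rows/columns by sorting the Fiedler vector's components
    P = (A != 0).astype(float)
    np.fill_diagonal(P, 0.0)
    L = np.diag(P.sum(axis=1)) - P
    _, V = np.linalg.eigh(L)        # eigenvalues in ascending order
    return np.argsort(V[:, 1])      # column 1 = Fiedler vector
```

On a shuffled tridiagonal (path-graph) pattern this recovers a bandwidth-1 ordering exactly, since the Fiedler vector of a path graph is monotone along the path, up to a global sign.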
Drábek, Jiří
2016-01-01
In this paper I tested whether the Contradiction Matrix with 40 Inventive Principles, the simplest instrument from the Theory of Inventive Problem Solving (TRIZ), is a useful approach to a real-life PCR scenario. The PCR challenge consisted of the standardization of fluorescence melting curve measurements in Competitive Amplification of Differentially Melting Amplicons (CADMA) PCR for multiple targets. Here I describe my way of using the TRIZ Matrix to generate seven alternative solutions, from which I could choose the successful one, consisting of repeated cycles of amplification and melting in a single PCR run.
Matrix Transfer Function Design for Flexible Structures: An Application
NASA Technical Reports Server (NTRS)
Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.
1985-01-01
The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.
NASA Astrophysics Data System (ADS)
Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie
2016-05-01
This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data is moderately randomly undersampled at the central k-space navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data is reconstructed from undersampled data using structured low-rank matrix completion. After all the unacquired navigator data is estimated, the partially separable model is used to obtain partial k-t data. Then the parallel imaging method is used to recover the entire dynamic image series from highly undersampled data. The proposed method is shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
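The low-rank completion of navigator data can be caricatured with a plain alternating projection between a rank constraint and data consistency. This sketch deliberately omits the structured (PS/Hankel) aspect of the actual method; it only fixes acquired entries and truncates rank:

```python
import numpy as np

def lowrank_complete(M, mask, rank, iters=100):
    # alternate two projections: onto the set of rank-`rank` matrices
    # (SVD truncation), then back onto data consistency by re-imposing
    # the acquired samples wherever mask is True
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, X)
    return X
```

Acquired entries are preserved exactly by construction, and the unacquired entries are filled in so that the whole matrix stays (approximately) low-rank, which is the shared premise of all the completion-based reconstruction methods above.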
Vibration analysis of rotor blades with an attached concentrated mass
NASA Technical Reports Server (NTRS)
Murthy, V. R.; Barna, P. S.
1977-01-01
The effect of an attached concentrated mass on the dynamics of helicopter rotor blades is determined. The point transmission matrix method was used to define, through three completely automated computer programs, the natural vibrational characteristics (natural frequencies and mode shapes) of rotor blades. The problems of coupled flapwise bending, chordwise bending, and torsional vibration of a twisted nonuniform blade and its special subcase pure torsional vibration are discussed. The orthogonality relations that exist between the natural modes of rotor blades with an attached concentrated mass are derived. The effect of pitch, rotation, and point mass parameters on the collective, cyclic, scissor, and pure torsional modes of a seesaw rotor blade is determined.
Completing the Results of the 2013 Boston Marathon
Hammerling, Dorit; Cefalu, Matthew; Cisewski, Jessi; Dominici, Francesca; Parmigiani, Giovanni; Paulson, Charles; Smith, Richard L.
2014-01-01
The 2013 Boston marathon was disrupted by two bombs placed near the finish line. The bombs resulted in three deaths and several hundred injuries. Of lesser concern, in the immediate aftermath, was the fact that nearly 6,000 runners failed to finish the race. We were approached by the marathon's organizers, the Boston Athletic Association (BAA), and asked to recommend a procedure for projecting finish times for the runners who could not complete the race. With assistance from the BAA, we created a dataset consisting of all the runners in the 2013 race who reached the halfway point but failed to finish, as well as all runners from the 2010 and 2011 Boston marathons. The data consist of split times from each of the 5 km sections of the course, as well as the final 2.2 km (from 40 km to the finish). The statistical objective is to predict the missing split times for the runners who failed to finish in 2013. We set this problem in the context of the matrix completion problem, examples of which include imputing missing data in DNA microarray experiments, and the Netflix prize problem. We propose five prediction methods and create a validation dataset to measure their performance by mean squared error and other measures. The best method used local regression based on a K-nearest-neighbors algorithm (KNN method), though several other methods produced results of similar quality. We show how the results were used to create projected times for the 2013 runners and discuss potential for future application of the same methodology. We present the whole project as an example of reproducible research, in that we are able to make the full data and all the algorithms we have used publicly available, which may facilitate future research extending the methods or proposing completely different approaches. PMID:24727904
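A stripped-down version of the winning KNN idea, on a runners × splits matrix with NaN marking missing split times. The choice of k and the squared-difference distance are illustrative simplifications, not the paper's tuned local-regression method:

```python
import numpy as np

def knn_impute(times, k=3):
    # fill each runner's missing splits (NaN) with the average of the
    # k runners whose observed splits are most similar to theirs
    filled = times.copy()
    for r in range(times.shape[0]):
        miss = np.isnan(times[r])
        if not miss.any():
            continue
        obs = ~miss
        # mean squared difference over the splits runner r completed
        d = np.nanmean((times[:, obs] - times[r, obs]) ** 2, axis=1)
        d[r] = np.inf                      # exclude the runner itself
        neighbors = np.argsort(d)[:k]
        # note: assumes the chosen neighbors completed those splits
        filled[r, miss] = np.nanmean(times[neighbors][:, miss], axis=0)
    return filled
```

Runners with similar early-race pacing tend to have similar late-race splits, which is the same intuition the matrix completion framing exploits at scale.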
Kwon, Jae-Sung; Oh, Duck-Won
2015-06-01
The purpose of this study was to demonstrate the use of task-based cognitive tests to detect potential problems in the assessment of work training for vocational rehabilitation. Eleven participants with a normal range of cognitive functioning scores were recruited for this study. Participants were all trainees who participated in a vocational training program. The Rey Complex Figure Test and the Allen Cognitive Level Screen were randomly administered to all participants. Responses to the tests were qualitatively analyzed with matrix and scatter charts. Observational outcomes derived from the tests indicated that response errors, distortions, and behavioral problems occurred in most participants. These factors may impede occupational performance despite normal cognitive function. These findings suggest that the use of task-based tests may be beneficial for detecting potential problems associated with the work performance of people with disabilities. Specific analysis using the task-based tests may be necessary to complete the decision-making process for vocational aptness. Furthermore, testing should be led by professionals with a higher specialization in this field.
Algorithms for Solvents and Spectral Factors of Matrix Polynomials
Shieh, Leang S.; Tsay, Yih T.; Coleman, Norman P.
1981-01-01
A generalized Newton method, based on the contracted gradient of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right
Application of symbolic/numeric matrix solution techniques to the NASTRAN program
NASA Technical Reports Server (NTRS)
Buturla, E. M.; Burroughs, S. H.
1977-01-01
The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
Rotundo, Roberto; Pini-Prato, Giovanpaolo
2012-08-01
The aim of this case report study was to demonstrate the use of a new collagen matrix as an alternative to the connective tissue graft for the treatment of multiple gingival recessions. Three women showing 11 maxillary gingival recessions were treated by means of the envelope flap technique associated with a novel collagen matrix as a substitute for the connective tissue graft. At 1 year, complete root coverage was achieved in 9 treated sites, with a mean keratinized tissue width of 3.1 mm, complete resolution of dental hypersensitivity, and a high level of esthetic satisfaction.
A systematic approach for locating optimum sites
Angel Ramos; Isabel Otero
1979-01-01
The basic information collected for landscape planning studies may be given the form of an "s × m" matrix, where s is the number of landscape units and m the number of data gathered for each unit. The problem of finding the optimum location for a given project is translated into the problem of ranking the series of vectors in the matrix which represent landscape...
NASA Astrophysics Data System (ADS)
Grünbaum, F. A.; Pacharoni, I.; Zurrián, I.
2017-02-01
The problem of recovering a signal of finite duration from a piece of its Fourier transform was solved at Bell Labs in the 1960s by exploiting a ‘miracle’: a certain naturally appearing integral operator commutes with an explicit differential one. Here we show that this same miracle holds in a matrix-valued version of the same problem.
The Baker-Akhiezer Function and Factorization of the Chebotarev-Khrapkov Matrix
NASA Astrophysics Data System (ADS)
Antipov, Yuri A.
2014-10-01
A new technique is proposed for the solution of the Riemann-Hilbert problem with the Chebotarev-Khrapkov matrix coefficient G(t) = α1(t)I + α2(t)Q(t), where α1(t), α2(t) ∈ H(L), I = diag{1, 1}, and Q(t) is a 2×2 zero-trace polynomial matrix. This problem has numerous applications in elasticity and diffraction theory. The main feature of the method is the removal of essential singularities of the solution to the associated homogeneous scalar Riemann-Hilbert problem on the hyperelliptic surface of an algebraic function by means of the Baker-Akhiezer function. The consequent application of this function for the derivation of the general solution to the vector Riemann-Hilbert problem requires finding the ρ zeros of the Baker-Akhiezer function (ρ is the genus of the surface). These zeros are recovered through the solution to the associated Jacobi problem of inversion of abelian integrals or, equivalently, the determination of the zeros of the associated degree-ρ polynomial and the solution of a certain linear algebraic system of ρ equations.
NASA Astrophysics Data System (ADS)
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
The ab-initio density matrix renormalization group in practice.
Olivares-Amaya, Roberto; Hu, Weifeng; Nakatani, Naoki; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic
2015-01-21
The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.
Graph theory approach to the eigenvalue problem of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.; Bainum, P. M.
1981-01-01
Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimension. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphic interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.
Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen
2016-02-01
The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively re-weighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.
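The weighted singular value thresholding subproblem at the core of IRNN has a closed form: each singular value is shrunk by its own weight, taken from the supergradient of the nonconvex surrogate. A toy matrix-completion sketch, assuming the log surrogate g(σ) = λ log(σ + ε) (one member of the surrogate family the paper covers); step sizes and parameter values are illustrative, and the weights here are refreshed from the current spectrum as a simplification:

```python
import numpy as np

def irnn_step(X, M, mask, lam=0.1, eps=1e-2, mu=1.0):
    """One reweighted-nuclear-norm step for matrix completion.

    Weights w_i = lam / (sigma_i + eps) come from the log surrogate, so
    large singular values are shrunk less than small ones.
    """
    G = X - (1.0 / mu) * mask * (X - M)          # gradient step on data term
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    w = lam / (s + eps)                          # surrogate-derived weights
    s_new = np.maximum(s - w / mu, 0.0)          # weighted SVT, closed form
    return (U * s_new) @ Vt

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
mask = (rng.random((20, 20)) < 0.7).astype(float)                # observed entries
X = mask * M
for _ in range(100):
    X = irnn_step(X, M, mask)
err = np.linalg.norm(mask * (X - M)) / np.linalg.norm(mask * M)
```

Note how the reweighting inverts the convex nuclear norm's behavior: dominant singular values (likely signal) are barely penalized, while small ones (likely noise) are shrunk hard.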
Molnár, Bálint; Aroca, Sofia; Keglevich, Tibor; Gera, István; Windisch, Péter; Stavropoulos, Andreas; Sculean, Anton
2013-01-01
To clinically evaluate the treatment of multiple adjacent Miller Class I and II gingival recessions using the modified coronally advanced tunnel technique combined with a newly developed bioresorbable collagen matrix of porcine origin. Eight healthy patients exhibiting at least three multiple adjacent Miller Class I and II gingival recessions (a total of 42 recessions) were consecutively treated by means of the modified coronally advanced tunnel technique and collagen matrix. The following clinical parameters were assessed at baseline and 12 months postoperatively: full mouth plaque score (FMPS), full mouth bleeding score (FMBS), probing depth (PD), recession depth (RD), recession width (RW), keratinized tissue thickness (KTT), and keratinized tissue width (KTW). The primary outcome variable was complete root coverage. Neither allergic reactions nor soft tissue irritations or matrix exfoliations occurred. Postoperative pain and discomfort were reported to be low, and patient acceptance was generally high. At 12 months, complete root coverage was obtained in 2 out of the 8 patients and in 30 of the 42 recessions (71%). Within their limits, the present results indicate that treatment of multiple adjacent Miller Class I and II gingival recessions by means of the modified coronally advanced tunnel technique and collagen matrix may result in statistically and clinically significant complete root coverage. Further studies are warranted to evaluate the performance of collagen matrix compared with connective tissue grafts and other soft tissue grafts.
Shape sensitivity analysis of flutter response of a laminated wing
NASA Technical Reports Server (NTRS)
Bergen, Fred D.; Kapania, Rakesh K.
1988-01-01
A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
Peak picking NMR spectral data using non-negative matrix factorization.
Tikole, Suhas; Jaravine, Victor; Rogov, Vladimir; Dötsch, Volker; Güntert, Peter
2014-02-11
Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlaps, leading to significant peak-picking intensity errors, which can result in erroneous structural restraints. Precise frequencies are critical for unambiguous resonance assignments. To alleviate this problem, a more sophisticated peak decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. We produce peak shapes from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied if the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments. Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890), as well as to synthetic HSQC data, shows that peaks can be picked accurately even in spectral regions with strong overlap.
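The NMF model underlying this approach factors a non-negative data matrix into non-negative components. A generic sketch using the classical Lee-Seung multiplicative updates (not the authors' spectra-specific pipeline; sizes and rank are illustrative):

```python
import numpy as np

def nmf(X, k, n_iter=200, seed=0):
    """Non-negative factorization X ~ W @ H via Lee-Seung multiplicative
    updates (Frobenius loss). Updates preserve non-negativity by construction."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic "spectrum": a mixture of overlapping non-negative components.
rng = np.random.default_rng(3)
X = rng.random((40, 3)) @ rng.random((3, 25))
W, H = nmf(X, k=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```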
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: C → C^N that is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z), based on its Maclaurin series in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N × N matrix that may be diagonalizable or not. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and a detailed convergence theory is presented for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory also suggests a new mode of usage for these Krylov subspace methods that was observed to possess computational advantages over their common mode of usage.
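For reference, the ordinary power method that these procedures generalize can be written in a few lines:

```python
import numpy as np

def power_method(A, n_iter=200, seed=0):
    """Basic power iteration: dominant eigenvalue and eigenvector of A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)   # repeated application amplifies the
                                    # dominant eigendirection
    lam = v @ A @ v                 # Rayleigh quotient estimate
    return lam, v

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # eigenvalues 5 and 2
lam, v = power_method(A)
```

The rational-approximation generalizations in the paper recover several of the largest eigenvalues at once, which this basic iteration cannot do.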
Exact sampling of graphs with prescribed degree correlations
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Del Genio, Charo I.; Erdős, Péter L.; Miklós, István; Toroczkai, Zoltán
2015-08-01
Many real-world networks exhibit correlations between the node degrees. For instance, in social networks nodes tend to connect to nodes of similar degree and conversely, in biological and technological networks, high-degree nodes tend to be linked with low-degree nodes. Degree correlations also affect the dynamics of processes supported by a network structure, such as the spread of opinions or epidemics. The proper modelling of these systems, i.e., without uncontrolled biases, requires the sampling of networks with a specified set of constraints. We present a solution to the sampling problem when the constraints imposed are the degree correlations. In particular, we develop an exact method to construct and sample graphs with a specified joint-degree matrix, which is a matrix providing the number of edges between all the sets of nodes of a given degree, for all degrees, thus completely specifying all pairwise degree correlations, and additionally, the degree sequence itself. Our algorithm always produces independent samples without backtracking. The complexity of the graph construction algorithm is O(NM), where N is the number of nodes and M is the number of edges.
Solution of matrix equations using sparse techniques
NASA Technical Reports Server (NTRS)
Baddourah, Majdi
1994-01-01
The solution of large systems of matrix equations is key to the solution of a large number of scientific and engineering problems. This talk describes the sparse matrix solver developed at Langley which can routinely solve in excess of 263,000 equations in 40 seconds on one Cray C-90 processor. It appears that for large scale structural analysis applications, sparse matrix methods have a significant performance advantage over other methods.
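In modern terms, this kind of task is routinely handled with a library sparse direct solver; a small SciPy sketch on a tridiagonal (1-D Poisson-type) system, with the size chosen purely for illustration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Tridiagonal stiffness-style matrix: only ~3n nonzeros are stored, so a
# sparse direct factorization avoids the O(n^2) memory of a dense solve.
n = 1000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
x = spsolve(A, b)                       # sparse LU solve
residual = np.linalg.norm(A @ x - b)
```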
NASA Technical Reports Server (NTRS)
Maliassov, Serguei
1996-01-01
In this paper an algebraic substructuring preconditioner is considered for nonconforming finite element approximations of second-order elliptic problems in 3D domains with a piecewise constant diffusion coefficient. Using a substructuring idea and a block Gauss elimination, part of the unknowns is eliminated, and the resulting Schur complement is preconditioned by a spectrally equivalent, very sparse matrix. In the case of a quasiuniform tetrahedral mesh, an appropriate algebraic multigrid solver can be used to solve the problem with this matrix. Explicit estimates of condition numbers and implementation algorithms are established for the constructed preconditioner. It is shown that the condition number of the preconditioned matrix does not depend on either the mesh step size or the jump of the coefficient. Finally, numerical experiments are presented to illustrate the theory being developed.
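The block Gauss elimination that produces the Schur complement can be illustrated directly in NumPy (a dense toy example with illustrative block sizes; the paper's point is that the Schur complement is then preconditioned rather than formed and solved exactly):

```python
import numpy as np

# Block system [[A, B], [C, D]] [x1; x2] = [f; g]. Eliminating x1 leaves
# the Schur complement system (D - C A^{-1} B) x2 = g - C A^{-1} f.
rng = np.random.default_rng(0)
n1, n2 = 6, 4
A = rng.standard_normal((n1, n1)) + 10 * np.eye(n1)  # well-conditioned block
B = rng.standard_normal((n1, n2))
C = rng.standard_normal((n2, n1))
D = rng.standard_normal((n2, n2)) + 10 * np.eye(n2)
f = rng.standard_normal(n1)
g = rng.standard_normal(n2)

S = D - C @ np.linalg.solve(A, B)                       # Schur complement
x2 = np.linalg.solve(S, g - C @ np.linalg.solve(A, f))  # reduced solve
x1 = np.linalg.solve(A, f - B @ x2)                     # back-substitution

# Check against solving the assembled system directly.
K = np.block([[A, B], [C, D]])
x_full = np.linalg.solve(K, np.concatenate([f, g]))
```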
Constraints on scattering amplitudes in multistate Landau-Zener theory
NASA Astrophysics Data System (ADS)
Sinitsyn, Nikolai A.; Lin, Jeffmin; Chernyak, Vladimir Y.
2017-01-01
We derive a set of constraints, which we will call hierarchy constraints, on scattering amplitudes of an arbitrary multistate Landau-Zener model (MLZM). The presence of additional symmetries can transform such constraints into nontrivial relations between elements of the transition probability matrix. This observation can be used to derive complete solutions of some MLZMs or, for models that cannot be solved completely, to reduce the number of independent elements of the transition probability matrix.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of a polynomial linear parameter-varying (PLPV) system, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Fast and stable algorithms for computing the principal square root of a complex matrix
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Lian, Sui R.; Mcinnis, Bayliss C.
1987-01-01
This note presents recursive algorithms that are rapidly convergent and stable for finding the principal square root of a complex matrix. The developed algorithms are also utilized to derive fast and stable matrix sign algorithms, which are useful in applications to control system problems.
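One classical recursion in this family is the Denman-Beavers iteration, which converges quadratically to the principal square root for matrices with no eigenvalues on the closed negative real axis (shown purely as an illustration; not necessarily the exact scheme of the note):

```python
import numpy as np

def sqrtm_db(A, n_iter=30):
    """Principal matrix square root via the Denman-Beavers iteration:
    Y_{k+1} = (Y_k + Z_k^{-1}) / 2,  Z_{k+1} = (Z_k + Y_k^{-1}) / 2,
    with Y_0 = A, Z_0 = I; Y_k -> A^{1/2} and Z_k -> A^{-1/2}."""
    Y = A.astype(float).copy()
    Z = np.eye(A.shape[0])
    for _ in range(n_iter):
        # Tuple assignment evaluates both right-hand sides with the old Y, Z.
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)   # symmetric positive definite test matrix
R = sqrtm_db(A)
```

The related matrix sign function mentioned in the note satisfies sign(A) = A (A^2)^{-1/2}, which is why square-root iterations and sign iterations are closely linked.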
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In this framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI datasets, and the Georgia Tech face database show that the proposed framework can well address the issues mentioned above.
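The property that resolves the SSS problem is easy to verify numerically: the matrix exponential of any symmetric matrix is symmetric positive definite, since its eigenvalues are exp(λ_i) > 0. A short SciPy check (the symmetric matrix here is a random stand-in for a similarity matrix):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
S = rng.standard_normal((8, 8))
S = 0.5 * (S + S.T)            # symmetrize: an illustrative "similarity" matrix
E = expm(S)                    # matrix exponential

# E is symmetric and all of its eigenvalues are exp(lambda_i) > 0,
# so E is positive definite even when S itself is singular or indefinite.
eigvals = np.linalg.eigvalsh(E)
```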
Polar decomposition for attitude determination from vector observations
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.
1993-01-01
This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
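The closest-orthogonal-matrix step can be computed from an SVD, which is one standard way to realize the polar decomposition. A noiseless sketch (illustrative unit vectors and unit weights, not the paper's satellite data):

```python
import numpy as np

def closest_orthogonal(B):
    """Orthogonal polar factor of B: the closest orthogonal matrix to B in
    the Frobenius norm, obtained from the SVD B = U S V^T as Q = U V^T."""
    U, _, Vt = np.linalg.svd(B)
    return U @ Vt

# Attitude example: with noiseless measurements the polar factor of the
# attitude profile matrix recovers the true rotation exactly.
theta = 0.3
R_true = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
refs = np.eye(3)           # unit reference vectors (as columns)
meas = R_true @ refs       # the same vectors seen in the rotated frame
B = meas @ refs.T          # weighted attitude profile matrix (all w_i = 1)
R_est = closest_orthogonal(B)
```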
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model.
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
Xu, Enhua; Zhao, Dongbo; Li, Shuhua
2015-10-13
A multireference second order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalized group (DMRG) function has been proposed. This method may be considered as an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a similar computational cost as the conventional second order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide comparable results with CASPT2 (second order perturbation theory based on the complete active space self-consistent-field wave function) for systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to treat strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.
Exact algorithms for haplotype assembly from whole-genome sequence data.
Chen, Zhi-Zhong; Deng, Fei; Wang, Lusheng
2013-08-15
Haplotypes play a crucial role in genetic analysis and have many applications, such as gene-disease diagnosis, association studies, ancestry inference, and so forth. The development of DNA sequencing technologies makes it possible to obtain haplotypes from a set of aligned reads originating from both copies of a chromosome of a single individual. This approach is often known as haplotype assembly. Exact algorithms that can give optimal solutions to the haplotype assembly problem are in high demand. Unfortunately, previous algorithms for this problem either fail to output optimal solutions or take too long even when executed on a PC cluster. We develop an approach to finding optimal solutions for the haplotype assembly problem under the minimum-error-correction (MEC) model. Most of the previous approaches assume that the columns in the input matrix correspond to (putative) heterozygous sites. This all-heterozygous assumption is correct for most columns, but it may be incorrect for a small number of columns. In this article, we consider the MEC model with or without the all-heterozygous assumption. In our approach, we first use new methods to decompose the input read matrix into small independent blocks and then model the problem for each block as an integer linear programming problem, which is then solved by an integer linear programming solver. We have tested our program on a single PC [a Linux (x64) desktop PC with an i7-3960X CPU], using the filtered HuRef and NA12878 datasets (after applying some variant calling methods). With the all-heterozygous assumption, our approach can optimally solve the whole HuRef dataset within a total time of 31 h (26 h for the most difficult block of the 15th chromosome and only 5 h for the other blocks). To our knowledge, this is the first time that MEC optimal solutions have been completely obtained for the filtered HuRef dataset.
Moreover, in the general case (without the all-heterozygous assumption), our approach can optimally solve all the chromosomes of the HuRef dataset except the most difficult block in chromosome 15 within a total time of 12 days. For both the HuRef and NA12878 datasets, the optimal costs in the general case are sometimes much smaller than those in the all-heterozygous case. This implies that some columns in the input matrix (after applying certain variant calling methods) still correspond to false-heterozygous sites. Our program and the optimal solutions found for the HuRef dataset are available at http://rnc.r.dendai.ac.jp/hapAssembly.html.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP.
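The storage-format bookkeeping described above (a COO master copy on the CPU, column blocks in CSR for the per-GPU submatrices) can be sketched with SciPy sparse matrices; the matrix size, density, and number of groups below are illustrative:

```python
import numpy as np
from scipy.sparse import random as sparse_random, hstack

# A sparse "DDC-like" matrix assembled in COO format.
ddc = sparse_random(200, 400, density=0.01, format="coo", random_state=0)

# Split the columns into contiguous groups (one per beam-angle group),
# each stored in CSR as it would be shipped to one GPU.
n_groups = 4
cols = np.array_split(np.arange(ddc.shape[1]), n_groups)
csr = ddc.tocsr()
blocks = [csr[:, c[0]:c[-1] + 1] for c in cols]

# Sanity check: the CSR column blocks tile the original matrix exactly.
recombined = hstack(blocks, format="csr")
```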
A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on a CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
Managing healthcare information: analyzing trust.
Söderström, Eva; Eriksson, Nomie; Åhlfeldt, Rose-Mharie
2016-08-08
Purpose - The purpose of this paper is to analyze two case studies with a trust matrix tool, to identify trust issues related to electronic health records. Design/methodology/approach - A qualitative research approach is applied using two case studies. The data analysis of these studies generated a problem list, which was mapped to a trust matrix. Findings - Results demonstrate flaws in current practices and point to achieving balance between organizational, person and technology trust perspectives. The analysis revealed three challenge areas, to: achieve higher trust in patient-focussed healthcare; improve communication between patients and healthcare professionals; and establish clear terminology. By taking trust into account, a more holistic perspective on healthcare can be achieved, where trust can be obtained and optimized. Research limitations/implications - A trust matrix is tested and shown to identify trust problems on different levels and relating to trusting beliefs. Future research should elaborate and more fully address issues within the three identified challenge areas. Practical implications - The trust matrix's usefulness as a tool for organizations to analyze trust problems and issues is demonstrated. Originality/value - Healthcare trust issues are captured to a greater extent and from previously uncharted perspectives.
Tensor-GMRES method for large sparse systems of nonlinear equations
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
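The key practical point, that GMRES needs only matrix-vector products with the Jacobian and never a factorization, can be seen in a minimal SciPy sketch of one inner linear solve (a generic diagonally dominant stand-in for the Jacobian, not the paper's tensor model):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Sparse "Jacobian" J: only matrix-vector products J @ v are ever needed
# by GMRES, so J is never factored -- exactly the property exploited by
# Newton-GMRES (and tensor-GMRES) schemes.
n = 200
J = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
rhs = np.ones(n)

x, info = gmres(J, rhs, atol=1e-10)   # info == 0 signals convergence
residual = np.linalg.norm(J @ x - rhs)
```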
Cox, R; Lowe, D R
1996-05-01
Most studies of sandstone provenance involve modal analysis of framework grains using techniques that exclude the fine-grained breakdown products of labile mineral grains and rock fragments, usually termed secondary matrix or pseudomatrix. However, the data presented here demonstrate that, when the proportion of pseudomatrix in a sandstone exceeds 10%, standard petrographic analysis can lead to incorrect provenance interpretation. Petrographic schemes for provenance analysis such as QFL and QFR should not therefore be applied to sandstones containing more than 10% secondary matrix. Pseudomatrix is commonly abundant in sandstones, and this is therefore a problem for provenance analysis. The difficulty can be alleviated by the use of whole-rock chemistry in addition to petrographic analysis. Combination of chemical and point-count data permits the construction of normative compositions that approximate original framework grain compositions. Provenance analysis is also complicated in many cases by fundamental compositional alteration during weathering and transport. Many sandstones, particularly shallow marine deposits, have undergone vigorous reworking, which may destroy unstable mineral grains and rock fragments. In such cases it may not be possible to retrieve provenance information by either petrographic or chemical means. Because of this, pseudomatrix-rich sandstones should be routinely included in chemical-petrological provenance analysis. Because of the many factors, both pre- and post-depositional, that operate to increase the compositional maturity of sandstones, petrologic studies must include a complete inventory of matrix proportions, grain size and sorting parameters, and an assessment of depositional setting.
ERIC Educational Resources Information Center
Onega, Ronald J.
1969-01-01
Three problems in radioactive buildup and decay are presented and solved. Matrix algebra is used to solve the second problem. The third problem deals with flux depression and is solved by the use of differential equations. (LC)
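The matrix-algebra treatment of buildup and decay can be condensed with the matrix exponential: for a chain A → B → C (C stable), the coupled rate equations dN/dt = ΛN have the closed-form solution N(t) = e^{Λt} N(0). A sketch with illustrative decay constants (not the problems' actual values):

```python
import numpy as np
from scipy.linalg import expm

lam_a, lam_b = 0.5, 0.2   # illustrative decay constants for A and B
# Rate matrix: A loses atoms at lam_a, B gains from A and loses at lam_b,
# C only accumulates from B.
L = np.array([
    [-lam_a,   0.0,  0.0],
    [ lam_a, -lam_b, 0.0],
    [ 0.0,    lam_b, 0.0],
])
N0 = np.array([1000.0, 0.0, 0.0])   # start with pure A
t = 3.0
N = expm(L * t) @ N0                # populations of A, B, C at time t
```

Because each column of Λ sums to zero, the total number of atoms is conserved, and the first component reduces to the familiar N_A(t) = N_A(0) e^{-λ_A t}.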
Problems In Indoor Mapping and Modelling
NASA Astrophysics Data System (ADS)
Zlatanova, S.; Sithole, G.; Nakagawa, M.; Zhu, Q.
2013-11-01
Research in support of indoor mapping and modelling (IMM) has been active for over thirty years. This research has come in the form of as-built surveys, data structuring, visualisation techniques, navigation models and so forth. Much of this research is founded on advancements in photogrammetry, computer vision and image analysis, computer graphics, robotics, laser scanning and many others. While IMM used to be the preserve of engineers, planners, consultants, contractors, and designers, this is no longer the case, as commercial enterprises and individuals are also beginning to apply indoor models in their business processes and applications. There are three main reasons for this. Firstly, the last two decades have seen greater use of spatial information by enterprises and the public. Secondly, IMM has been complemented by advancements in mobile computing and internet communications, making it easier than ever to access and interact with spatial information. Thirdly, indoor modelling has been advanced geometrically and semantically, opening doors for developing user-oriented, context-aware applications. This reshaping of the public's attitude and expectations with regard to spatial information has realised new applications and spurred demand for indoor models and the tools to use them. This paper examines the present state of IMM and considers the research areas that deserve attention in the future. In particular the paper considers problems in IMM that are relevant to commercial enterprises and the general public, the groups that this paper expects will emerge as the greatest users of IMM. The subject of indoor modelling and mapping is discussed here in terms of acquisition and sensors, data structures and modelling, visualisation, applications, legal issues and standards. Problems are discussed in terms of those that exist and those that are emerging. Existing problems are those that are currently being researched.
Emerging problems are those problems or demands that are expected to arise because of social changes, technological advancements, or commercial interests. The motivation of this work is to define a set of research problems that are either being investigated or should be investigated. These will hopefully provide a framework for assessing progress and advances in indoor modelling. The framework will be developed in the form of a problem matrix, detailing existing and emerging problems, their solutions and present best practices. Once the framework is complete it will be published online so that the IMM community can discuss and modify it as necessary. When the framework has reached a steady state an empirical benchmark will be provided to test solutions to posed problems. A yearly evaluation of the problem matrix will follow, the results of which will be published.
Colonization of bone matrices by cellular components
NASA Astrophysics Data System (ADS)
Shchelkunova, E. I.; Voropaeva, A. A.; Korel, A. V.; Mayer, D. A.; Podorognaya, V. T.; Kirilova, I. A.
2017-09-01
Practical surgery, traumatology, orthopedics, and oncology require bioengineered constructs suitable for replacement of large-area bone defects. Only a rigid/elastic matrix containing the recipient's bone cells capable of mitosis, differentiation, and synthesizing extracellular matrix that supports cell viability can comply with these requirements. Therefore, the development of techniques to produce structural and functional substitutes, whose three-dimensional structure corresponds to the recipient's damaged tissues, is the main objective of tissue engineering. This is achieved by developing tissue-engineering constructs represented by cells placed on the matrices. Low effectiveness of carrier matrix colonization with cells and their uneven distribution is one of the major problems in cell culture on various matrices. In vitro studies of the interactions between cells and material, as well as the development of new techniques for scaffold colonization by cellular components, are required to solve this problem.
Yi, Sun; Nelson, Patrick W; Ulsoy, A Galip
2007-04-01
In a turning process modeled using delay differential equations (DDEs), we investigate the stability of the regenerative machine tool chatter problem. An approach using the matrix Lambert W function for the analytical solution to systems of delay differential equations is applied to this problem and compared with the result obtained using a bifurcation analysis. The Lambert W function, known to be useful for solving scalar first-order DDEs, has recently been extended to a matrix Lambert W function approach to solve systems of DDEs. The essential advantages of the matrix Lambert W approach are not only the similarity to the concept of the state transition matrix in linear ordinary differential equations, enabling its use for general classes of linear delay differential equations, but also the observation that we need only the principal branch among an infinite number of roots to determine the stability of a system of DDEs. The bifurcation method combined with Sturm sequences provides an algorithm for determining the stability of DDEs without restrictive geometric analysis. With this approach, one can obtain the critical values of delay, which determine the stability of a system and hence the preferred operating spindle speed without chatter. We apply both the matrix Lambert W function and the bifurcation analysis approach to the problem of chatter stability in turning, and compare the results obtained to existing methods. The two new approaches show excellent accuracy and certain other advantages, when compared to traditional graphical, computational and approximate methods.
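The scalar first-order case mentioned above admits a closed form that is easy to check numerically. The sketch below illustrates only the scalar Lambert W idea (not the paper's matrix extension or its chatter model), finding the rightmost characteristic root of x'(t) = a·x(t) + b·x(t − τ) from the principal branch of SciPy's `lambertw`:

```python
import numpy as np
from scipy.special import lambertw

def rightmost_root(a, b, tau):
    """Rightmost characteristic root of the scalar DDE x'(t) = a*x(t) + b*x(t - tau).

    The characteristic equation s - a - b*exp(-s*tau) = 0 is solved branch-wise by
    s_k = W_k(b*tau*exp(-a*tau)) / tau + a; the principal branch k = 0 yields the
    root with the largest real part, which decides stability.
    """
    return lambertw(b * tau * np.exp(-a * tau), k=0) / tau + a

# pure delayed feedback x'(t) = -x(t - tau) is stable iff tau < pi/2
print(rightmost_root(0.0, -1.0, 0.5).real < 0)   # True: stable
print(rightmost_root(0.0, -1.0, 2.0).real < 0)   # False: unstable
```

For a = 0, b = −1 the classical stability boundary is τ = π/2 ≈ 1.57, which the sign of the principal-branch root reproduces.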
Portfolio optimization and the random magnet problem
NASA Astrophysics Data System (ADS)
Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.
2002-08-01
Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated and therefore knowledge of cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
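A common way to apply random matrix theory here is to compare the eigenvalues of the empirical correlation matrix against the Marchenko-Pastur band and treat only those above its upper edge as information. The sketch below is a generic illustration with simulated returns and a hypothetical one-factor "market" signal, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 50                        # T return observations for N stocks (illustrative)
market = rng.standard_normal(T)        # hypothetical common "market" factor
returns = rng.standard_normal((T, N)) + 0.5 * market[:, None]
C = np.corrcoef(returns, rowvar=False)

# Marchenko-Pastur upper edge: a purely random correlation matrix has all
# eigenvalues inside [(1 - sqrt(q))^2, (1 + sqrt(q))^2] with q = N / T
q = N / T
lam_max = (1 + np.sqrt(q)) ** 2

eigval, eigvec = np.linalg.eigh(C)     # ascending eigenvalues
# keep eigenvalues above the random band ("information"), flatten the
# rest to their mean ("noise"), and rebuild a filtered correlation matrix
noise = eigval <= lam_max
eigval_f = eigval.copy()
eigval_f[noise] = eigval[noise].mean()
C_filtered = eigvec @ np.diag(eigval_f) @ eigvec.T
np.fill_diagonal(C_filtered, 1.0)

print(eigval[-1] > lam_max)            # True: the market mode escapes the random band
```

The filtered matrix can then replace the raw estimate of C in a minimum-risk portfolio optimization.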
NASA Technical Reports Server (NTRS)
Callier, F. M.; Nahum, C. D.
1975-01-01
The series connection of two linear time-invariant systems that have minimal state space system descriptions is considered. From these descriptions, strict-system-equivalent polynomial matrix system descriptions in the manner of Rosenbrock are derived. They are based on the factorization of the transfer matrix of the subsystems as a ratio of two right or left coprime polynomial matrices. They give rise to a simple polynomial matrix system description of the tandem connection. Theorem 1 states that for the complete controllability and observability of the state space system description of the series connection, it is necessary and sufficient that certain 'denominator' and 'numerator' groups are coprime. Consequences for feedback systems are drawn in Corollary 1. The role of pole-zero cancellations is explained by Lemma 3 and Corollaries 2 and 3.
Constraints on scattering amplitudes in multistate Landau-Zener theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinitsyn, Nikolai A.; Lin, Jeffmin; Chernyak, Vladimir Y.
2017-01-30
Here, we derive a set of constraints, which we will call hierarchy constraints, on scattering amplitudes of an arbitrary multistate Landau-Zener model (MLZM). The presence of additional symmetries can transform such constraints into nontrivial relations between elements of the transition probability matrix. This observation can be used to derive complete solutions of some MLZMs or, for models that cannot be solved completely, to reduce the number of independent elements of the transition probability matrix.
SparRec: An effective matrix completion framework of missing data imputation for GWAS
NASA Astrophysics Data System (ADS)
Jiang, Bo; Ma, Shiqian; Causey, Jason; Qiao, Linbo; Hardin, Matthew Price; Bitts, Ian; Johnson, Daniel; Zhang, Shuzhong; Huang, Xiuzhen
2016-10-01
Genome-wide association studies present computational challenges for missing data imputation, as advances in genotyping technologies generate datasets of large sample size, with sample sets genotyped on multiple SNP chips. We present a new framework SparRec (Sparse Recovery) for imputation, with the following properties: (1) The optimization models of SparRec, based on low-rank and low number of co-clusters of matrices, are different from current statistical methods. While our low-rank matrix completion (LRMC) model is similar to Mendel-Impute, our matrix co-clustering factorization (MCCF) model is completely new. (2) Like other matrix completion methods, SparRec can flexibly be applied to missing data imputation for large meta-analyses with different cohorts genotyped on different sets of SNPs, even when there is no reference panel. This kind of meta-analysis is very challenging for current statistics-based methods. (3) SparRec has consistent performance and achieves high recovery accuracy even when the missing data rate is as high as 90%. Compared with Mendel-Impute, our low-rank based method achieves similar accuracy and efficiency, while the co-clustering based method has advantages in running time. The testing results show that SparRec has significant advantages and competitive performance over other state-of-the-art statistical methods including Beagle and fastPhase.
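Low-rank matrix completion of the kind LRMC performs is often relaxed to nuclear-norm minimization and solved by iterative SVD soft-thresholding. The sketch below implements a generic SoftImpute-style iteration on synthetic data; it illustrates the low-rank completion idea only, not SparRec's actual algorithm, and the matrix sizes and the shrinkage parameter `lam` are arbitrary choices:

```python
import numpy as np

def soft_impute(M, mask, lam, iters=200):
    """Nuclear-norm flavored completion: iteratively fill the missing
    entries with the current estimate and shrink all singular values."""
    X = np.zeros_like(M)
    for _ in range(iters):
        filled = np.where(mask, M, X)            # keep observed entries as-is
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = (U * np.maximum(s - lam, 0.0)) @ Vt  # soft-threshold the spectrum
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))  # rank-4 ground truth
mask = rng.random(A.shape) < 0.6                 # observe 60% of entries
A_hat = soft_impute(A, mask, lam=0.5)

# relative error on the entries that were never observed
err = np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask])
print(round(err, 3))
```

With enough observed entries relative to the rank, the unobserved entries are recovered to within a small relative error.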
Estimation of a cover-type change matrix from error-prone data
Steen Magnussen
2009-01-01
Coregistration and classification errors seriously compromise per-pixel estimates of land cover change. A more robust estimation of change is proposed in which adjacent pixels are grouped into 3x3 clusters and treated as a unit of observation. A complete change matrix is recovered in a two-step process. The diagonal elements of a change matrix are recovered from...
Algebraic multigrid methods applied to problems in computational structural mechanics
NASA Technical Reports Server (NTRS)
Mccormick, Steve; Ruge, John
1989-01-01
The development of algebraic multigrid (AMG) methods and their application to certain problems in structural mechanics are described with emphasis on two- and three-dimensional linear elasticity equations and the 'jacket problems' (three-dimensional beam structures). Various possible extensions of AMG are also described. The basic idea of AMG is to develop the discretization sequence based on the target matrix and not the differential equation. Therefore, the matrix is analyzed for certain dependencies that permit the proper construction of coarser matrices and attendant transfer operators. In this manner, AMG appears to be adaptable to structural analysis applications.
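The "certain dependencies" AMG analyzes the matrix for are usually the classical Ruge-Stüben strong connections. The sketch below computes them for a 1-D Poisson matrix; it is a minimal illustration of the strength-of-connection test under an assumed M-matrix-like sign pattern, not a full AMG setup:

```python
import numpy as np

def strong_connections(A, theta=0.25):
    """Classical Ruge-Stueben strength test: j strongly influences i when
    -A[i, j] >= theta * max_k(-A[i, k]) over k != i (M-matrix sign pattern)."""
    n = A.shape[0]
    S = np.zeros((n, n), dtype=bool)
    for i in range(n):
        off = -A[i].astype(float).copy()
        off[i] = -np.inf                 # exclude the diagonal from the test
        S[i] = off >= theta * off.max()
    return S

# 1-D Poisson matrix: each interior point depends strongly on both neighbors
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
S = strong_connections(A)
print(S[2])   # strong connections of row 2: neighbors 1 and 3 only
```

The coarse grid is then chosen from this boolean dependency pattern rather than from the underlying differential equation.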
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix.
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
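One standard realization of this idea in a weighted least squares setting is to rescale the theoretical covariance by the average weighted residual variance, so that mismodeled measurement errors inflate the reported state uncertainty. The sketch below is a simplified illustration of that mechanism (a generic linear problem with a deliberately mismodeled noise level, not the paper's triangulation example):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 3
A = rng.standard_normal((m, n))          # observation-to-state mapping
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed = 0.1                      # observation error we *think* we have
sigma_actual = 0.3                       # mismodeled case: real error is 3x larger
y = A @ x_true + sigma_actual * rng.standard_normal(m)

W = np.eye(m) / sigma_assumed**2         # weights built from the assumed errors
P_theory = np.linalg.inv(A.T @ W @ A)    # traditional state error covariance
x_hat = P_theory @ A.T @ W @ y

# empirical covariance: rescale by the average weighted residual variance,
# so error sources absent from the assumed model show up in the uncertainty
r = y - A @ x_hat
s2 = (r @ W @ r) / (m - n)
P_empirical = s2 * P_theory

print(np.sqrt(s2))                       # ~ sigma_actual / sigma_assumed = 3
```

The theoretical covariance reflects only the assumed 0.1 sigma, while the residual-based rescaling recovers roughly the true, three-times-larger uncertainty.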
Partitioning Rectangular and Structurally Nonsymmetric Sparse Matrices for Parallel Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
B. Hendrickson; T.G. Kolda
1998-09-01
A common operation in scientific computing is the multiplication of a sparse, rectangular or structurally nonsymmetric matrix and a vector. In many applications the matrix-transpose-vector product is also required. This paper addresses the efficient parallelization of these operations. We show that the problem can be expressed in terms of partitioning bipartite graphs. We then introduce several algorithms for this partitioning problem and compare their performance on a set of test matrices.
An improved error bound for linear complementarity problems for B-matrices.
Gao, Lei; Li, Chaoqian
2017-01-01
A new error bound for the linear complementarity problem when the matrix involved is a B-matrix is presented, which improves the corresponding result in (Li et al. in Electron. J. Linear Algebra 31(1):476-484, 2016). In addition, some sufficient conditions such that the new bound is sharper than that in (García-Esnaola and Peña in Appl. Math. Lett. 22(7):1071-1075, 2009) are provided.
Kim, Yoon Jae; Kim, Yoon Young
2010-10-01
This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly-used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated and solid panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization problems are solved to check the efficiency of the developed method.
Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision
Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao
2015-01-01
In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), the structure matrix is calculated with a factorization method, based on characteristic points' trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize the non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is formulated, implementing the non-rigid structure estimation by introducing the basic low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is proposed to solve the rank minimization problem, and the initial structure matrix calculated by the PTA method is optimized. The APG algorithm converges to efficient solutions quickly and noticeably reduces the reconstruction error. The reconstruction results of real image sequences indicate that the proposed approach runs reliably, and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
NASA Astrophysics Data System (ADS)
Lee, Gibbeum; Cho, Yeunwoo
2018-01-01
A new semi-analytical approach is presented to solve the matrix eigenvalue problem or the integral equation in Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations are considered, which are related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed by the functional values of the PSWF and the eigenvalues obtained in the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, with the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation and the computation time of the present method is shorter than that of the semi-analytical method based on the sinusoidal functions.
NASA Astrophysics Data System (ADS)
Kushch, Volodymyr I.; Sevostianov, Igor; Giraud, Albert
2017-11-01
An accurate semi-analytical solution of the conductivity problem for a composite with anisotropic matrix and arbitrarily oriented anisotropic ellipsoidal inhomogeneities has been obtained. The developed approach combines the superposition principle with the multipole expansion of perturbation fields of inhomogeneities in terms of ellipsoidal harmonics and reduces the boundary value problem to an infinite system of linear algebraic equations for the induced multipole moments of inhomogeneities. A complete full-field solution is obtained for the multi-particle models comprising inhomogeneities of diverse shape, size, orientation and properties which enables an adequate account for the microstructure parameters. The solution is valid for the general-type anisotropy of constituents and arbitrary orientation of the orthotropy axes. The effective conductivity tensor of the particulate composite with anisotropic constituents is evaluated in the framework of the generalized Maxwell homogenization scheme. Application of the developed method to composites with imperfect ellipsoidal interfaces is straightforward. Their incorporation yields probably the most general model of a composite that may be considered in the framework of analytical approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zanetti, F.M.; Vicentini, E.; Luz, M.G.E. da
A simple approach for obtaining scattering states for arbitrary disconnected open or closed boundaries C, with different boundary conditions, was proposed about a decade ago [M.G.E. da Luz, A.S. Lupu-Sax, E.J. Heller, Phys. Rev. E 56 (1997) 2496]. Since then, the so-called boundary wall method has been successfully used to solve different open boundary problems. However, its applicability to closed shapes has not been fully explored. In this contribution we present a complete account of how to apply the boundary wall method to billiard systems. We review the general ideas and particularize them to singly connected closed shapes, assuming Dirichlet boundary conditions for the C's. We discuss the mathematical aspects that lead to both the inside and outside solutions. We also present a different way to calculate the exterior scattering S matrix. From it, we revisit the important inside-outside duality for billiards. Finally, we give some numerical examples, illustrating the efficiency and flexibility of the method for treating this type of problem.
Entanglement dynamics in a non-Markovian environment: An exactly solvable model
NASA Astrophysics Data System (ADS)
Wilson, Justin H.; Fregoso, Benjamin M.; Galitski, Victor M.
2012-05-01
We study the non-Markovian effects on the dynamics of entanglement in an exactly solvable model that involves two independent oscillators, each coupled to its own stochastic noise source. First, we develop Lie algebraic and functional integral methods to find an exact solution to the single-oscillator problem which includes an analytic expression for the density matrix and the complete statistics, i.e., the probability distribution functions for observables. For long bath time correlations, we see nonmonotonic evolution of the uncertainties in observables. Further, we extend this exact solution to the two-particle problem and find the dynamics of entanglement in a subspace. We find the phenomena of “sudden death” and “rebirth” of entanglement. Interestingly, all memory effects enter via the functional form of the energy and hence the time of death and rebirth is controlled by the amount of noisy energy added into each oscillator. If this energy increases above (decreases below) a threshold, we obtain sudden death (rebirth) of entanglement.
NASA Astrophysics Data System (ADS)
Ledet, Lasse S.; Sorokin, Sergey V.
2018-03-01
The paper addresses the classical problem of time-harmonic forced vibrations of a fluid-filled cylindrical shell considered as a multi-modal waveguide carrying infinitely many waves. The forced vibration problem is solved using tailored Green's matrices formulated in terms of eigenfunction expansions. The formulation of Green's matrix is based on special (bi-)orthogonality relations between the eigenfunctions, which are derived here for the fluid-filled shell. Further, the relations are generalised to any multi-modal symmetric waveguide. Using the orthogonality relations the transcendental equation system is converted into algebraic modal equations that can be solved analytically. Upon formulation of Green's matrices the solution space is studied in terms of completeness and convergence (uniformity and rate). Special features and findings exposed only through this modal decomposition method are elaborated and the physical interpretation of the bi-orthogonality relation is discussed in relation to the total energy flow which leads to derivation of simplified equations for the energy flow components.
Electroluminescence from completely horizontally oriented dye molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komino, Takeshi; Center for Organic Photonics and Electronics Research, Kyushu University, 744 Motooka, Nishi, Fukuoka 819-0395; Japan Science and Technology Agency, ERATO, Adachi Molecular Exciton Engineering Project, 744 Motooka, Nishi, Fukuoka 819-0395
2016-06-13
A complete horizontal molecular orientation of a linear-shaped thermally activated delayed fluorescent guest emitter 2,6-bis(4-(10H-phenoxazin-10-yl)phenyl)benzo[1,2-d:5,4-d′]bis(oxazole) (cis-BOX2) was obtained in a glassy host matrix by vapor deposition. The orientational order of cis-BOX2 depended on the combination of deposition temperature and the type of host matrix. Complete horizontal orientation was obtained when a thin film with cis-BOX2 doped in a 4,4′-bis(N-carbazolyl)-1,1′-biphenyl (CBP) host matrix was fabricated at 200 K. The ultimate orientation of guest molecules originates from not only the kinetic relaxation but also the kinetic stability of the deposited guest molecules on the film surface during film growth. Utilizing the ultimate orientation, a highly efficient organic light-emitting diode with an external quantum efficiency of 33.4 ± 2.0% was realized. The thermal stability of the horizontal orientation of cis-BOX2 was governed by the glass transition temperature (T_g) of the CBP host matrix; the horizontal orientation was stable unless the film was annealed above T_g.
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
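Viewing 1-D deconvolution as matrix inversion means writing the convolution as a Toeplitz system y = Hx and inverting it. The pseudo-inverse baseline the abstract compares against can be sketched as follows (an illustration with an assumed blur kernel, noise-free for clarity):

```python
import numpy as np
from scipy.linalg import toeplitz

# write the 1-D convolution y = h * x as a matrix equation y = H @ x
h = np.array([0.25, 0.5, 0.25])              # assumed blur kernel
n = 32
x = np.zeros(n); x[[8, 20]] = 1.0            # sparse "true" signal (two spikes)
col = np.zeros(n); col[:h.size] = h
H = toeplitz(col, np.zeros(n))               # lower-triangular convolution matrix
y = H @ x                                    # blurred observation

# deconvolution as matrix inversion via the Moore-Penrose pseudo-inverse
x_rec = np.linalg.pinv(H) @ y
print(np.allclose(x_rec, x))                 # True: exact recovery without noise
```

With measurement noise the pseudo-inverse amplifies high-frequency error, which is what motivates regularized alternatives such as LMS.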
POLARIZED LINE FORMATION WITH LOWER-LEVEL POLARIZATION AND PARTIAL FREQUENCY REDISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supriya, H. D.; Sampoorna, M.; Nagendra, K. N.
2016-09-10
In the well-established theories of polarized line formation with partial frequency redistribution (PRD) for a two-level and two-term atom, it is generally assumed that the lower level of the scattering transition is unpolarized. However, the existence of unexplained spectral features in some lines of the Second Solar Spectrum points toward a need to relax this assumption. There exists a density matrix theory that accounts for the polarization of all the atomic levels, but it is based on the flat-spectrum approximation (corresponding to complete frequency redistribution). In the present paper we propose a numerical algorithm to solve the problem of polarized line formation in magnetized media, which includes both the effects of PRD and the lower level polarization (LLP) for a two-level atom. First we derive a collisionless redistribution matrix that includes the combined effects of the PRD and the LLP. We then solve the relevant transfer equation using a two-stage approach. For illustration purposes, we consider two case studies in the non-magnetic regime, namely, J_a = 1, J_b = 0 and J_a = J_b = 1, where J_a and J_b represent the total angular momentum quantum numbers of the lower and upper states, respectively. Our studies show that the effects of LLP are significant only in the line core. This leads us to propose a simplified numerical approach to solve the concerned radiative transfer problem.
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic ones, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be directly determined via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrated that this simple method is valid at least for general linear inverse problems.
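For a linear inverse problem the resolution matrix referred to above is R = G†G, where G† is the generalized inverse actually used in the inversion; each row of R shows how the corresponding estimated parameter averages the true model. Below is a minimal sketch with a damped least-squares inverse (illustrative sizes and damping; this is the classical construction, not the statistical resolution matrix the abstract proposes):

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((15, 30))       # underdetermined forward operator d = G m
lam = 0.1                               # damping (regularization) parameter

# damped least-squares generalized inverse and its resolution matrix
G_dag = np.linalg.solve(G.T @ G + lam * np.eye(30), G.T)
R = G_dag @ G                           # m_est = R @ m_true for noise-free data

m_true = rng.standard_normal(30)
m_est = G_dag @ (G @ m_true)
print(np.allclose(m_est, R @ m_true))   # True: R maps the true model to the estimate
```

A delta-like row of R means the corresponding parameter is well resolved; broad rows reveal smearing, which is what a checkerboard test probes only qualitatively.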
Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication
Ballard, Grey; Druinsky, Alex; Knight, Nicholas; ...
2015-01-01
The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Furthermore, our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.
Peripheral nerve conduits: technology update
Arslantunali, D; Dursun, T; Yucel, D; Hasirci, N; Hasirci, V
2014-01-01
Peripheral nerve injury is a worldwide clinical problem which can lead to loss of neuronal communication along sensory and motor nerves between the central nervous system (CNS) and the peripheral organs, impairing a patient's quality of life. The primary requirement for the treatment of complete lesions is a tension-free, end-to-end repair. When end-to-end repair is not possible, peripheral nerve grafts or nerve conduits are used. The limited availability of autografts, and drawbacks of allografts and xenografts such as immunological reactions, have forced researchers to investigate and develop alternative approaches, mainly nerve conduits. In this review, recent information on the various types of conduit materials (made of biological and synthetic polymers) and designs (tubular, fibrous, and matrix type) is presented. PMID:25489251
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Zhaojun; Yang, Chao
What is common among electronic structure calculation, design of MEMS devices, vibrational analysis of high speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting-edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.
A penny-shaped crack in a filament reinforced matrix. 1: The filament model
NASA Technical Reports Server (NTRS)
Erdogan, F.; Pacella, A. H.
1973-01-01
The elastostatic problem of a penny-shaped crack in an elastic matrix reinforced by filaments or fibers perpendicular to the plane of the crack is studied. An elastic filament model was developed for application to evaluation studies of the stress intensity factor along the periphery of the crack, the stresses in the filaments or fibers, and the interface shear between the matrix and the filaments or fibers. The requirements expected of the model are a sufficiently accurate representation of the filament and applicability to interaction problems involving a cracked elastic continuum with multi-filament reinforcements. The technique for developing the model and numerical examples are presented.
Penny-shaped crack in a fiber-reinforced matrix. [elastostatics
NASA Technical Reports Server (NTRS)
Narayanan, T. V.; Erdogan, F.
1974-01-01
Using a slender inclusion model developed earlier, the elastostatic interaction problem between a penny-shaped crack and elastic fibers in an elastic matrix is formulated. For a single set and for multiple sets of fibers oriented perpendicularly to the plane of the crack and distributed symmetrically on concentric circles, the problem was reduced to a system of singular integral equations. Techniques for the regularization and for the numerical solution of the system are outlined. For various fiber geometries numerical examples are given, and distribution of the stress intensity factor along the crack border was obtained. Sample results showing the distribution of the fiber stress and a measure of the fiber-matrix interface shear are also included.
Penny-shaped crack in a fiber-reinforced matrix
NASA Technical Reports Server (NTRS)
Narayanan, T. V.; Erdogan, F.
1975-01-01
Using the slender inclusion model developed earlier the elastostatic interaction problem between a penny-shaped crack and elastic fibers in an elastic matrix is formulated. For a single set and for multiple sets of fibers oriented perpendicularly to the plane of the crack and distributed symmetrically on concentric circles the problem is reduced to a system of singular integral equations. Techniques for the regularization and for the numerical solution of the system are outlined. For various fiber geometries numerical examples are given and distribution of the stress intensity factor along the crack border is obtained. Sample results showing the distribution of the fiber stress and a measure of the fiber-matrix interface shear are also included.
NASA Technical Reports Server (NTRS)
Lakin, W. D.
1981-01-01
The use of integrating matrices in solving differential equations associated with rotating beam configurations is examined. In vibration problems, by expressing the equations of motion of the beam in matrix notation, utilizing the integrating matrix as an operator, and applying the boundary conditions, the spatial dependence is removed from the governing partial differential equations and the resulting ordinary differential equations can be cast into standard eigenvalue form. Integrating matrices are derived based on two dimensional rectangular grids with arbitrary grid spacings allowed in one direction. The derivation of higher dimensional integrating matrices is the initial step in the generalization of the integrating matrix methodology to vibration and stability problems involving plates and shells.
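The operator idea described above can be sketched in one dimension: a trapezoidal-rule integrating matrix turns cumulative integration into a matrix-vector product, so spatial integrals in the equations of motion become linear-algebra operations. This is a minimal illustration under assumed node spacing and integration rule, not the paper's own construction:

```python
import numpy as np

def integrating_matrix(x):
    """Trapezoidal-rule integrating matrix L: (L @ f) approximates
    the cumulative integral of f from x[0] to each node of x."""
    n = len(x)
    L = np.zeros((n, n))
    for i in range(1, n):
        h = x[i] - x[i - 1]
        L[i] = L[i - 1]            # carry the partial sum up to the previous node
        L[i, i - 1] += h / 2.0     # trapezoid weights for the new panel
        L[i, i] += h / 2.0
    return L

x = np.linspace(0.0, 1.0, 101)
F = integrating_matrix(x) @ (2.0 * x)   # integral of 2t from 0 to x is x**2
```

Because the integrand is linear, the trapezoidal rule is exact here and `F` matches `x**2` to machine precision; applying `L` repeatedly (or transposing boundary conditions into it) is how the matrix acts as an operator on the governing equations.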
Isotropic matrix elements of the collision integral for the Boltzmann equation
NASA Astrophysics Data System (ADS)
Ender, I. A.; Bakaleinikov, L. A.; Flegontova, E. Yu.; Gerasimenko, A. B.
2017-09-01
We have proposed an algorithm for constructing matrix elements of the collision integral for the nonlinear Boltzmann equation isotropic in velocities. These matrix elements have been used to start the recurrent procedure for calculating matrix elements of the velocity-nonisotropic collision integral described in our previous publication. In addition, isotropic matrix elements are of independent interest for calculating isotropic relaxation in a number of physical kinetics problems. It has been shown that the coefficients of expansion of isotropic matrix elements in Ω integrals are connected by the recurrent relations that make it possible to construct the procedure of their sequential determination.
Effect of spaceflight on the extracellular matrix of skeletal muscle after a crush injury
NASA Technical Reports Server (NTRS)
Stauber, W. T.; Fritz, V. K.; Burkovskaia, T. E.; Il'ina-Kakueva, E. I.
1992-01-01
The organization and composition of the extracellular matrix were studied in the crush-injured gastrocnemius muscle of rats subjected to 0 G. After 14 days of flight on Cosmos 2044, the gastrocnemius muscle was removed and evaluated by histochemical and immunohistochemical techniques from the five injured flight rodents and various earth-based treatment groups. In general, the repair process was similar in all injured muscle samples with regard to the organization of the extracellular matrix and myofibers. Small and large myofibers were present within an expanded extracellular matrix, indicative of myogenesis and muscle regeneration. In the tail-suspended animals, a more complete repair was observed, with no enlarged areas of nonmuscle cells or matrix material visible. In contrast, the muscle samples from the flight animals were less well organized and contained more macrophages and blood vessels in the repair region, indicative of a delayed repair process, but did not demonstrate any chronic inflammation. Myofiber repair did vary in muscles from the different groups, being slowest in the flight animals and most complete in the tail-suspended ones.
Matrix analysis and risk management to avert depression and suicide among workers
2010-01-01
Suicide is among the most tragic outcomes of all mental disorders, and the prevalence of suicide has risen dramatically during the last decade, particularly among workers. This paper reviews and proposes strategies to avert suicide and depression based on the mind-body medicine hypothesis and a matrix analysis of mental health problems from a public health and clinical medicine view. In occupational fields, the mind-body medicine hypothesis has to deal with the working environment, working conditions, and workers' health. These three factors were chosen based on the concept of risk control called San-kanri, which has traditionally been used in Japanese companies, and on the causation concepts of host, agent, and environment. The working environment and working conditions were given special focus with regard to tackling suicide problems. A matrix analysis was conducted by dividing the problem into nine cells: three prevention levels (primary, secondary, and tertiary) were proposed for each of the three factors (working environment, working conditions, and workers' health). After presenting these main strategies (mind-body medicine analysis and matrix analysis) for tackling suicide problems, the paper discusses the versatility of case-method teaching, "Hiyari-Hat activity," routine inspections by professionals, risk assessment analysis, and mandatory health check-ups focusing on sleep and depression. In the risk assessment analysis, an exact assessment model was suggested using a formula based on multiplication of the following three factors: (1) severity, (2) frequency, and (3) possibility. Mental health problems, including suicide, are rather tricky to deal with because they involve evaluation of individual cases. The mind-body medicine hypothesis and matrix analysis would be appropriate tactics for suicide prevention because they help frame the issue as a tangible problem.
PMID:21054837
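The multiplicative risk model above (severity x frequency x possibility) can be sketched in a few lines; the hazard names and scores below are hypothetical, chosen only to show how the product yields a priority ranking:

```python
def risk_score(severity, frequency, possibility):
    """Multiplicative risk assessment: higher score -> higher priority."""
    return severity * frequency * possibility

# hypothetical occupational hazards scored on 1-5 scales
hazards = {
    "long overtime": risk_score(3, 4, 4),
    "poor sleep screening": risk_score(4, 2, 3),
}
ranked = sorted(hazards, key=hazards.get, reverse=True)  # highest risk first
```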
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance testing. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
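The thresholding idea can be illustrated numerically. The noise level and the factor-of-two safety margin on the Gaussian perturbation bound below are assumptions for the sketch, not the paper's derived confidence bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))   # true rank 3
sigma = 1e-3
noisy = A + sigma * rng.standard_normal(A.shape)                  # i.i.d. noise model

s = np.linalg.svd(noisy, compute_uv=False)
# assumed bound on the spectral norm of the noise: sigma*(sqrt(m)+sqrt(n)),
# doubled for safety; singular values above it are deemed significant
tau = 2.0 * sigma * (np.sqrt(50) + np.sqrt(40))
effective_rank = int(np.sum(s > tau))
```

The significant singular values sit orders of magnitude above `tau`, while the noise-induced ones fall below it, so the effective rank is recovered correctly.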
Fine-granularity inference and estimations to network traffic for SDN.
Jiang, Dingde; Huo, Liuwei; Li, Ya
2018-01-01
An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain smooth reconstruction values. To attain an accurate estimate of the end-to-end network traffic in fine time granularity, we perform a weighted-geometric-average process on the two interpolation results. The simulation results show that our approaches are feasible and effective.
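The combining step can be sketched as follows. For brevity, linear interpolation and a moving-average smoother stand in for the fractal and spline reconstructions, and the traffic trace and weight are illustrative assumptions:

```python
import numpy as np

# coarse 5 s samples of one link's traffic (synthetic)
t_coarse = np.arange(0.0, 60.0, 5.0)
flow = 100.0 + 20.0 * np.sin(t_coarse / 10.0)

# two independent fine-granularity (1 s) reconstructions
t_fine = np.arange(0.0, 56.0)
linear = np.interp(t_fine, t_coarse, flow)
# crude smooth reconstruction standing in for the spline/fractal step
smooth = np.convolve(np.pad(linear, 2, mode="edge"), np.ones(5) / 5, mode="valid")

w = 0.6                                       # assumed weight
recovered = linear**w * smooth**(1.0 - w)     # weighted geometric average
```

The geometric average keeps the estimate positive (traffic volumes cannot be negative) and blends the two reconstructions multiplicatively rather than additively.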
Research and simulation of the decoupling transformation in AC motor vector control
NASA Astrophysics Data System (ADS)
He, Jiaojiao; Zhao, Zhongjie; Liu, Ken; Zhang, Yongping; Yao, Tuozhong
2018-04-01
Permanent magnet synchronous motor (PMSM) is a nonlinear, strongly coupled, multivariable plant, and a decoupling transformation can solve its coupling problem. This paper gives a PMSM mathematical model and introduces the coordinate transformations used in PMSM vector control. By diagonalizing the inductance matrix with a modal matrix, drawing on standard matrix theory for changes of coordinates, the coupled quantities become independent, so that the torque-producing and excitation components of the motor current can be controlled separately. The coordinate transformation matrices are derived, providing an approach to the decoupling problem of AC motors. Finally, in the Matlab/Simulink environment, a simulation model of PMSM vector control is built by combining the PMSM body with the coordinate conversion modules; the components of the model are described and the simulation results are analyzed.
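The decoupling rests on an orthogonal change of coordinates. A sketch using the power-invariant Park matrix (one standard convention, not necessarily the paper's exact form) shows how balanced three-phase currents map to constant, decoupled d/q components:

```python
import numpy as np

def park(theta):
    """abc -> dq0 transformation, power-invariant form (one common convention)."""
    c = np.sqrt(2.0 / 3.0)
    return c * np.array([
        [np.cos(theta), np.cos(theta - 2 * np.pi / 3), np.cos(theta + 2 * np.pi / 3)],
        [-np.sin(theta), -np.sin(theta - 2 * np.pi / 3), -np.sin(theta + 2 * np.pi / 3)],
        [1 / np.sqrt(2)] * 3,
    ])

theta = 0.7                               # rotor electrical angle (rad)
T = park(theta)                           # orthogonal: T @ T.T == I
i_abc = np.cos([theta, theta - 2 * np.pi / 3, theta + 2 * np.pi / 3])  # balanced currents
i_dq0 = T @ i_abc                         # constant d component; q and zero components vanish
```

Because `T` is orthogonal, the same matrix diagonalizes a symmetric inductance matrix by similarity, which is exactly the decoupling step the abstract describes.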
NASA Astrophysics Data System (ADS)
Qing, Zhou; Weili, Jiao; Tengfei, Long
2014-03-01
The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method to solve for RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its advantage because of the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect multicollinearity, but also locate the parameters involved and show the corresponding columns of the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
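The CIVDP diagnostics can be sketched from the SVD of a design matrix. The collinear test data and the Belsley-style condition-index threshold of 30 below are assumptions for the illustration:

```python
import numpy as np

def civdp(X):
    """Condition indices and variance-decomposition proportions of a design matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cond_idx = s[0] / s                         # one condition index per singular value
    phi = (Vt.T ** 2) / s**2                    # phi[j, k]: parameter j, singular value k
    vdp = phi / phi.sum(axis=1, keepdims=True)  # rows sum to 1 across singular values
    return cond_idx, vdp

rng = np.random.default_rng(1)
x1 = rng.standard_normal(100)
# columns 0 and 1 are nearly collinear; column 2 is independent
X = np.column_stack([x1, x1 + 1e-4 * rng.standard_normal(100), rng.standard_normal(100)])
ci, vdp = civdp(X)
```

A large condition index paired with two or more parameters whose variance proportions concentrate on that same small singular value localizes the multicollinearity, which is the "locate the parameters and show the corresponding columns" step.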
Construction of the Dependence Matrix Based on the TRIZ Contradiction Matrix in OOD
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Quan; Wang, Yanling; Luo, Tao
In Object-Oriented software design (OOD), the design of classes and objects, the definition of class interfaces and inheritance levels, and the determination of dependency relations have a serious impact on the reusability and flexibility of the system. Given a concrete design problem, how to select the right solution from hundreds of design schemas has become a focus of designers' attention. After analyzing many practical software design schemas and Object-Oriented design patterns, this paper constructs a dependence matrix for the Object-Oriented software design field, referring to the contradiction matrix of TRIZ (Theory of Inventive Problem Solving) proposed by the Soviet innovation master Altshuller. As practice indicates, it provides an intuitive, common, and standardized method for designers to choose the right design schema, makes research and communication more effective, and also improves software development efficiency and software quality.
Fine-granularity inference and estimations to network traffic for SDN
Huo, Liuwei; Li, Ya
2018-01-01
An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end traffic matrix is a challenging problem. Moreover, attaining the traffic matrix in high-speed networks for SDN is a prohibitive challenge. This paper investigates how to estimate and recover the end-to-end network traffic matrix in fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, the cubic spline interpolation method is used to obtain smooth reconstruction values. To attain an accurate estimate of the end-to-end network traffic in fine time granularity, we perform a weighted-geometric-average process on the two interpolation results. The simulation results show that our approaches are feasible and effective. PMID:29718913
Preventing Spacecraft Failures Due to Tribological Problems
NASA Technical Reports Server (NTRS)
Fusaro, Robert L.
2001-01-01
Many mechanical failures that occur on spacecraft are caused by tribological problems. This publication presents a study conducted by the author on various preventatives, analyses, controls and tests (PACTs) that could be used to prevent spacecraft mechanical system failure. A matrix is presented in the paper that plots tribology failure modes against the various PACTs that should be performed before a spacecraft is launched in order to ensure success. A strawman matrix was constructed by the author and then sent out to industry and government spacecraft designers, scientists and builders of spacecraft for their input. The final matrix is the result of their input. In addition to the matrix, this publication describes the various PACTs that can be performed and some fundamental knowledge on the correct usage of lubricants for spacecraft applications. Even though the work was done specifically to prevent spacecraft failures, the basic methodology can be applied to other mechanical system areas.
Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure
Hill, Mary C.
1990-01-01
The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
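The preconditioned conjugate-gradient iteration can be sketched on a 5-point Laplacian of the kind that arises from discretizing 2-D groundwater flow. For brevity a simple Jacobi (diagonal) preconditioner stands in for the incomplete-Cholesky and polynomial preconditioners compared in the paper, and the dense matrix assembly is for illustration only:

```python
import numpy as np

def poisson2d(n):
    """5-point Laplacian on an n x n grid (dense, for illustration only)."""
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0
    return A

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=500):
    """Conjugate gradients with a diagonal preconditioner M^{-1} = M_inv_diag."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it

A = poisson2d(10)
b = np.ones(100)
x, iters = pcg(A, b, 1.0 / np.diag(A))
```

Swapping `M_inv_diag` for an incomplete-Cholesky solve is what distinguishes ICCG and MICCG from this minimal variant.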
Risk Management using Dependency Structure Matrix
NASA Astrophysics Data System (ADS)
Petković, Ivan
2011-09-01
An efficient method based on dependency structure matrix (DSM) analysis is given for ranking risks in a complex system or process whose entities are mutually dependent. This rank is determined according to the element's values of the unique positive eigenvector which corresponds to the matrix spectral radius modeling the considered engineering system. For demonstration, the risk problem of NASA's robotic spacecraft is analyzed.
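The ranking step can be sketched with power iteration, which converges to the positive Perron eigenvector of a nonnegative irreducible DSM; the matrix entries below are hypothetical dependency strengths, not data from the NASA example:

```python
import numpy as np

# dependency structure matrix: entry (i, j) = strength of risk i's dependence on risk j
D = np.array([
    [0.0, 0.8, 0.2, 0.0],
    [0.4, 0.0, 0.6, 0.3],
    [0.1, 0.5, 0.0, 0.7],
    [0.2, 0.1, 0.3, 0.0],
])

v = np.ones(len(D))
for _ in range(200):                 # power iteration -> Perron eigenvector
    v = D @ v
    v /= np.linalg.norm(v)

ranking = np.argsort(v)[::-1]        # most critical risk first
```

By Perron-Frobenius theory the limiting vector is strictly positive, so every risk receives a well-defined rank.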
Fabrication of metal matrix composites by powder metallurgy: A review
NASA Astrophysics Data System (ADS)
Manohar, Guttikonda; Dey, Abhijit; Pandey, K. M.; Maity, S. R.
2018-04-01
Nowadays metal matrix composites are used in many industries and find applications in many fields, so there is a need to improve their mechanical properties. As seen in previous studies, the major problems faced by MMCs are wetting and interface bonding between the reinforcement and the matrix material when they are prepared by conventional methods such as stir casting, squeeze casting, and other techniques using liquid molten metal. Many researchers therefore adopt powder metallurgy (PM) to eliminate these defects and to increase the mechanical properties of the composites. Powder metallurgy is one of the better ways to prepare composites and nanocomposites. Another major problem with the conventional methods is achieving a uniform distribution of the reinforcement particles in the matrix alloy; many researchers have tried to disperse reinforcements homogeneously in the matrix but found it difficult with conventional methods, and among the alternatives ultrasonic dispersion is efficient. This review article concentrates on the importance of powder metallurgy for the homogeneous distribution of reinforcement in the matrix by ball milling or mechanical milling, and on how powder metallurgy improves the mechanical properties of the composites.
IOL calculation using paraxial matrix optics.
Haigis, Wolfgang
2009-07-01
Matrix methods have a long tradition in paraxial physiological optics. They are especially suited to describe and handle optical systems in a simple and intuitive manner. While these methods are more and more applied to calculate the refractive power(s) of toric intraocular lenses (IOL), they are hardly used in routine IOL power calculations for cataract and refractive surgery, where analytical formulae are commonly utilized. Since these algorithms are also based on paraxial optics, matrix optics can offer rewarding approaches to standard IOL calculation tasks, as will be shown here. Some basic concepts of matrix optics are introduced and the system matrix for the eye is defined, and its application in typical IOL calculation problems is illustrated. Explicit expressions are derived to determine: predicted refraction for a given IOL power; necessary IOL power for a given target refraction; refractive power for a phakic IOL (PIOL); predicted refraction for a thick lens system. Numerical examples with typical clinical values are given for each of these expressions. It is shown that matrix optics can be applied in a straightforward and intuitive way to most problems of modern routine IOL calculation, in thick or thin lens approximation, for aphakic or phakic eyes.
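The system-matrix approach can be sketched with 2x2 refraction and translation matrices acting on a (reduced angle, height) ray vector. The corneal power, IOL power, distances, and refractive index below are illustrative clinical-order values, not figures from the paper:

```python
import numpy as np

def refraction(power_d):
    """Refraction matrix acting on (reduced angle, height); power in diopters."""
    return np.array([[1.0, -power_d], [0.0, 1.0]])

def translation(d_m, n=1.336):
    """Translation over d_m meters in a medium of refractive index n."""
    return np.array([[1.0, 0.0], [d_m / n, 1.0]])

# illustrative pseudophakic eye: 43 D cornea, 21 D IOL 5 mm behind it,
# retina 18.5 mm behind the IOL; matrices multiply right-to-left along the ray
S = translation(0.0185) @ refraction(21.0) @ translation(0.005) @ refraction(43.0)
equivalent_power = -S[0, 1]          # equivalent power of the whole eye, in diopters
```

For an incoming parallel ray (zero angle, unit height), the output height at the retina is `S[1, 1]`; with these numbers it is nearly zero, i.e. the model eye is close to emmetropic, which is exactly the kind of quantity the explicit IOL-calculation expressions extract from the system matrix.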
Finite-element grid improvement by minimization of stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.
1989-01-01
A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.
Finite-element grid improvement by minimization of stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.
1987-01-01
A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.
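The trace-minimization idea can be sketched for a two-element discretization of a tapered bar, with the interior node location as the grid variable. The linear taper and the mean-area element stiffness below are assumed simplifications, not the Prager-Masur problem data:

```python
import numpy as np

E, L = 1.0, 1.0
area = lambda x: 1.0 - 0.5 * x / L          # assumed linearly tapered cross-section

def element_k(xa, xb):
    """Axial stiffness of a two-node bar element using the mid-span area."""
    h = xb - xa
    return E * area(0.5 * (xa + xb)) / h

def trace_K(x1):
    """Trace of the assembled 3x3 stiffness matrix for node layout (0, x1, L)."""
    k1, k2 = element_k(0.0, x1), element_k(x1, L)
    return 2.0 * (k1 + k2)                   # each element adds k_e to two diagonal entries

xs = np.linspace(0.05, 0.95, 181)
best = xs[np.argmin([trace_K(x) for x in xs])]   # trace-minimizing node location
```

The minimizing node sits closer to the thin end of the bar than the uniform-grid midpoint, showing how the criterion shifts nodes toward regions of lower stiffness.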
2014-01-07
this can have a disastrous effect on convergence rate. Even if steady state is obtained for low Mach number flows (after many iterations), the results...rally lead to a diagonally dominant left-hand-side matrix, which causes stability problems for implicit Gauss-Seidel schemes. For this reason, matrix... convergence at the stagnation point. The iterations for each airfoil are also reported in Fig. 2. Without preconditioning, dramatic efficiency problems are seen
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and insufficient data are available to justify augmenting the model.
A simple suboptimal least-squares algorithm for attitude determination with multiple sensors
NASA Technical Reports Server (NTRS)
Brozenec, Thomas F.; Bender, Douglas J.
1994-01-01
Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most.
If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to less than or equal to 3 measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
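The unconstrained least-squares step can be sketched directly: solve min ||A V - W||_F for the attitude matrix A without imposing orthogonality. The reference vectors and noise level below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# true attitude: rotation about z by 0.2 rad
t = 0.2
A_true = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])

V = rng.standard_normal((3, 6))                        # reference unit vectors (columns)
V /= np.linalg.norm(V, axis=0)
W = A_true @ V + 1e-3 * rng.standard_normal(V.shape)   # noisy body-frame measurements

# unconstrained least squares: A V = W  <=>  V^T A^T = W^T, solved by QR internally
A_ls = np.linalg.lstsq(V.T, W.T, rcond=None)[0].T
```

With sensor noise far smaller than the attitude rotation, the solution is close to the true rotation and nearly orthogonal, matching the abstract's observation that the constraint is only weakly violated in practice.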
A synoptic approach for analyzing erosion as a guide to land-use planning
Brown, William M.; Hines, Walter G.; Rickert, David A.; Beach, Gary L.
1979-01-01
A synoptic approach has been devised to delineate the relationships that exist between physiographic factors, land-use activities, and resultant erosional problems. The approach involves the development of an erosional-depositional province map and a numerical impact matrix for rating the potential for erosional problems. The province map is prepared by collating data on the natural terrain factors that exert the dominant controls on erosion and deposition in each basin. In addition, existing erosional and depositional features are identified and mapped from color-infrared, high-altitude aerial imagery. The axes of the impact matrix are composed of weighting values for the terrain factors used in developing the map and by a second set of values for the prevalent land-use activities. The body of the matrix is composed of composite erosional-impact ratings resulting from the product of the factor sets. Together the province map and problem matrix serve as practical tools for estimating the erosional impact of human activities on different types of terrain. The approach has been applied to the Molalla River basin, Oregon, and has proven useful for the recognition of problem areas. The same approach is currently being used by the State of Oregon (in the 208 assessment of nonpoint-source pollution under Public Law 92-500) to evaluate the impact of land-management practices on stream quality.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
Remote Sensing of Environmental Pollution
NASA Technical Reports Server (NTRS)
North, G. W.
1971-01-01
Environmental pollution is a problem of international scope and concern. It can be subdivided into problems relating to water, air, or land pollution. Many of the problems in these three categories lend themselves to study and possible solution by remote sensing. Through the use of remote sensing systems and techniques, it is possible to detect and monitor, and in some cases, identify, measure, and study the effects of various environmental pollutants. As a guide for making decisions regarding the use of remote sensors for pollution studies, a special five-dimensional sensor/applications matrix has been designed. The matrix defines an environmental goal, ranks the various remote sensing objectives in terms of their ability to assist in solving environmental problems, lists the environmental problems, ranks the sensors that can be used for collecting data on each problem, and finally ranks the sensor platform options that are currently available.
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm with finite-difference derivatives is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
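The variable projection functional can be sketched for a double-exponential model: the linear coefficients are eliminated by least squares inside the residual, leaving only the nonlinear decay rates. A crude grid search stands in for the Levenberg-Marquardt step, and the data are synthetic:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 80)
y = 2.0 * np.exp(-1.5 * t) + 0.5 * np.exp(-0.3 * t)   # noiseless double exponential

def varpro_residual(alphas):
    """Project the linear coefficients out: c = Phi^+ y, residual = ||y - Phi c||."""
    Phi = np.exp(-np.outer(t, alphas))                 # one basis column per decay rate
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.linalg.norm(y - Phi @ c), c

# crude search over the two nonlinear rates (a Levenberg-Marquardt step would go here)
grid = np.linspace(0.1, 2.0, 39)
best = min((varpro_residual([a1, a2])[0], a1, a2)
           for a1 in grid for a2 in grid if a1 < a2)
```

Because the linear parameters are solved exactly at every trial point, the search space is only two-dimensional, which is precisely the dimensionality reduction that variable projection buys.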
Cucheb: A GPU implementation of the filtered Lanczos procedure
NASA Astrophysics Data System (ADS)
Aurentz, Jared L.; Kalantzis, Vassilis; Saad, Yousef
2017-11-01
This paper describes the software package Cucheb, a GPU implementation of the filtered Lanczos procedure for the solution of large sparse symmetric eigenvalue problems. The filtered Lanczos procedure uses a carefully chosen polynomial spectral transformation to accelerate convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective for eigenvalue problems that arise in electronic structure calculations and density functional theory. We compare our implementation against an equivalent CPU implementation and show that using the GPU can reduce the computation time by more than a factor of 10. Program Summary Program title: Cucheb Program Files doi:http://dx.doi.org/10.17632/rjr9tzchmh.1 Licensing provisions: MIT Programming language: CUDA C/C++ Nature of problem: Electronic structure calculations require the computation of all eigenvalue-eigenvector pairs of a symmetric matrix that lie inside a user-defined real interval. Solution method: To compute all the eigenvalues within a given interval a polynomial spectral transformation is constructed that maps the desired eigenvalues of the original matrix to the exterior of the spectrum of the transformed matrix. The Lanczos method is then used to compute the desired eigenvectors of the transformed matrix, which are then used to recover the desired eigenvalues of the original matrix. The bulk of the operations are executed in parallel using a graphics processing unit (GPU). Runtime: Variable, depending on the number of eigenvalues sought and the size and sparsity of the matrix. Additional comments: Cucheb is compatible with CUDA Toolkit v7.0 or greater.
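The spectral-transformation idea can be sketched with a simple quadratic polynomial in place of the package's carefully chosen Chebyshev filter, and with a dense symmetric eigensolver standing in for the Lanczos iteration; the matrix, interval center, and subspace size are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((60, 60)))
evals = np.linspace(0.0, 10.0, 60)
A = Q @ np.diag(evals) @ Q.T             # symmetric test matrix with known spectrum

shift = 5.05                              # center of the interval of interest
# quadratic spectral transform: eigenvalues nearest the shift become the largest of B
B = -(A - shift * np.eye(60)) @ (A - shift * np.eye(60))

w, U = np.linalg.eigh(B)                  # Lanczos would be used for large sparse A
X = U[:, -5:]                             # eigenvectors for the 5 eigenvalues nearest the shift
recovered = np.sort(np.diag(X.T @ A @ X)) # Rayleigh quotients recover the original eigenvalues
```

Since polynomials of A share A's eigenvectors, the transformed problem's dominant eigenvectors are exactly the interior eigenvectors sought, and the original eigenvalues are read off from Rayleigh quotients.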
Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.G.
2011-07-01
A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It will be proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: the linear least-squares adjustment formula that has been used in the past remains valid in the case of a singular covariance matrix of prior parameters, and it provides a unique solution. Statements in the literature to the effect that the problem is ill-posed are wrong; no regularization of the problem is required. This has been proved in the present paper by two methods, while explicitly assuming that the covariance matrix of prior parameters is singular: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. No modification is needed to the adjustment formulae that have been used in the past. (author)
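The point can be illustrated in a few lines (an illustrative 2-parameter example with a deliberately singular prior covariance; the numbers are assumptions, not from the paper): the standard adjustment formula only ever inverts the innovation covariance, never the prior covariance itself.

```python
import numpy as np

# Prior parameters x0 with a singular covariance C (rank 1: perfectly
# correlated priors), responses y = G x with measurement covariance V.
x0 = np.array([1.0, 2.0])
C = np.array([[1.0, 1.0],
              [1.0, 1.0]])            # singular prior covariance
G = np.array([[1.0, 0.0]])
V = np.array([[0.1]])
y = np.array([1.5])

# The adjustment needs only (G C G^T + V)^{-1}, never C^{-1},
# so a singular prior covariance is harmless.
K = C @ G.T @ np.linalg.inv(G @ C @ G.T + V)
x1 = x0 + K @ (y - G @ x0)
C1 = C - K @ G @ C                    # updated (still PSD) covariance
```

Because the priors are perfectly correlated, the single measurement of the first parameter updates both parameters by the same amount, and the solution is unique.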
Shadow poles in coupled-channel problems calculated with the Berggren basis
NASA Astrophysics Data System (ADS)
Id Betan, R. M.; Kruppa, A. T.; Vertse, T.
2018-02-01
Background: In coupled-channels models the poles of the scattering S matrix are located on different Riemann sheets. Physical observables are affected mainly by poles closest to the physical region but sometimes shadow poles have considerable effect too. Purpose: The purpose of this paper is to show that in coupled-channels problems all poles of the S matrix can be located by an expansion in terms of a properly constructed complex-energy basis. Method: The Berggren basis is used for expanding the coupled-channels solutions. Results: The locations of the poles of the S matrix for the Cox potential, constructed for coupled-channels problems, were numerically calculated and compared with the exact ones. In a nuclear physics application the Jπ=3 /2+ resonant poles of 5He were calculated in a phenomenological two-channel model. The properties of both the normal and shadow resonances agree with previous findings. Conclusions: We have shown that, with an appropriately chosen Berggren basis, all poles of the S matrix including the shadow poles can be determined. We have found that the shadow pole of 5He migrates between Riemann sheets if the coupling strength is varied.
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content in a stochastic inversion. In the deterministic approach, the corresponding object is referred to as the model resolution matrix (MRM, Menke 1989). The analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degree of freedom from signal (DFS; stochastic) or degree of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES13. The stochastic information content calculation is based on a linear assumption; the validity of such mathematics in satellite inversion will be questioned because the underlying problem involves nonlinear radiative transfer and ill-conditioned inverse problems. References: Menke, W., 1989: Geophysical data analysis: discrete inverse theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse methods for atmospheric soundings: theory and practice. Singapore: World Scientific.
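For concreteness, the quantity under discussion can be sketched for a regularized linear retrieval (illustrative dimensions and covariances; not the GOES13 problem):

```python
import numpy as np

# Linearized forward model y = K x; Se: measurement noise covariance,
# Sa: prior covariance. All values here are illustrative assumptions.
rng = np.random.default_rng(1)
K = rng.standard_normal((8, 4))       # Jacobian: 8 channels, 4 state vars
Se_inv = np.eye(8) / 0.01             # inverse noise covariance
Sa_inv = np.eye(4) / 1.0              # inverse prior covariance

# Averaging kernel (stochastic view) / model resolution matrix
# (deterministic view):  A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K
KtK = K.T @ Se_inv @ K
A = np.linalg.solve(KtK + Sa_inv, KtK)
dfs = np.trace(A)                     # "degrees of freedom for signal"
```

Each eigenvalue of A lies strictly between 0 and 1, so the trace is bounded by the state dimension; the abstract's point is that this number alone says little about retrieval error in a nonlinear, ill-conditioned setting.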
Expendable launch vehicle studies
NASA Technical Reports Server (NTRS)
Bainum, Peter M.; Reiss, Robert
1995-01-01
Analytical support studies of expendable launch vehicles concentrate on the stability of the dynamics during launch, especially in or near the region of maximum dynamic pressure. The in-plane dynamic equations of a generic launch vehicle with multiple flexible bending and fuel sloshing modes are developed and linearized. The information from LeRC about the grids, masses, and modes is incorporated into the model. The eigenvalues of the plant are analyzed for several modeling factors: utilizing a diagonal mass matrix, the uniform beam assumption, inclusion of aerodynamics, and the interaction between the aerodynamics and the flexible bending motion. Preliminary PID, LQR, and LQG control designs with sensor and actuator dynamics for this system and simulations are also conducted. The initial analysis comparing PD (proportional-derivative) and full-state-feedback LQR (Linear Quadratic Regulator) control shows that the split weighted LQR controller has better performance than the PD. In order to meet both the performance and robustness requirements, an H(sub infinity) robust controller for the expendable launch vehicle is developed. The simulation indicates that both the performance and robustness of the H(sub infinity) controller are better than those of the PID and LQG controllers. The modelling and analysis support studies team has continued development of methodology, using eigensensitivity analysis, to solve three classes of discrete eigenvalue equations. In the first class, the matrix elements are nonlinear functions of the eigenvector. All nonlinear periodic motion can be cast in this form. Here the eigenvector comprises the coefficients of complete basis functions spanning the response space, and the eigenvalue is the frequency. The second class of eigenvalue problems studied is the quadratic eigenvalue problem. Solutions for linear viscously damped structures or viscoelastic structures can be reduced to this form.
Particular attention is paid to Maxwell and Kelvin models. The third class of problems consists of linear eigenvalue problems in which the elements of the mass and stiffness matrices are stochastic. Dynamic structural response for which the parameters are given by probabilistic distribution functions, rather than deterministic values, can be cast in this form. Solutions for several problems in each class will be presented.
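The LQR design step mentioned above can be sketched on a toy model (a rigid-body double integrator standing in for the full flexible-vehicle dynamics; A, B, Q, R are illustrative assumptions, not the study's model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy pitch-axis model: a double integrator x'' = u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting (heavier on position)
R = np.array([[1.0]])      # control weighting

# Solve the continuous algebraic Riccati equation, then form the
# full-state feedback gain u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
cl_eigs = np.linalg.eigvals(A - B @ K)   # closed-loop poles
```

The closed-loop eigenvalues all move into the open left half-plane, which is the property compared against PD and H-infinity designs in the study.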
FPGA-based coprocessor for matrix algorithms implementation
NASA Astrophysics Data System (ADS)
Amira, Abbes; Bensaali, Faycal
2003-03-01
Matrix algorithms are important in many types of applications, including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these, and related, applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication, which has complexity O(N^3) on a sequential computer and O(N^3/p) on a parallel system with p processors. This paper presents an investigation into the design and implementation of different matrix algorithms such as matrix operations, matrix transforms and matrix decompositions using an FPGA-based environment. Solutions for the problem of processing large matrices have been proposed. The proposed system architectures are scalable, modular, and require less area and time complexity with reduced latency when compared with existing structures.
Complete spatiotemporal characterization and optical transfer matrix inversion of a 420 mode fiber.
Carpenter, Joel; Eggleton, Benjamin J; Schröder, Jochen
2016-12-01
The ability to measure a scattering medium's optical transfer matrix, the mapping between any spatial input and output, has enabled applications such as imaging to be performed through media which would otherwise be opaque due to scattering. However, the scattering of light occurs not just in space, but also in time. We complete the characterization of scatter by extending optical transfer matrix methods into the time domain, allowing any spatiotemporal input state at one end to be mapped directly to its corresponding spatiotemporal output state. We have measured the optical transfer function of a multimode fiber in its entirety; it consists of 420 modes in/out at 32768 wavelengths, the most detailed complete characterization of multimode waveguide light propagation to date, to the best of our knowledge. We then demonstrate the ability to generate any spatial/polarization state at the output of the fiber at any wavelength, as well as predict the temporal response of any spatial/polarization input state.
An algorithm for solving an arbitrary triangular fully fuzzy Sylvester matrix equations
NASA Astrophysics Data System (ADS)
Daud, Wan Suhana Wan; Ahmad, Nazihah; Malkawi, Ghassan
2017-11-01
Sylvester matrix equations played a prominent role in various areas including control theory. Considering to any un-certainty problems that can be occurred at any time, the Sylvester matrix equation has to be adapted to the fuzzy environment. Therefore, in this study, an algorithm for solving an arbitrary triangular fully fuzzy Sylvester matrix equation is constructed. The construction of the algorithm is based on the max-min arithmetic multiplication operation. Besides that, an associated arbitrary matrix equation is modified in obtaining the final solution. Finally, some numerical examples are presented to illustrate the proposed algorithm.
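For reference, the crisp (non-fuzzy) Sylvester equation underlying such systems can be solved directly (illustrative matrices; this is standard SciPy, not the authors' fuzzy max-min algorithm, which reduces the triangular fully fuzzy system to crisp equations of this form):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Crisp Sylvester equation A X + X B = C (solvable because no eigenvalue
# of A is the negative of an eigenvalue of B).
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
B = np.array([[4.0, 1.0],
              [0.0, 5.0]])
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
X = solve_sylvester(A, B, C)
```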
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast sparse null-space pursuit (Fast-SNP)—inspired by recent results on SNP. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
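The object Fast-SNP sparsifies is a null-space ("loop-law") basis of the internal stoichiometry. A minimal sketch of the starting point on a toy stoichiometric matrix (the matrix is an illustration; only the dense SVD-based basis is shown, not the sparsification itself):

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix S (metabolites x reactions). Internal loops
# are flux vectors v with S v = 0; loop-law constraints are built from a
# basis of this null space, which Fast-SNP then makes sparse and small.
S = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [-1,  0,  1,  0]], dtype=float)   # rank 2
N = null_space(S)          # columns span {v : S v = 0}
```

Here the third row is minus the sum of the first two, so the null space is two-dimensional; a sparser basis of the same space yields a smaller, equivalent MILP.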
Izquierdo-Sotorrío, Eva; Holgado-Tello, Francisco P.; Carrasco, Miguel Á.
2016-01-01
This study examines the relationships between perceived parental acceptance and children’s behavioral problems (externalizing and internalizing) from a multi-informant perspective. Using mothers, fathers, and children as sources of information, we explore the informant effect and incremental validity. The sample was composed of 681 participants (227 children, 227 fathers, and 227 mothers). Children’s (40% boys) ages ranged from 9 to 17 years (M = 12.52, SD = 1.81). Parents and children completed both the Parental Acceptance Rejection/Control Questionnaire (PARQ/Control) and the check list of the Achenbach System of Empirically Based Assessment (ASEBA). Statistical analyses were based on the correlated uniqueness multitrait-multimethod matrix (model MTMM) by structural equations and different hierarchical regression analyses. Results showed a significant informant effect and a different incremental validity related to which combination of sources was considered. A multi-informant perspective rather than a single one increased the predictive value. Our results suggest that mother–father or child–father combinations seem to be the best way to optimize the multi-informant method in order to predict children’s behavioral problems based on perceived parental acceptance. PMID:27242582
Yang, Chifu; Zhao, Jinsong; Li, Liyi; Agrawal, Sunil K
2018-01-01
A robotic spine brace based on a parallel-actuated robotic system is a new device for the treatment and sensing of scoliosis; however, the strong dynamic coupling and anisotropy of parallel manipulators cause accuracy loss in rehabilitation force control, including large errors in both the direction and magnitude of the force. A novel active force control strategy named modal space force control is proposed to solve these problems. Considering the electrically driven system and the contact environment, the mathematical model of the spatial parallel manipulator is built. The strong dynamic coupling problem in the force field is described via experiments, as is the anisotropy of the workspace of parallel manipulators. The effects of dynamic coupling on control design and performance are discussed, and the influence of anisotropy on accuracy is also addressed. With the mass/inertia matrix and stiffness matrix of the parallel manipulator, a modal matrix can be calculated by eigenvalue decomposition. Making use of the orthogonality of the modal matrix with the mass matrix, the strongly coupled dynamic equations expressed in the work space or joint space of the parallel manipulator may be transformed into decoupled equations formulated in modal space. By this property, each force control channel is independent of the others in modal space; we therefore propose the modal space force control concept, in which the force controller is designed in modal space. A modal space active force control is designed and implemented with only a simple PID controller employed as an example control method to show the differences, uniqueness, and benefits of modal space force control.
Simulation and experimental results show that the proposed modal space force control concept can effectively overcome the effects of the strong dynamic coupling and anisotropy problem in the physical space, and modal space force control is thus a very useful control framework, which is better than the current joint space control and work space control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
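The modal decoupling step described above can be sketched with a generalized eigendecomposition (a hypothetical 2-DOF system; SciPy's `eigh` mass-normalizes the modal matrix, which is exactly the orthogonality property the method exploits):

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative coupled 2-DOF system: M x'' + K x = f.
M = np.array([[2.0, 0.3],
              [0.3, 1.0]])           # mass/inertia matrix
K = np.array([[ 50.0, -10.0],
              [-10.0,  30.0]])       # stiffness matrix

# Generalized eigenproblem K Phi = M Phi diag(w2); eigh normalizes so
# that Phi^T M Phi = I, hence Phi^T K Phi = diag(w2).
w2, Phi = eigh(K, M)
Mm = Phi.T @ M @ Phi                 # -> identity
Km = Phi.T @ K @ Phi                 # -> diagonal: decoupled modal equations
```

In modal coordinates each equation (and hence each force control channel) is independent, so a simple per-channel controller can be designed without fighting the physical-space coupling.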
Fast polar decomposition of an arbitrary matrix
NASA Technical Reports Server (NTRS)
Higham, Nicholas J.; Schreiber, Robert S.
1988-01-01
The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
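The matrix-inversion-based Newton iteration can be sketched directly (square full-rank case only, without the scaling, the complete orthogonal decomposition for rank-deficient A, or the hybrid switching the paper develops):

```python
import numpy as np

def polar_newton(A, iters=30):
    # Newton iteration for the orthogonal polar factor of a square,
    # full-rank A:  X_{k+1} = (X_k + X_k^{-T}) / 2,  X_0 = A.
    # Quadratically convergent near the solution.
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X).T)
    U = X                       # orthogonal factor
    H = U.T @ A                 # symmetric positive semi-definite factor
    return U, H

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))     # nonsingular with probability 1
U, H = polar_newton(A)              # A = U H
```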
Conditioned invariant subspaces, disturbance decoupling and solutions of rational matrix equations
NASA Technical Reports Server (NTRS)
Li, Z.; Sastry, S. S.
1986-01-01
Conditioned invariant subspaces are introduced both in terms of output injection and in terms of state estimation. Various properties of these subspaces are explored, and the problem of disturbance decoupling by output injection (OIP) is defined. It is then shown that OIP is equivalent to the problem of disturbance decoupled estimation as introduced in Willems (1982) and Willems and Commault (1980). Both solvability conditions and a description of solutions for a class of rational matrix equations of the form X(s)M(s) = Q(s) are given in several ways in state-space form. Finally, the problem of output stabilization with respect to a disturbance is briefly addressed.
Density-matrix-based algorithm for solving eigenvalue problems
NASA Astrophysics Data System (ADS)
Polizzi, Eric
2009-03-01
A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
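A toy NumPy sketch of the contour-integration idea behind FEAST (dense and far simpler than the actual algorithm; the matrix, interval, quadrature rule, and subspace size are all illustrative assumptions): integrating the resolvent around a circle enclosing the target interval approximates the spectral projector, and Rayleigh-Ritz in the filtered subspace recovers the enclosed eigenvalues.

```python
import numpy as np

# Small dense stand-in: symmetric A with eigenvalues 1..10; we seek the
# eigenvalues inside (3.5, 6.5), i.e. 4, 5 and 6.
rng = np.random.default_rng(3)
Qo, _ = np.linalg.qr(rng.standard_normal((10, 10)))
A = Qo @ np.diag(np.arange(1.0, 11.0)) @ Qo.T
emin, emax = 3.5, 6.5

m0, npts = 4, 16                      # subspace size, quadrature points
Y = rng.standard_normal((10, m0))     # random starting block
ctr, rad = (emin + emax) / 2, (emax - emin) / 2
Z = np.zeros((10, m0))
for j in range(npts):                 # trapezoid rule on the circle
    th = 2.0 * np.pi * (j + 0.5) / npts
    z = ctr + rad * np.exp(1j * th)
    # Accumulate the quadrature of the resolvent applied to Y.
    Z += np.real(rad * np.exp(1j * th)
                 * np.linalg.solve(z * np.eye(10) - A, Y)) / npts

Qs, _ = np.linalg.qr(Z)               # basis of the filtered subspace
ritz = np.linalg.eigvalsh(Qs.T @ A @ Qs)
# Three of the Ritz values closely approximate 4, 5 and 6; the spare
# subspace dimension produces one spurious value, discarded in practice
# by a residual check.
```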
Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments
2013-12-11
Gossip-based algorithms for cooperative positioning and a gossip version of low-rank approximation were developed, together with a convex relaxation for positioning in the presence of noise. A new algorithm approximates a large data matrix through gossip: it amounts to iteratively multiplying a vector by independent random sparsifications of the original matrix and averaging the resulting normalized vectors, which can be viewed as a generalization of gossip algorithms for consensus.
Gas and Liquid Permeability Measurements in Wolfcamp Samples
NASA Astrophysics Data System (ADS)
Bhandari, A. R.; Flemings, P. B.; Ramiro-Ramirez, S.; Polito, P. J.
2017-12-01
Argon gas and liquid (dodecane) permeability measurements in three mixed-quality Wolfcamp samples demonstrate that it is possible to close multiple bedding-parallel open artificial micro-fractures and obtain representative matrix permeability by applying two confining stress cycles at a constant pore pressure under effective stresses ranging from 6.9 MPa to 59.7 MPa. The fractured sample (with no bridging cement in its fractures) exhibited a three-order-of-magnitude decrease in permeability, from 4.4×10-17 m2 to 2.1×10-20 m2. In contrast, the most intact sample exhibited an initial liquid permeability of 1.61×10-19 m2 that declined gradually to 2.0×10-20 m2 over the same effective stress range. A third sample, which contained a bridging-cement (gypsum) fracture, exhibited a much higher initial liquid permeability of 2.8×10-15 m2 that declined gradually to 1.3×10-17 m2 with stress; this suggests that it is difficult to close partially cemented fractures and that the permeability we measured was impacted by the presence of a propped fracture rather than the matrix. We developed a new permeability testing protocol and analytical approaches to interpret the evolution of fractures and resolve the matrix permeability, using matrix permeability estimates based on initial pulse-decay gas permeability measurements at an effective stress of 6.9 MPa. The tested samples are an argillaceous siliceous siltstone facies within the Wolfcamp Formation. A better understanding of permeability will lead to new approaches to determine the best completion and production strategies and, more importantly, to reduce the high water cut problem in Wolfcamp reservoirs.
Matrix multiplication on the Intel Touchstone Delta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huss-Lederman, S.; Jacobson, E.M.; Tsao, A.
1993-12-31
Matrix multiplication is a key primitive in block matrix algorithms such as those found in LAPACK. We present results from our study of matrix multiplication algorithms on the Intel Touchstone Delta, a distributed memory message-passing architecture with a two-dimensional mesh topology. We obtain an implementation that uses communication primitives highly suited to the Delta and exploits the single-node assembly-coded matrix multiplication. Our algorithm is completely general, able to deal with arbitrary mesh aspect ratios and matrix dimensions, and has achieved parallel efficiency of 86% with overall peak performance in excess of 8 Gflops on 256 nodes for an 8800 × 8800 matrix. We describe our algorithm design and implementation, and present performance results that demonstrate scalability and robust behavior over varying mesh topologies.
The program LOPT for least-squares optimization of energy levels
NASA Astrophysics Data System (ADS)
Kramida, A. E.
2011-02-01
The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem. Program summaryProgram title: LOPT Catalogue identifier: AEHM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19 254 No. of bytes in distributed program, including test data, etc.: 427 839 Distribution format: tar.gz Programming language: Perl v.5 Computer: PC, Mac, Unix workstations Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX) RAM: 3 Mwords or more Word size: 32 or 64 Classification: 2.2 Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals. Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution. Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N and depends on the computer. Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision. Running time: 1 s for N=100, or 60 s for N=400 on a typical PC.
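A minimal sketch of the least-squares level-optimization idea (a hypothetical 3-level system with the ground level fixed at zero; LOPT itself uses a variable-substitution construction, careful rounding, and uncertainty estimation not shown here):

```python
import numpy as np

# Measured transition wavenumbers (upper, lower, value, uncertainty)
# for a hypothetical 3-level system; level 0 is the reference (E0 = 0).
obs = [(1, 0, 100.02, 0.02),
       (2, 0, 250.00, 0.05),
       (2, 1, 149.95, 0.03)]

n_free = 2                                   # free levels: E1, E2
A = np.zeros((len(obs), n_free))
b = np.zeros(len(obs))
w = np.zeros(len(obs))
for i, (u, l, nu, sig) in enumerate(obs):
    if u > 0: A[i, u - 1] += 1.0             # wavenumber = E_upper - E_lower
    if l > 0: A[i, l - 1] -= 1.0
    b[i], w[i] = nu, 1.0 / sig**2            # weight = 1/uncertainty^2

# Weighted normal equations; the covariance matrix of the solution gives
# the level (and hence Ritz wavenumber) uncertainties.
N = A.T @ (w[:, None] * A)
E = np.linalg.solve(N, A.T @ (w * b))
cov = np.linalg.inv(N)
ritz_21 = E[1] - E[0]                        # Ritz wavenumber for 2 -> 1
```

The three measurements are mutually inconsistent at the 0.03 level, and the weighted fit distributes that discrepancy according to the stated uncertainties.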
A Strassen-Newton algorithm for high-speed parallelizable matrix inversion
NASA Technical Reports Server (NTRS)
Bailey, David H.; Ferguson, Helaman R. P.
1988-01-01
Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
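The Newton-type iteration at the heart of such schemes can be sketched in plain NumPy (the starting guess and test matrix are illustrative; the Strassen-accelerated multiplications and stability refinements of the paper are not reproduced):

```python
import numpy as np

def newton_inverse(A, iters=60):
    # Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), which is rich
    # in matrix multiplications and therefore well suited to massively
    # parallel machines and Strassen-type multiplication.
    n = A.shape[0]
    # Classic starting guess guaranteeing convergence for nonsingular A.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I2 = 2.0 * np.eye(n)
    for _ in range(iters):
        X = X @ (I2 - A @ X)
    return X

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # comfortably nonsingular
X = newton_inverse(A)
```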
A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.
Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott
2012-01-01
Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings are spatially sparse, compact, and smooth (SCS). To enforce these characteristics of solutions to the EEG inverse problem, we propose a correlation-variance model which factors a cortical source space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data with various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established SBL algorithm.
Comparison Of Models Of Metal-Matrix Composites
NASA Technical Reports Server (NTRS)
Bigelow, C. A.; Johnson, W. S.; Naik, R. A.
1994-01-01
Report presents comparative review of four mathematical models of micromechanical behavior of fiber/metal-matrix composite materials. Models differ in various details, but all are based on properties of fiber and matrix constituent materials, all involve square arrays of continuous, parallel fibers, and all assume complete bonding between constituents. Computer programs implementing models used to predict properties and stress-vs.-strain behaviors of unidirectional and cross-ply laminated composites made of boron fibers in aluminum matrices and silicon carbide fibers in titanium matrices. Stresses in fiber and matrix constituent materials also predicted.
On the Possibility of Ill-Conditioned Covariance Matrices in the First-Order Two-Step Estimator
NASA Technical Reports Server (NTRS)
Garrison, James L.; Axelrod, Penina; Kasdin, N. Jeremy
1997-01-01
The first-order two-step nonlinear estimator, when applied to a problem of orbital navigation, is found to occasionally produce first step covariance matrices with very low eigenvalues at certain trajectory points. This anomaly is the result of the linear approximation to the first step covariance propagation. The study of this anomaly begins with expressing the propagation of the first and second step covariance matrices in terms of a single matrix. This matrix is shown to have a rank equal to the difference between the number of first step states and the number of second step states. Furthermore, under some simplifying assumptions, it is found that the basis of the column space of this matrix remains fixed once the filter has removed the large initial state error. A test matrix containing the basis of this column space and the partial derivative matrix relating first and second step states is derived. This square test matrix, which has dimensions equal to the number of first step states, numerically drops rank at the same locations that the first step covariance does. It is formulated in terms of a set of constant vectors (the basis) and a matrix which can be computed from a reference trajectory (the partial derivative matrix). A simple example problem, involving dynamics described by two states and a range measurement, illustrates the cause of this anomaly and the application of the aforementioned numerical test in more detail.
Electromagnetic scattering calculations on the Intel Touchstone Delta
NASA Technical Reports Server (NTRS)
Cwik, Tom; Patterson, Jean; Scott, David
1992-01-01
During the first year's operation of the Intel Touchstone Delta system, software which solves the electric field integral equations for fields scattered from arbitrarily shaped objects has been transferred to the Delta. To fully realize the Delta's resources, an out-of-core dense matrix solution algorithm that utilizes some or all of the 90 Gbyte of concurrent file system (CFS) has been used. The largest calculation completed to date computes the fields scattered from a perfectly conducting sphere modeled by 48,672 unknown functions, resulting in a complex-valued dense matrix needing 37.9 Gbyte of storage. The out-of-core LU matrix factorization algorithm was executed in 8.25 h at a rate of 10.35 Gflops. Total time to complete the calculation was 19.7 h; the additional time was used to compute the 48,672 x 48,672 matrix entries, solve the system for a given excitation, and compute observable quantities. The calculation was performed in 64-bit precision.
Neutronic fuel element fabrication
Korton, George
2004-02-24
This disclosure describes a method for metallurgically bonding a complete leak-tight enclosure to a matrix-type fuel element penetrated longitudinally by a multiplicity of coolant channels. Coolant tubes containing solid filler pins are disposed in the coolant channels. A leak-tight metal enclosure is then formed about the entire assembly of fuel matrix, coolant tubes and pins. The completely enclosed and sealed assembly is exposed to a high temperature and pressure gas environment to effect a metallurgical bond between all contacting surfaces therein. The ends of the assembly are then machined away to expose the pin ends which are chemically leached from the coolant tubes to leave the coolant tubes with internal coolant passageways. The invention described herein was made in the course of, or under, a contract with the U.S. Atomic Energy Commission. It relates generally to fuel elements for neutronic reactors and more particularly to a method for providing a leak-tight metal enclosure for a high-performance matrix-type fuel element penetrated longitudinally by a multiplicity of coolant tubes. The planned utilization of nuclear energy in high-performance, compact-propulsion and mobile power-generation systems has necessitated the development of fuel elements capable of operating at high power densities. High power densities in turn require fuel elements having high thermal conductivities and good fuel retention capabilities at high temperatures. A metal clad fuel element containing a ceramic phase of fuel intimately mixed with and bonded to a continuous refractory metal matrix has been found to satisfy the above requirements. Metal coolant tubes penetrate the matrix to afford internal cooling to the fuel element while providing positive fuel retention and containment of fission products generated within the fuel matrix. 
Metal header plates are bonded to the coolant tubes at each end of the fuel element and a metal cladding or can completes the fuel-matrix enclosure by encompassing the sides of the fuel element between the header plates.
Kim, Hwan D.; Heo, Jiseung; Hwang, Yongsung; Kwak, Seon-Yeong; Park, Ok Kyu; Kim, Hyunbum; Varghese, Shyni
2015-01-01
Articular cartilage damage is a persistent and increasing problem with the aging population, and strategies to achieve complete repair or functional restoration remain a challenge. Photopolymerizing hydrogels have long received attention in cartilage tissue engineering due to their unique bioactivities, flexible methods of synthesis, range of constituents, and desirable physical characteristics. In the present study, we introduced unique bioactivity within photopolymerizing hydrogels by copolymerizing polyethylene glycol (PEG) macromers with methacrylated extracellular matrix (ECM) molecules (hyaluronic acid and chondroitin sulfate [CS]) and integrin-binding peptides (RGD peptide). Results indicate that cellular morphology, as observed from the actin cytoskeleton structures, was strongly dependent on the type of ECM component as well as the presence of integrin-binding moieties. Further, the CS-based hydrogel with integrin-binding RGD moieties increased lubricin (also known as superficial zone protein [SZP]) gene expression of the encapsulated chondrocytes. Additionally, the CS-based hydrogel displayed cell-responsive degradation and resulted in increased DNA, GAG, and collagen accumulation compared with the other hydrogels. This study demonstrates that integrin-mediated interactions within a CS microenvironment provide an optimal hydrogel scaffold for cartilage tissue engineering applications. PMID:25266634
Peak picking NMR spectral data using non-negative matrix factorization
2014-01-01
Background: Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlap, leading to significant peak-picking intensity errors, which can result in erroneous structural restraints. Precise frequencies are critical for unambiguous resonance assignments. Results: To alleviate this problem, a more sophisticated peak-decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. We produce peak shapes from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied if the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments. Conclusions: Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890), as well as to synthetic HSQC data, shows that peaks can be picked accurately even in spectral regions with strong overlap. PMID:24511909
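As a rough illustration of the underlying idea (not the authors' code), the classic Lee-Seung multiplicative updates can decompose two strongly overlapping synthetic peaks observed in two mixtures; all peak shapes and mixing weights below are made up.

```python
import numpy as np

# Synthetic 1D "spectrum": two strongly overlapping Gaussian peaks,
# observed in two mixtures with different intensities (a hypothetical
# stand-in for planes of a multidimensional NMR spectrum).
x = np.linspace(0, 10, 200)
g = lambda c, w: np.exp(-((x - c) / w) ** 2)
peaks = np.vstack([g(4.6, 0.8), g(5.4, 0.8)])      # true components
V = np.array([[1.0, 0.4], [0.3, 1.2]]) @ peaks      # observed mixtures

# Lee-Seung multiplicative updates for V ≈ W @ H with W, H >= 0;
# non-negativity is what keeps the recovered components peak-like.
rng = np.random.default_rng(1)
W = rng.random((2, 2))
H = rng.random((2, x.size))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err < 0.05)  # small relative reconstruction error
```

The rows of H play the role of decomposed peak shapes, from which positions and intensities of the overlapped peaks can be read off.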
Control of Surface Segregation in Bimetallic NiCr Nanoalloys Immersed in Ag Matrix
Bohra, Murtaza; Singh, Vidyadhar; Grammatikopoulos, Panagiotis; Toulkeridou, Evropi; Diaz, Rosa E.; Bobo, Jean-François; Sowwan, Mukhles
2016-01-01
Cr-surface segregation is a main roadblock encumbering many magneto-biomedical applications of bimetallic M-Cr nanoalloys (where M = Fe, Co and Ni). To overcome this problem, we developed a Ni95Cr5:Ag nanocomposite as a model system, consisting of non-interacting Ni95Cr5 nanoalloys (5 ± 1 nm) immersed in a non-magnetic Ag matrix by controlled simultaneous co-sputtering of Ni95Cr5 and Ag. We employed the Curie temperature (TC) as an indicator of phase purity for these nanocomposites; it is estimated to be around the bulk Ni95Cr5 value of 320 K. This confirms prevention of Cr segregation and also implies effective control of surface oxidation. Compared to Cr-segregated Ni95Cr5 nanoalloy films and nanoclusters, we did not observe any unwanted magnetic effects such as the presence of a Cr antiferromagnetic transition, large non-saturation, exchange bias behavior, or uncompensated higher TC values. These nanocomposite films also lose their unique magnetic properties only at elevated temperatures beyond application requirements (≥800 K), either by showing Ni-type behavior or by complete conversion into Ni/Cr oxides in vacuum and air environments, respectively. PMID:26750659
An extension of the finite cell method using boolean operations
NASA Astrophysics Data System (ADS)
Abedian, Alireza; Düster, Alexander
2017-05-01
In the finite cell method, the fictitious domain approach is combined with high-order finite elements. The geometry of the problem is taken into account by integrating the finite cell formulation over the physical domain to obtain the corresponding stiffness matrix and load vector. In this contribution, an extension of the FCM is presented wherein both the physical and fictitious domain of an element are simultaneously evaluated during the integration. In the proposed extension of the finite cell method, the contribution of the stiffness matrix over the fictitious domain is subtracted from the cell, resulting in the desired stiffness matrix which reflects the contribution of the physical domain only. This method results in an exponential rate of convergence for porous domain problems with a smooth solution and accurate integration. In addition, it reduces the computational cost, especially when applying adaptive integration schemes based on the quadtree/octree. Based on 2D and 3D problems of linear elastostatics, numerical examples serve to demonstrate the efficiency and accuracy of the proposed method.
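The subtraction idea can be sketched in a deliberately minimal 1D setting (a linear bar cell with a hypothetical cut position s, not the paper's 2D/3D formulation): the stiffness contribution of the fictitious part is integrated separately and subtracted from the full-cell integral, leaving exactly the physical contribution.

```python
import numpy as np

# Linear 1D bar cell on [0, 1]; physical material occupies [0, s],
# the remainder is fictitious. B-matrix of the linear shape functions.
B = np.array([[-1.0, 1.0]])
K_exact = lambda a, b: (b - a) * B.T @ B   # exact ∫_a^b B^T B dx

def gauss_integrate(a, b, npts=2):
    """Gauss-Legendre stiffness integral of B^T B over [a, b]."""
    xi, w = np.polynomial.legendre.leggauss(npts)
    K = np.zeros((2, 2))
    for _, wt in zip(xi, w):
        K += wt * (b - a) / 2 * B.T @ B    # integrand is constant here
    return K

s = 0.6
K_cell = gauss_integrate(0.0, 1.0)   # integrate over the whole cell
K_fict = gauss_integrate(s, 1.0)     # integrate over the fictitious part
K_phys = K_cell - K_fict             # the subtraction idea of the paper
print(np.allclose(K_phys, K_exact(0.0, s)))
```

In the actual method the fictitious-domain integral is the one refined adaptively (e.g. by quadtree/octree), which is where the reported cost savings come from.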
NASA Technical Reports Server (NTRS)
Womble, M. E.; Potter, J. E.
1975-01-01
A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. The derived technique for extrapolating the Riccati matrix from one time to the next thus constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Consequently, if having the state estimate and its error covariance matrix only at discrete times is acceptable, the prefilter extends square root filtering, with all its advantages, to continuous measurement problems.
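The equivalence underlying the prefilter, that a batch of independent discrete measurements can be replaced by one stacked measurement, can be checked numerically. The sketch below uses a plain (non-square-root) Kalman update and made-up measurement geometry, not the paper's derivation.

```python
import numpy as np

def kf_update(x, P, H, R, z):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
x0, P0 = np.zeros(2), np.eye(2)
Hs = [rng.standard_normal((1, 2)) for _ in range(5)]
zs = [rng.standard_normal(1) for _ in range(5)]
r = 0.5  # scalar measurement-noise variance

# Sequential processing of five discrete measurements.
x, P = x0, P0
for H, z in zip(Hs, zs):
    x, P = kf_update(x, P, H, np.array([[r]]), z)

# Single equivalent measurement: stacked H and z, block-diagonal R.
Hbig, zbig, Rbig = np.vstack(Hs), np.concatenate(zs), r * np.eye(5)
xb, Pb = kf_update(x0, P0, Hbig, Rbig, zbig)

print(np.allclose(x, xb), np.allclose(P, Pb))  # identical posteriors
```

The same identity is what lets a time segment of continuous measurements be condensed first and then handed to a square-root update such as Potter's.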
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to use this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet prices, the first step of the PP, is accomplished using the multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on a CPU or a single GPU due to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted here to solve the MP.
A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting the computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. Results: The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. By contrast, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial CPU-based TPS was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
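A single-process sketch of the matrix-partitioning step, with COO on the "CPU side" and per-angle CSR blocks standing in for per-GPU copies (all sizes hypothetical), can be written with SciPy; summing the per-block products reproduces the full matrix-vector product the dose calculation needs.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(3)
# Hypothetical DDC matrix: rows = voxels, cols = beamlets, ~1% dense.
D = sp.random(4000, 1200, density=0.01, format="coo", random_state=3)

# Group beamlet columns by beam angle (four equal blocks here purely
# for illustration) and store each block in CSR, as one would per GPU.
blocks = np.array_split(np.arange(D.shape[1]), 4)
subs = [D.tocsc()[:, idx].tocsr() for idx in blocks]

# Dose for an intensity vector x: partial products summed across the
# "devices" (plain NumPy arrays standing in for GPU buffers).
x = rng.random(D.shape[1])
dose = sum(S @ x[idx] for S, idx in zip(subs, blocks))
print(np.allclose(dose, D @ x))
```

On real hardware the reduction across partial products is where the peer-to-peer transfer scheme mentioned in the abstract matters.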
NASA Astrophysics Data System (ADS)
Fang, M.; Hager, B. H.
2014-12-01
In geophysical applications the boundary element method (BEM) often carries the essential physics in addition to being an efficient numerical scheme. For use of the BEM in a self-gravitating uniform half-space, we derived the fundamental solution analytically in closed form. A problem that goes to the heart of the classic BEM is encountered when we apply the new fundamental solution to the deformation field induced by a magma chamber or a fluid-filled reservoir. The central issue of the BEM is the singular integral arising from determination of the boundary values. A widely employed technique is to rescale the singular boundary point into a small finite volume and then shrink it to extract the limits. This operation boils down to the calculation of the so-called C-matrix. Past authors have either added or subtracted a small volume. By subtracting a small volume, the C-matrix is (1/2)I on a smooth surface, where I is the identity matrix; by adding a small volume, we arrive at the same C-matrix in the form of I - (1/2)I. This evenness is a consequence of the spherical symmetry of Kelvin's fundamental solution. When the spherical symmetry is broken by gravity, the C-matrix is polarized, and adding and subtracting a small volume yield different C-matrices, so one of the two procedures must be wrong. Close examination reveals that both derivations are ad hoc. To resolve the issue we revisit the Somigliana identity with a new derivation and a careful step-by-step anatomy. The result proves that although both adding and subtracting a small volume appear to perturb the original boundary, only addition actually modifies the boundary, and consequently modifies the physics of the original problem in a subtle way. The correct procedure is subtraction.
We complete a new BEM theory by introducing in full analytical form what we call the singular stress tensor for the fundamental solution. We partition the stress tensor of the fundamental solution into a singular part and a regular part. In this way all singular integrals systematically shift into the easy singular stress tensor. Applications of this new BEM to deformation and gravitational perturbation induced by magma chambers of finite volume will be presented.
ERIC Educational Resources Information Center
Knol, Dirk L.; ten Berge, Jos M. F.
An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977). It is shown that the minimization problem…
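The Procrustes-based algorithm itself is not reproduced here, but the same best least-squares correlation-matrix problem is commonly solved today by Higham's alternating-projections method; the sketch below applies it to a small improper (indefinite) input of the kind that arises from pairwise-deleted missing data.

```python
import numpy as np

def nearest_correlation(G, iters=200):
    """Higham's alternating projections (with Dykstra correction)
    onto the PSD cone and the set of unit-diagonal symmetric
    matrices; unweighted least-squares fit."""
    Y = G.copy()
    dS = np.zeros_like(G)
    for _ in range(iters):
        R = Y - dS                                  # Dykstra correction
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = V @ np.diag(np.clip(w, 0, None)) @ V.T  # project onto PSD
        dS = X - R
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)                    # restore unit diagonal
    return Y

# An "improper" correlation matrix: symmetric, unit diagonal, but
# indefinite (it has a negative eigenvalue).
G = np.array([[1.0, 0.9, -0.3],
              [0.9, 1.0,  0.9],
              [-0.3, 0.9, 1.0]])
C = nearest_correlation(G)
print(np.linalg.eigvalsh(C).min() > -1e-6, np.allclose(np.diag(C), 1))
```

The output C is the nearest (in Frobenius norm) proper correlation matrix: unit diagonal and positive semidefinite up to the iteration tolerance.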
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
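Once the fission matrix has been tallied, the criticality calculation reduces to an ordinary dominant-eigenvalue problem. The sketch below uses a made-up 8-region matrix (in the real method F is built from Monte Carlo tallies) and solves it by power iteration.

```python
import numpy as np

# Hypothetical 8-assembly fission matrix: F[i, j] = expected fission
# neutrons born in region i per fission neutron born in region j.
# Strong in-assembly coupling, weak coupling across absorber gaps.
n = 8
F = 0.9 * np.eye(n) + 0.03 * (np.eye(n, k=1) + np.eye(n, k=-1))

def power_iteration(F, cycles=2000):
    """Power iteration on the fission matrix: the dominant eigenvalue
    is k-eff, the eigenvector is the converged fission source."""
    s = np.ones(F.shape[0])
    for _ in range(cycles):
        s = F @ s
        k = s.sum()      # eigenvalue estimate under L1 normalization
        s /= k
    return k, s

k_eff, source = power_iteration(F)
w, V = np.linalg.eig(F)
print(np.isclose(k_eff, w.real.max()))  # matches the dominant eigenvalue
```

Because F is small (one entry per region pair), diagonalizing it is cheap even when the underlying transport problem converges slowly, which is the source of the method's robustness.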
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data produces a large Jacobian matrix, which can cause difficulties in storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which reduces the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
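The thresholding-plus-CG pipeline can be sketched with SciPy on a synthetic Jacobian whose entries decay rapidly. The sizes and decay model are made up, and the paper's block-wise parallel CG is replaced here by plain CG on regularized normal equations.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(4)
m, n = 2000, 1500
# Synthetic sensitivity (Jacobian) matrix: most entries are tiny.
J = rng.standard_normal((m, n)) * np.exp(-rng.random((m, n)) * 20)

# Sparse matrix reduction: threshold small entries to zero and
# store the result in CSR, eliminating the zeros from memory.
tau = 1e-3 * np.abs(J).max()
Js = sp.csr_matrix(np.where(np.abs(J) >= tau, J, 0.0))
print(f"kept {Js.nnz / (m * n):.1%} of entries")

# Regularized normal equations (J^T J + lam*I) dx = J^T dv, solved
# with CG using only sparse matrix-vector products.
dv = rng.standard_normal(m)
lam = 1e-2
A = LinearOperator((n, n), matvec=lambda v: Js.T @ (Js @ v) + lam * v,
                   dtype=np.float64)
dx, info = cg(A, Js.T @ dv, maxiter=2000)
print(info == 0)  # 0 means CG converged
```

The block-wise variant in the paper distributes row blocks of Js across processors; the matvec structure above is unchanged, only the products are computed in parallel.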
Harnly, J.M.; Kane, J.S.
1984-01-01
The effect of the acid matrix, the measurement mode (height or area), the atomizer surface (unpyrolyzed and pyrolyzed graphite), the atomization mode (from the wall or from a platform), and the atomization temperature on the simultaneous electrothermal atomization of Co, Cr, Cu, Fe, Mn, Mo, Ni, V, and Zn was examined. The 5% HNO3 matrix gave rise to severe irreproducibility using a pyrolyzed tube unless the tube was properly "prepared". The 5% HCl matrix did not exhibit this problem, and no problems were observed with either matrix using an unpyrolyzed tube or a pyrolyzed platform. The 5% HCl matrix gave better sensitivities with a pyrolyzed tube, but the two matrices were comparable for atomization from a platform. If Mo and V are to be analyzed with the other seven elements, a high atomization temperature (2700 °C or greater) is necessary regardless of the matrix, the measurement mode, the atomization mode, or the atomizer surface. Simultaneous detection limits (peak height with pyrolyzed tube atomization) were comparable to those of conventional atomic absorption spectrometry using electrothermal atomization above 280 nm. Accuracies and precisions of ±10-15% were found in the 10 to 120 ng mL-1 range for the analysis of NBS acidified water standards.
Chandel, Shubham; Soni, Jalpa; Ray, Subir kumar; Das, Anwesh; Ghosh, Anirudha; Raj, Satyabrata; Ghosh, Nirmalya
2016-01-01
Information on the polarization properties of scattered light from plasmonic systems is of paramount importance due to fundamental interest and potential applications. However, such studies are severely compromised by the experimental difficulties in recording the full polarization response of plasmonic nanostructures. Here, we report on a novel Mueller matrix spectroscopic system capable of acquiring complete polarization information from a single isolated plasmonic nanoparticle/nanostructure. The outstanding issues pertaining to reliable measurements of full 4 × 4 spectroscopic scattering Mueller matrices from single nanoparticles/nanostructures are overcome by integrating an efficient Mueller matrix measurement scheme and a robust eigenvalue calibration method with a dark-field microscopic spectroscopy arrangement. The feasibility of quantitative Mueller matrix polarimetry and its potential utility are illustrated on a simple plasmonic system, that of gold nanorods. The demonstrated ability to record full polarization information over a broad wavelength range and to quantify the intrinsic plasmon polarimetry characteristics via Mueller matrix inverse analysis should lead to a novel route towards quantitative understanding, analysis, and interpretation of a number of intricate plasmonic effects, and may also prove useful for the development of polarization-controlled sensing schemes. PMID:27212687
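For readers unfamiliar with the formalism: a Mueller matrix maps Stokes vectors to Stokes vectors, and the textbook matrix of an ideal linear polarizer illustrates what a measured 4 × 4 matrix encodes (this is a standard example, unrelated to the specific nanorod measurements).

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer with its
    transmission axis at angle theta (standard textbook form)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1.0,   c,     s,   0.0],
                           [c,   c * c, c * s, 0.0],
                           [s,   c * s, s * s, 0.0],
                           [0.0,  0.0,   0.0,  0.0]])

# Unpolarized light (Stokes vector [I, Q, U, V]) through a horizontal
# polarizer: half the intensity survives, fully polarized.
S_unpolarized = np.array([1.0, 0.0, 0.0, 0.0])
S_out = linear_polarizer(0.0) @ S_unpolarized
print(S_out)  # [0.5, 0.5, 0.0, 0.0]

# Two crossed polarizers block everything (Malus's law at 90°).
crossed = linear_polarizer(np.pi / 2) @ linear_polarizer(0.0) @ S_unpolarized
print(np.allclose(crossed, 0.0))
```

A measured plasmonic Mueller matrix generally mixes diattenuation, retardance, and depolarization; inverse analysis (e.g. polar decomposition) separates these contributions.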
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druskin, V.; Lee, Ping; Knizhnerman, L.
There is now growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When the matrix inverse is relatively inexpensive to compute, it is sometimes attractive to solve the ODE using the extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
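A sketch of the plain (polynomial) Krylov approximation of a matrix-function action, the baseline that extended subspaces improve on when A⁻¹v is cheap; the test operator is a made-up 1D Laplacian.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, v, m=30):
    """Approximate expm(A) @ v from the m-dimensional Krylov subspace
    span{v, Av, ..., A^{m-1}v}; the extended variant would enrich the
    space with A^{-1} powers as well."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # lucky breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    # f(A) v ≈ beta * V_m f(H_m) e1, with f evaluated on the small H_m.
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

rng = np.random.default_rng(5)
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # 1D Laplacian
v = rng.standard_normal(n)
approx = arnoldi_expm_action(A, v, m=40)
exact = expm(A) @ v
rel = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(rel < 1e-8)
```

Only the small m × m matrix H ever sees the expensive function evaluation, which is why the approach scales to the large operators produced by method-of-lines discretizations.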
NASA Astrophysics Data System (ADS)
Chuluunbaatar, O.; Gusev, A. A.; Gerdt, V. P.; Rostovtsev, V. A.; Vinitsky, S. I.; Abrashkevich, A. G.; Kaschiev, M. S.; Serov, V. V.
2008-02-01
A FORTRAN 77 program is presented which calculates with the relative machine precision potential curves and matrix elements of the coupled adiabatic radial equations for a hydrogen-like atom in a homogeneous magnetic field. The potential curves are eigenvalues corresponding to the angular oblate spheroidal functions that compose the adiabatic basis, which depends on the radial variable as a parameter. The matrix elements of radial coupling are integrals in the angular variables of the following two types: the product of angular functions and the first derivative of angular functions in the parameter, and the product of the first derivatives of angular functions in the parameter, respectively. The program also calculates the angular part of the dipole transition matrix elements (in the length form) expressed as integrals in the angular variables involving the product of a dipole operator and angular functions. Moreover, the program calculates asymptotic regular and irregular matrix solutions of the coupled adiabatic radial equations at the end of the interval in the radial variable, needed for solving a multi-channel scattering problem by the generalized R-matrix method. Potential curves and radial matrix elements computed by the POTHMF program can be used for solving bound state and multi-channel scattering problems. As a test deck, the program is applied to the calculation of the energy values, a short-range reaction matrix, and corresponding wave functions with the help of the KANTBP program. Benchmark calculations for the known photoionization cross sections are presented.
Program summary
Program title: POTHMF
Catalogue identifier: AEAA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 8123
No. of bytes in distributed program, including test data, etc.: 131 396
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Intel Xeon EM64T, Alpha 21264A, AMD Athlon MP, Pentium IV Xeon, Opteron 248, Intel Pentium IV
Operating system: Linux, Unix AIX 5.3, SunOS 5.8, Solaris, Windows XP
RAM: Depends on the number of radial differential equations, the number and order of finite elements, and the number of radial points. The test run requires 4 MB.
Classification: 2.5
External routines: POTHMF uses some LAPACK routines, copies of which are included in the distribution (see the README file for details).
Nature of problem: In the multi-channel adiabatic approach, the Schrödinger equation for a hydrogen-like atom in a homogeneous magnetic field of strength γ (γ = B/B₀, where B₀ ≅ 2.35×10⁵ T; γ is a dimensionless parameter which determines the field strength B) is reduced, by separating the radial coordinate r from the angular variables (θ, φ) and using a basis of angular oblate spheroidal functions [3], to a system of second-order ordinary differential equations which contain first-derivative coupling terms [4]. The purpose of this program is to calculate the potential curves and matrix elements of radial coupling needed for calculating the low-lying bound and scattering states of hydrogen-like atoms in a homogeneous magnetic field of strength 0 < γ ⩽ 1000 within the adiabatic approach [5]. The program also evaluates asymptotic regular and irregular matrix radial solutions of the multi-channel scattering problem, needed to extract from the R-matrix a required symmetric short-range open-channel reaction matrix K [6] independent of the matching point [7]. In addition, the program computes the dipole transition matrix elements in the length form between the basis functions, which are needed for calculating the dipole transitions between the low-lying bound and scattering states and the photoionization cross sections [8].
Solution method: The angular oblate spheroidal eigenvalue problem, which depends on the radial variable as a parameter, is solved using a series expansion in the Legendre polynomials [3]. The resulting tridiagonal symmetric algebraic eigenvalue problem for the evaluation of selected eigenvalues, i.e. the potential curves, is solved by LDLT factorization using the DSTEVR routine [2]. Derivatives of the eigenfunctions with respect to the radial variable, which enter the matrix elements of the coupled radial equations, are obtained by solving inhomogeneous algebraic equations; the corresponding algebraic problem is solved by LDLT factorization with the help of the DPTTRS routine [2]. Asymptotics of the matrix elements at large values of the radial variable are computed using a series expansion in the associated Laguerre polynomials [9]. The corresponding matching points between the numeric and asymptotic solutions are found automatically. These asymptotics are used for the evaluation of the asymptotic regular and irregular matrix radial solutions of the multi-channel scattering problem [7]. As a test deck, the program is applied to the calculation of the energy values of the ground and excited bound states and the reaction matrix of the multi-channel scattering problem for a hydrogen atom in a homogeneous magnetic field, using the KANTBP program [10].
Restrictions: The computer memory requirements depend on the number of radial differential equations, the number and order of finite elements, and the total number of radial points. Restrictions due to dimension sizes can be changed by resetting a small number of PARAMETER statements before recompiling (see the Introduction and the listing for details).
Running time: The running time depends critically upon the number of radial differential equations, the number and order of finite elements, and the total number of radial points on the interval [r_min, r_max]. The test run which accompanies this paper took 7 s to calculate the potential curves, radial matrix elements, and dipole transition matrix elements on a finite-element grid on the interval [r_min = 0, r_max = 100], used for solving the discrete and continuous spectrum problems and obtaining the asymptotic regular and irregular matrix radial solutions at r_max = 100 for the continuous spectrum problem, on an Intel Pentium IV 2.4 GHz. The number of radial differential equations was equal to 6. The accompanying test run using the KANTBP program took 2 s to solve the discrete and continuous spectrum problems using the above calculated potential curves, matrix elements, and asymptotic regular and irregular matrix radial solutions. Note that in the accompanying benchmark calculations of the photoionization cross sections from the bound states of a hydrogen atom in a homogeneous magnetic field to the continuum, we used the interval [r_min = 0, r_max = 1000] for the continuous spectrum problem. The total number of radial differential equations varied from 10 to 18.
References:
[1] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge, 1986.
[2] http://www.netlib.org/lapack/.
[3] M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1965.
[4] U. Fano, Colloq. Int. C.N.R.S. 273 (1977) 127; A.F. Starace, G.L. Webster, Phys. Rev. A 19 (1979) 1629-1640; C.V. Clark, K.T. Lu, A.F. Starace, in: H.G. Beyer, H. Kleinpoppen (Eds.), Progress in Atomic Spectroscopy, Part C, Plenum, New York, 1984, pp. 247-320; U. Fano, A.R.P. Rau, Atomic Collisions and Spectra, Academic Press, Florida, 1986.
[5] M.G. Dimova, M.S. Kaschiev, S.I. Vinitsky, J. Phys. B 38 (2005) 2337-2352; O. Chuluunbaatar, A.A. Gusev, V.L. Derbov, M.S. Kaschiev, V.V. Serov, T.V. Tupikova, S.I. Vinitsky, Proc. SPIE 6537 (2007) 653706-1-18.
[6] M.J. Seaton, Rep. Prog. Phys. 46 (1983) 167-257.
[7] M. Gailitis, J. Phys. B 9 (1976) 843-854; J. Macek, Phys. Rev. A 30 (1984) 1277-1278; S.I. Vinitsky, V.P. Gerdt, A.A. Gusev, M.S. Kaschiev, V.A. Rostovtsev, V.N. Samoylov, T.V. Tupikova, O. Chuluunbaatar, Programming and Computer Software 33 (2007) 105-116.
[8] H. Friedrich, Theoretical Atomic Physics, Springer, New York, 1991.
[9] R.J. Damburg, R.Kh. Propin, J. Phys. B 1 (1968) 681-691; J.D. Power, Phil. Trans. Roy. Soc. London A 274 (1973) 663-702.
[10] O. Chuluunbaatar, A.A. Gusev, A.G. Abrashkevich, A. Amaya-Tapia, M.S. Kaschiev, S.Y. Larsen, S.I. Vinitsky, Comput. Phys. Comm. 177 (2007) 649-675.
Fluid-structure finite-element vibrational analysis
NASA Technical Reports Server (NTRS)
Feng, G. C.; Kiefling, L.
1974-01-01
A fluid finite element has been developed for a quasi-compressible fluid. Both kinetic and potential energy are expressed as functions of nodal displacements. Thus, the formulation is similar to that used for structural elements, with the only differences being that the fluid can possess gravitational potential, and the constitutive equations for fluid contain no shear coefficients. Using this approach, structural and fluid elements can be used interchangeably in existing efficient sparse-matrix structural computer programs such as SPAR. The theoretical development of the element formulations and the relationships of the local and global coordinates are shown. Solutions of fluid slosh, liquid compressibility, and coupled fluid-shell oscillation problems which were completed using a temporary digital computer program are shown. The frequency correlation of the solutions with classical theory is excellent.
Instruction-matrix-based genetic programming.
Li, Gang; Wang, Jin Feng; Lee, Kin Hong; Leung, Kwong-Sak
2008-08-01
In genetic programming (GP), evolving tree nodes separately would reduce the huge solution space. However, tree nodes are highly interdependent with respect to their fitness. In this paper, we propose a new GP framework, namely, instruction-matrix (IM)-based GP (IMGP), to handle their interactions. IMGP maintains an IM to evolve tree nodes and subtrees separately. IMGP extracts program trees from an IM and updates the IM with the information of the extracted program trees. As the IM keeps most of the information of the schemata of GP and evolves the schemata directly, IMGP is effective and efficient. Our experimental results on benchmark problems verify that IMGP not only outperforms canonical GP in terms of both solution quality and the number of program evaluations, but is also better than some related GP algorithms. IMGP can also be used to evolve programs for classification problems. The classifiers obtained have higher classification accuracies than four other GP classification algorithms on four benchmark classification problems. The testing errors are also comparable to or better than those obtained with well-known classifiers. Furthermore, an extended version, called condition matrix for rule learning, has been used successfully to handle multiclass classification problems.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1978-01-01
The paper describes the split-Cholesky strategy for banded matrices arising from the large systems of equations in certain fluid mechanics problems. The basic idea is that for a banded matrix the computation can be carried out in pieces, with only a small portion of the matrix residing in core. Mesh considerations are discussed by demonstrating the manner in which the assembly of finite element equations proceeds for linear trial functions on a triangular mesh. The FORTRAN code which implements the out-of-core decomposition strategy for banded symmetric positive definite matrices (mass matrices) of a coupled initial value problem is given.
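The in-core building block of such a strategy, Cholesky factorization in banded storage, is available through LAPACK; the SciPy sketch below shows the banded layout and solve (the out-of-core splitting itself is not reproduced, and the matrix is a made-up SPD band).

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

# Symmetric positive definite banded matrix (half-bandwidth 2),
# e.g. the shape of a 1D finite-element mass matrix; diagonal
# dominance guarantees positive definiteness here.
n = 12
main, off1, off2 = 4.0, 1.0, 0.5
A = (main * np.eye(n)
     + off1 * (np.eye(n, k=1) + np.eye(n, k=-1))
     + off2 * (np.eye(n, k=2) + np.eye(n, k=-2)))

# LAPACK upper banded storage: row (u - k) holds the k-th superdiagonal,
# padded on the left; only the band is ever stored.
ab = np.zeros((3, n))
ab[0, 2:] = off2
ab[1, 1:] = off1
ab[2, :] = main

c = cholesky_banded(ab)            # factor without forming dense A
b = np.arange(1.0, n + 1)
x = cho_solve_banded((c, False), b)
print(np.allclose(A @ x, b))
```

The out-of-core version in the paper works with exactly this banded layout, but keeps only a window of columns in core and reads/writes the rest in pieces, which is possible because each column of the factor depends only on the band above it.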
On Matrices, Automata, and Double Counting
NASA Astrophysics Data System (ADS)
Beldiceanu, Nicolas; Carlsson, Mats; Flener, Pierre; Pearson, Justin
Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables M, with the same constraint defined by a finite-state automaton A on each row of M and a global cardinality constraint gcc on each column of M. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the gcc constraints from the automaton A. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We evaluate the impact of our methods on a large set of nurse rostering problem instances.
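The flavor of the first method can be shown with a hypothetical helper: if the row automaton A admits between lo_i and hi_i occurrences of a value v in row i, and the column gcc constraints fix the per-column counts of v, counting v over the whole matrix both ways yields a linear necessary condition (this sketch is illustrative, not the paper's exact derivation):

```python
def gcc_bounds_consistent(row_bounds, col_cards):
    """Linear necessary condition by double counting: row_bounds is a list of
    (lo_i, hi_i) pairs, the occurrence interval the row automaton allows for a
    value v in row i; col_cards[j] is the gcc cardinality of v in column j.
    The column total must fall inside the summed row interval."""
    total = sum(col_cards)
    lo = sum(lo for lo, _ in row_bounds)
    hi = sum(hi for _, hi in row_bounds)
    return lo <= total <= hi
```

For example, two rows each allowing at most one occurrence of v can never support column cardinalities summing to three, so such an assignment is pruned without search.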
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication
ERIC Educational Resources Information Center
Wolf, Michael Maclean
2009-01-01
Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…
Exploring Deep Learning and Sparse Matrix Format Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y.; Liao, C.; Shen, X.
We proposed to explore the use of Deep Neural Networks (DNN) for addressing these longstanding barriers. The recent rapid progress of DNN technology has created a large impact in many fields, significantly improving prediction accuracy over traditional machine learning techniques in image classification, speech recognition, machine translation, and so on. To some degree, these tasks resemble the decision making in many HPC tasks, including the aforementioned format selection for SpMV and linear solver selection. For instance, sparse matrix format selection is akin to image classification (e.g., telling whether an image contains a dog or a cat): in both problems, the right decisions are primarily determined by the spatial patterns of the elements in an input. For image classification, the patterns are of pixels; for sparse matrix format selection, they are of non-zero elements. DNN could be naturally applied if we regard a sparse matrix as an image and the format selection or solver selection as classification problems.
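As a rough sketch of the matrix-as-image analogy, the nonzero pattern of a sparse matrix can be downsampled into a fixed-size density grid that plays the role of the input image a DNN would classify; the function and the resolution below are illustrative choices, not taken from the report:

```python
import numpy as np

def sparsity_image(rows, cols, shape, res=8):
    """Downsample the nonzero pattern of a sparse matrix (given as the row
    and column coordinates of its nonzeros) into a res x res density grid,
    a fixed-size 'image' of the spatial pattern of nonzero elements."""
    img = np.zeros((res, res))
    r = (np.asarray(rows) * res) // shape[0]
    c = (np.asarray(cols) * res) // shape[1]
    np.add.at(img, (r, c), 1.0)     # histogram the nonzeros into grid cells
    return img / max(len(rows), 1)  # normalize to a density map
```

A diagonal matrix, for instance, maps to a grid with mass only on the main diagonal, the kind of spatial cue that distinguishes formats such as DIA from generic CSR.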
Effects of Interspersed Brief Problems on Students' Endurance at Completing Math Work
ERIC Educational Resources Information Center
Montarello, Staci; Martens, Brian K.
2005-01-01
An alternating treatments design was used to compare the effects of baseline, interspersed brief problems, and interspersed brief problems plus token reinforcement on students' endurance while completing math worksheets. By pairing the completion of brief problems with token reinforcement, the role of problem completion as a conditioned reinforcer…
A robust method of computing finite difference coefficients based on Vandermonde matrix
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin
2018-05-01
When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD method is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating the finite difference coefficients is close to singular. In this case, when the FD coefficients are computed with the matrix inverse operator of MATLAB, inaccuracy can be produced. In order to overcome this problem, we suggest an algorithm based on the Vandermonde matrix in this paper. After a specified mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix. The FD coefficients of the high-order FD method can then be computed by the algorithm for Vandermonde matrices, which avoids inverting a near-singular matrix. The dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the algorithm based on the Vandermonde matrix has better accuracy than the matrix inverse operator of MATLAB.
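A hedged sketch of the underlying system: the FD weights solve a Vandermonde system, and solving it exactly (here with rational arithmetic, a simple stand-in for the specialized Vandermonde algorithm the paper advocates) sidesteps the ill-conditioned generic inverse:

```python
from fractions import Fraction
from math import factorial

def fd_coefficients(nodes, d):
    """Finite-difference weights c_j for the d-th derivative at 0 from
    samples at the given integer offsets, found by solving the Vandermonde
    system  sum_j c_j * x_j**m = m! * delta(m, d)  for m = 0..n-1.
    Exact rationals avoid the inaccuracy of a floating-point inverse."""
    n = len(nodes)
    # Augmented Vandermonde system with exact rational entries.
    A = [[Fraction(x) ** m for x in nodes] + [Fraction(factorial(m) if m == d else 0)]
         for m in range(n)]
    # Gaussian elimination (exact, so pivoting only needs to skip zeros).
    for k in range(n):
        piv = next(i for i in range(k, n) if A[i][k] != 0)
        A[k], A[piv] = A[piv], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    c = [Fraction(0)] * n
    for i in reversed(range(n)):  # back substitution
        c[i] = (A[i][n] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c
```

For nodes [-1, 0, 1] and the second derivative this reproduces the classic stencil [1, -2, 1].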
NASA Astrophysics Data System (ADS)
Wu, Sheng-Jhih; Chu, Moody T.
2017-08-01
An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.
Simple Derivation of the Lindblad Equation
ERIC Educational Resources Information Center
Pearle, Philip
2012-01-01
The Lindblad equation is an evolution equation for the density matrix in quantum theory. It is the general linear, Markovian, form which ensures that the density matrix is Hermitian, trace 1, positive and completely positive. Some elementary examples of the Lindblad equation are given. The derivation of the Lindblad equation presented here is…
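A concrete instance of the equation's trace-preserving structure: a qubit with H = 0 and a single jump operator sqrt(gamma)*sigma_minus (amplitude damping), integrated with a plain Euler step. This is a standard textbook example sketched here for illustration, not code from the article:

```python
import numpy as np

def lindblad_rhs(rho, H, Ls):
    """Right-hand side of the Lindblad equation
       drho/dt = -i[H, rho] + sum_k (L_k rho L_k^+ - (1/2){L_k^+ L_k, rho})."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# Amplitude damping of an excited qubit: population decays as exp(-gamma * t).
gamma, dt, steps = 1.0, 0.001, 2000
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus = |0><1|
H = np.zeros((2, 2), dtype=complex)
rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho, H, [np.sqrt(gamma) * sm])
```

The generator is traceless in the required sense, so the trace of rho stays exactly 1 at every Euler step, while the excited-state population decays toward exp(-gamma*t).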
Wu, Xiao-Ting; Mei, May Lei; Li, Quan-Li; Cao, Chris Ying; Chen, Jia-Long; Xia, Rong; Zhang, Zhi-Hong; Chu, Chun Hung
2015-01-01
This in vitro study aimed to accelerate the remineralization of a completely demineralized dentine collagen block in order to regenerate the dentinal microstructure of calcified collagen fibrils by a novel electric field-aided biomimetic mineralization system in the absence of non-collagenous proteins. Completely demineralized human dentine slices were prepared using ethylene diamine tetraacetic acid (EDTA) and treated with guanidine hydrochloride to extract the bound non-collagenous proteins. The completely demineralized dentine collagen blocks were then remineralized in a calcium chloride agarose hydrogel and a sodium hydrogen phosphate and fluoride agarose hydrogel. This process was accelerated by subjecting the hydrogels to electrophoresis at 20 mA for 4 and 12 h. X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), and transmission electron microscopy (TEM) were used to evaluate the resultant calcification of the dentin collagen matrix. SEM indicated that mineral particles were precipitated on the intertubular dentin collagen matrix; these densely packed crystals mimicked the structure of the original mineralized dentin. However, the dentinal tubules were not occluded by the mineral crystals. XRD and EDX both confirmed that the deposited crystals were fluorinated hydroxyapatite. TEM revealed the existence of intrafibrillar and interfibrillar mineralization of the collagen fibrils. A novel electric field-aided biomimetic mineralization system was successfully developed to remineralize a completely demineralized dentine collagen matrix in the absence of non-collagenous proteins. This study developed an accelerated biomimetic mineralization system which can be a potential protocol for the biomineralization of dentinal defects. PMID:28793685
Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deveci, Mehmet; Trott, Christian Robert; Rajamanickam, Sivasankaran
Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
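A minimal sketch of one accumulator choice, Gustavson's row-by-row algorithm with a hash-map accumulator on CSR inputs (illustrative Python; the paper's kernels target many-core and GPU architectures, where the accumulator data structure is exactly the design point under study):

```python
def spgemm(A, B):
    """Gustavson-style sparse product C = A @ B on CSR inputs, each given as
    a tuple (indptr, indices, values). A Python dict stands in for the
    per-row accumulator (dense array, hash map, ...) whose choice drives
    SpGEMM performance."""
    Ap, Aj, Av = A
    Bp, Bj, Bv = B
    Cp, Cj, Cv = [0], [], []
    for i in range(len(Ap) - 1):
        acc = {}  # column index -> accumulated value for row i of C
        for k in range(Ap[i], Ap[i + 1]):
            a, col_a = Av[k], Aj[k]
            for l in range(Bp[col_a], Bp[col_a + 1]):
                acc[Bj[l]] = acc.get(Bj[l], 0.0) + a * Bv[l]
        for j in sorted(acc):  # emit row i in column order
            Cj.append(j)
            Cv.append(acc[j])
        Cp.append(len(Cj))
    return Cp, Cj, Cv
```

A two-phase implementation would first run a symbolic pass to size Cp exactly, then reuse that structure for repeated numeric multiplications, which is the reuse pattern the paper argues for.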
The anti-MMP activity of benzalkonium chloride
Tezvergil-Mutluay, Arzu; Mutluay, M. Murat; Gu, Li-sha; Zhang, Kai; Agee, Kelli A.; Carvalho, Ricardo M.; Manso, Adriana; Carrilho, Marcela; Tay, Franklin R.; Breschi, Lorenzo; Suh, Byoung-In; Pashley, David H.
2013-01-01
Objective: This study evaluated the ability of benzalkonium chloride (BAC) to bind to dentine and to inhibit soluble recombinant MMPs and bound dentine matrix metalloproteinases (MMPs). Methods: Dentine powder was prepared from extracted human molars. Half was left mineralized; the other half was completely demineralized. The binding of BAC to dentine powder was followed by measuring changes in the supernatant concentration using UV spectrometry. The inhibitory effects of BAC on rhMMP-2, -8 and -9 were followed using a commercially available in vitro proteolytic assay. Matrix-bound endogenous MMP activity was evaluated in completely demineralized beams. Each beam was either dipped into BAC and then dropped into 1 mL of a complete medium (CM), or placed in 1 mL of CM containing BAC, for 30 d. After 30 d, changes in the dry mass of the beams or in the hydroxyproline (HYP) content of hydrolyzates of the media were quantitated as indirect measures of matrix collagen hydrolysis by MMPs. Results: Demineralized dentine powder took up 10 times more BAC than did mineralized powder. Water rinsing removed about 50% of the bound BAC, while rinsing with 0.5 M NaCl removed more than 90% of the bound BAC. BAC concentrations of 0.5 wt% produced 100% inhibition of soluble recombinant MMP-2, -8 or -9, and inhibited matrix-bound MMPs by 55-66% when measured as mass loss, or by 76-81% when measured as solubilization of collagen peptide fragments. Conclusions: BAC is effective at inhibiting both soluble recombinant MMPs and matrix-bound dentine MMPs in the absence of resins. PMID:20951183
NASA Astrophysics Data System (ADS)
MacDonald, R.; Savina, M. E.
2003-12-01
One approach to curriculum review and development is to construct a matrix of the desired skills versus courses in the departmental curriculum. The matrix approach requires faculty to articulate their goals, identify specific skills, and assess where in the curriculum students will learn and practice these skills and where there are major skills gaps. Faculty members in the Geology Department at Carleton College developed a matrix of skills covered in geology courses with the following objectives: 1) Geology majors should begin their "senior integrative exercise" having practiced multiple times all of the formal steps in the research process (recognizing problems, writing proposals, carrying out a project, reporting a project in several ways); 2) Geology majors should learn and practice a variety of professional and life skills (e.g., computer skills, field skills, lab skills, and interpretive skills). The matrix was used to identify where in the curriculum various research methods and skills were addressed and to map potential student experiences to the objectives. In Carleton's non-hierarchical curriculum, the matrix was used to verify that students have many opportunities to practice research and life skills regardless of the path they take to completion of the major. In William and Mary's more structured curriculum, the matrix was used to ensure that skills build upon each other from course to course. Faculty members in the Geology Department at the College of William and Mary first used this approach to focus on teaching quantitative skills across the geology curriculum, and later used it in terms of teaching research, communication, and information literacy skills.
After articulating goals and skills, faculty members in both departments developed more specific skill lists within each category of skills, then described the current assignments and activities in each course relative to the specific components of the matrix and discussed whether to add assignment or activities. We have found that much conversation among faculty and change within courses happens simply as a result of compiling the matrix. One effect of the use of the matrix is that faculty in the department know fairly specifically what skills students are learning and practicing in their other geology courses. Moreover, some faculty members are better suited by background or inclination to teach certain sets of skills. This coordinated approach avoids unnecessary duplication and allows faculty to build on skills and topics developed in previous courses. The matrix can also be used as a planning tool to identify gaps in the curriculum. In our experience, the skills matrix is a powerful organizational and communication tool. The skills matrix is a representation of what the department believes actually happens in the curriculum. Thus, development of a skills matrix provides a basis for departmental discussions of student learning goals and objectives as well as for describing the existing curriculum. The matrix is also a graphic representation, to college administrators and outside evaluators, of the "intentionality" of an entire curriculum, going beyond single courses and their syllabi. It can be used effectively to engage administration in discussions of departmental planning and needs analysis.
Simple Approach to Renormalize the Cabibbo-Kobayashi-Maskawa Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kniehl, Bernd A.; Sirlin, Alberto
2006-12-01
We present an on-shell scheme to renormalize the Cabibbo-Kobayashi-Maskawa (CKM) matrix. It is based on a novel procedure to separate the external-leg mixing corrections into gauge-independent self-mass and gauge-dependent wave function renormalization contributions, and to implement the on-shell renormalization of the former with nondiagonal mass counterterm matrices. Diagonalization of the complete mass matrix leads to an explicit CKM counterterm matrix, which automatically satisfies all the following important properties: it is gauge independent, preserves unitarity, and leads to renormalized amplitudes that are nonsingular in the limit in which any two fermions become mass degenerate.
NASA Astrophysics Data System (ADS)
Lin, Yongping; Zhang, Xiyang; He, Youwu; Cai, Jianyong; Li, Hui
2018-02-01
The Jones matrix and the Mueller matrix are the main tools for studying polarization devices. The Mueller matrix can also be used in biological tissue research to obtain complete tissue properties, but the commercial optical coherence tomography system does not provide the relevant analysis functions. Based on LabVIEW, a near-real-time display method for the Mueller matrix image of biological tissue is developed, and it gives the corresponding phase retardance image simultaneously. A quarter-wave plate was placed at 45° in the sample arm. Experimental results of the two orthogonal channels show that the phase retardance based on the incident-light-vector fixed mode and the Mueller matrix based on the incident-light-vector dynamic mode can provide an effective analysis method for the existing system.
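For reference, the quarter-wave plate's effect on the Stokes vector can be worked out from the standard Mueller matrix of a linear retarder (one common sign convention; the paper's LabVIEW implementation is not shown in the abstract):

```python
import numpy as np

def linear_retarder(theta, delta):
    """Mueller matrix of a linear retarder with retardance delta and fast
    axis at angle theta (radians), in one common textbook convention."""
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [1, 0, 0, 0],
        [0, c2**2 + s2**2 * cd, s2 * c2 * (1 - cd), -s2 * sd],
        [0, s2 * c2 * (1 - cd), s2**2 + c2**2 * cd, c2 * sd],
        [0, s2 * sd, -c2 * sd, cd],
    ])

# Quarter-wave plate (delta = pi/2) at 45 degrees, as in the sample arm:
M = linear_retarder(np.pi / 4, np.pi / 2)
S_in = np.array([1, 1, 0, 0])  # horizontal linear polarization
S_out = M @ S_in               # converted to circular polarization
```

With this convention, horizontal linear light emerges circularly polarized (Stokes vector (1, 0, 0, 1)), which is the standard role of the 45° quarter-wave plate in a polarization-sensitive OCT sample arm.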
2016-05-11
AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang, Korea… Grant number FA2386…
Computation of the soft anomalous dimension matrix in coordinate space
NASA Astrophysics Data System (ADS)
Mitov, Alexander; Sterman, George; Sung, Ilmo
2010-08-01
We complete the coordinate space calculation of the three-parton correlation in the two-loop massive soft anomalous dimension matrix. The full answer agrees with the result found previously by a different approach. The coordinate space treatment of renormalized two-loop gluon exchange diagrams exhibits their color symmetries in a transparent fashion. We compare coordinate space calculations of the soft anomalous dimension matrix with massive and massless eikonal lines and examine its nonuniform limit at absolute threshold.
Role of polysaccharides in Pseudomonas aeruginosa biofilm development
Ryder, Cynthia; Byrd, Matthew; Wozniak, Daniel J.
2008-01-01
During the past decade, there has been a renewed interest in using P. aeruginosa as a model system for biofilm development and pathogenesis. Since the biofilm matrix represents a critical interface between the bacterium and the host or its environment, considerable effort has been expended to acquire a more complete understanding of the matrix composition. Here, we focus on recent developments regarding the roles of alginate, Psl, and Pel polysaccharides in the biofilm matrix. PMID:17981495
Mani, Merry; Jacob, Mathews; Kelley, Douglas; Magnotta, Vincent
2017-01-01
Purpose: To introduce a novel method for the recovery of multi-shot diffusion weighted (MS-DW) images from echo-planar imaging (EPI) acquisitions. Methods: Current EPI-based MS-DW reconstruction methods rely on the explicit estimation of the motion-induced phase maps to recover artifact-free images. In the new formulation, the k-space data of the artifact-free DWI is recovered using a structured low-rank matrix completion scheme, which does not require explicit estimation of the phase maps. The structured matrix is obtained as the lifting of the multi-shot data. The smooth phase-modulations between shots manifest as null-space vectors of this matrix, which implies that the structured matrix is low-rank. The missing entries of the structured matrix are filled in using a nuclear-norm minimization algorithm subject to the data-consistency. The formulation enables the natural introduction of smoothness regularization, thus enabling implicit motion-compensated recovery of the MS-DW data. Results: Our experiments on in-vivo data show effective removal of artifacts arising from inter-shot motion using the proposed method. The method is shown to achieve better reconstruction than the conventional phase-based methods. Conclusion: We demonstrate the utility of the proposed method to effectively recover artifact-free images from Cartesian fully/under-sampled and partial Fourier acquired data without the use of explicit phase estimates. PMID:27550212
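The nuclear-norm completion step can be sketched generically: soft-threshold the singular values (the proximal map of the nuclear norm) and re-impose the observed entries. This is a plain matrix-completion toy assuming a generic observation mask, not the paper's lifted multi-shot structure, which thresholds a structured (Hankel-like) matrix instead:

```python
import numpy as np

def svt_complete(Y, mask, tau=0.05, iters=500):
    """Fill in the unobserved entries of Y (mask marks observed ones) by
    alternating singular-value soft-thresholding -- the proximal operator
    of tau * (nuclear norm) -- with a hard data-consistency step."""
    X = np.where(mask, Y, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        X[mask] = Y[mask]                        # keep observed entries
    return X
```

On an exactly low-rank matrix with a few entries removed, the iteration fills the gaps to near their true values; in the paper the same mechanism implicitly absorbs the inter-shot phase variations.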
The Risk Assessment in the 21st Century (RISK21): Roadmap and Matrix
The RISK21 integrated evaluation strategy is a problem formulation-based exposure-driven risk assessment roadmap that takes advantage of existing information to graphically represent the intersection of exposure and toxicity data on a highly visual matrix. This paper describes i...
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.
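The alternating projection structure can be seen in miniature with two simple convex sets; PAPA's projections instead arise from proximity operators of the TV norm and the constraint, with an EM-preconditioner, so the following toy (hypothetical sets, not the paper's operators) only illustrates the iteration pattern:

```python
import numpy as np

def pocs(a, b, x0, iters=200):
    """Alternating projections onto two convex sets: the hyperplane
    {x : a.x = b} and the nonnegative orthant. The iterate converges to a
    point in the intersection when it is nonempty."""
    a = np.asarray(a, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + (b - a @ x) / (a @ a) * a  # project onto the hyperplane
        x = np.maximum(x, 0.0)             # project onto the orthant
    return x
```

Each step applies the nearest-point map of one set, exactly the role the two proximity-operator-derived projections play in the fixed-point characterization above.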
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!
NASA Astrophysics Data System (ADS)
Klappenecker, Andreas; Rötteler, Martin; Shparlinski, Igor E.; Winterhof, Arne
2005-08-01
We address the problem of constructing positive operator-valued measures (POVMs) in finite dimension n consisting of n² operators of rank one which have an inner product close to uniform. This is motivated by the related question of constructing symmetric informationally complete POVMs (SIC-POVMs) for which the inner products are perfectly uniform. However, SIC-POVMs are notoriously hard to construct and, despite some success of constructing them numerically, there is no analytic construction known. We present two constructions of approximate versions of SIC-POVMs, where a small deviation from uniformity of the inner products is allowed. The first construction is based on selecting vectors from a maximal collection of mutually unbiased bases and works whenever the dimension of the system is a prime power. The second construction is based on perturbing the matrix elements of a subset of mutually unbiased bases. Moreover, we construct vector systems in C^n which are almost orthogonal and which might turn out to be useful for quantum computation. Our constructions are based on results of analytic number theory.
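For d = 2, where an exact SIC-POVM is known, the perfectly uniform overlap tr(rho_i rho_j) = 1/(d+1) can be checked directly from the tetrahedral Bloch-vector construction (a standard textbook example, not one of the paper's approximate constructions):

```python
import numpy as np

# Four Bloch vectors at the vertices of a regular tetrahedron give the
# d = 2 SIC states: pairwise overlap tr(rho_i rho_j) = (1 + n_i.n_j)/2 = 1/3.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rhos = [(np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz) / 2 for n in verts]
# Uniform pairwise overlaps, and the POVM completeness relation sum_i rho_i/2 = I.
overlaps = [np.trace(rhos[i] @ rhos[j]).real for i in range(4) for j in range(i + 1, 4)]
```

The approximate constructions in the paper relax exactly this uniformity of overlaps, in dimensions where no analytic exact construction is known.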
Faleeva, T G; Ivanov, I N; Mishin, E S; Vnukova, N V; Kornienko, I V
2016-01-01
The objective of the present experimental molecular-genetic study of DNA contained in human fingerprints was to establish the relationship between the reference genetic profiles and the genotypes of the individuals leaving their fingerprints on a smooth metal object. The biological material for the purpose of the investigation was sampled at different time intervals. The samples were taken using scotch tape and used to obtain the complete genetic profile immediately after the fingerprints had been left, as well as within the next 24 hours and one week. It proved impossible to identify the complete genetic profile one month after the fingerprints had been left. Alleles not typical for the reference samples were identified within one week after swabbing the material from the metal surface. The results of the study can be explained by the decrease of the concentration of the initial DNA matrix in the samples due to its degradation in the course of time. It is concluded that a parallel genetic analysis is needed if reliable evidence of identity of the profiles of interest, or its absence, is to be obtained.
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
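The TT rank the method builds on is just the sequence of ranks of the balanced matricizations, which is easy to compute for a small dense tensor; this is a sketch of the definition only, not of SiLRTC-TT or TMac-TT themselves:

```python
import numpy as np

def tt_ranks(T, tol=1e-10):
    """TT-ranks of a dense tensor: the rank of each matricization T_[k]
    that groups the first k modes into rows and the remaining modes into
    columns -- the well-balanced unfoldings whose nuclear norms the
    TT-based completion formulations penalize."""
    dims = T.shape
    return [np.linalg.matrix_rank(T.reshape(int(np.prod(dims[:k])), -1), tol=tol)
            for k in range(1, len(dims))]
```

A rank-one tensor (an outer product of vectors) has all TT-ranks equal to 1, and adding a second independent outer product raises every unfolding rank to 2, matching the intuition that TT-rank measures hidden low-dimensional structure.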
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This has heavy-tailed regions and can be used to describe a matrix pattern of l×m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques and were used in the investigation of various physical phenomena.
Kavros, Steven J; Dutra, Timothy; Gonzalez-Cruz, Renier; Liden, Brock; Marcus, Belinda; McGuire, James; Nazario-Guirau, Luis
2014-08-01
The objective of this multicenter study was to prospectively evaluate the healing outcomes of chronic diabetic foot ulcers (DFUs) treated with PriMatrix (TEI Biosciences, Boston, Massachusetts), a fetal bovine acellular dermal matrix. Inclusion criteria required the subjects to have a chronic DFU that ranged in area from 1 to 20 cm² and failed to heal more than 30% during a 2-week screening period when treated with moist wound therapy. For qualifying subjects, PriMatrix was secured into a clean, sharply debrided wound; dressings were applied to maintain a moist wound environment, and the DFU was pressure off-loaded. Wound area measurements were taken weekly for up to 12 weeks, and PriMatrix was reapplied at the discretion of the treating physician. A total of 55 subjects were enrolled at 9 US centers with 46 subjects progressing to study completion. Ulcers had been in existence for an average of 286 days, and initial mean ulcer area was 4.34 cm². Of the subjects completing the study, 76% healed by 12 weeks with a mean time to healing of 53.1 ± 21.9 days. The mean number of applications for these healed wounds was 2.0 ± 1.4, with 59.1% healing with a single application of PriMatrix and 22.9% healing with 2 applications. For subjects not healed by 12 weeks, the average wound area reduction was 71.4%. The results of this multicenter prospective study demonstrate that the use of PriMatrix integrated with standard-of-care therapy is a successful treatment regimen to heal DFUs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedegård, Erik Donovan, E-mail: erik.hedegard@phys.chem.ethz.ch; Knecht, Stefan; Reiher, Markus, E-mail: markus.reiher@phys.chem.ethz.ch
2015-06-14
We present a new hybrid multiconfigurational method based on the concept of range-separation that combines the density matrix renormalization group approach with density functional theory. This new method is designed for the simultaneous description of dynamical and static electron-correlation effects in multiconfigurational electronic structure problems.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
NASA Astrophysics Data System (ADS)
Watanabe, Norihiro; Kolditz, Olaf
2015-07-01
This work reports numerical stability conditions in two-dimensional solute transport simulations including discrete fractures surrounded by an impermeable rock matrix. We use an advective-dispersive problem described in Tang et al. (1981) and examine the stability of the Crank-Nicolson Galerkin finite element method (CN-GFEM). The stability conditions are analyzed in terms of the spatial discretization length perpendicular to the fracture, the flow velocity, the diffusion coefficient, the matrix porosity, the fracture aperture, and the fracture longitudinal dispersivity. In addition, we verify the applicability of the recently developed finite element method-flux corrected transport (FEM-FCT) method of Kuzmin to suppress oscillations in the hybrid system, with a comparison to the commonly utilized streamline upwind/Petrov-Galerkin (SUPG) method. The major findings of this study are (1) the mesh von Neumann number (Fo) ≥ 0.373 must be satisfied to avoid undershooting in the matrix, (2) in addition to an upper bound, the Courant number also has a lower bound in the fracture in cases of low dispersivity, and (3) the FEM-FCT method can effectively suppress the oscillations in both the fracture and the matrix. The results imply that, in cases of low dispersivity, prerefinement of a numerical mesh is not sufficient to avoid the instability in the hybrid system if a problem involves evolutionary flow fields and dynamic material parameters. Applying the FEM-FCT method to such problems is recommended if negative concentrations cannot be tolerated and computing time is not a strong issue.
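Finding (1) is mechanical to check; a minimal sketch, assuming the standard definition Fo = D Δt / Δy² with Δy the matrix discretization length perpendicular to the fracture (the function names and example values here are illustrative, not from the paper):

```python
def mesh_fourier_number(diffusion_coeff, dt, dy):
    """Mesh von Neumann (Fourier) number Fo = D * dt / dy**2."""
    return diffusion_coeff * dt / dy ** 2

def matrix_undershoot_safe(diffusion_coeff, dt, dy, threshold=0.373):
    """Check the reported condition Fo >= 0.373 for avoiding
    undershooting in the rock matrix (threshold from the abstract above)."""
    return mesh_fourier_number(diffusion_coeff, dt, dy) >= threshold

# Example: D = 1e-9 m^2/s, 1000 s time step, 1 mm cell next to the fracture
fo = mesh_fourier_number(1e-9, 1000.0, 1e-3)
```

The condition couples the time step to the square of the near-fracture mesh size, which is why prerefinement alone cannot guarantee stability when the flow field and parameters evolve.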
Comparison of eigensolvers for symmetric band matrices.
Moldaschl, Michael; Gansterer, Wilfried N
2014-09-15
We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
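For reference, a tridiagonalization-based banded symmetric eigensolver of the kind wrapped by Lapack is exposed in SciPy; a small sketch comparing it against a dense symmetric solve (an illustrative setup, not the benchmark configuration from the paper):

```python
import numpy as np
from scipy.linalg import eig_banded, eigh

rng = np.random.default_rng(0)
n, bw = 50, 3                      # matrix size and number of superdiagonals

# Upper band storage expected by eig_banded: band[bw + i - j, j] = A[i, j]
band = np.zeros((bw + 1, n))
for d in range(bw + 1):            # d = superdiagonal offset
    band[bw - d, d:] = rng.standard_normal(n - d)

# Expand to a dense symmetric matrix for comparison
A = np.zeros((n, n))
for d in range(bw + 1):
    A += np.diag(band[bw - d, d:], d)
    if d:
        A += np.diag(band[bw - d, d:], -d)

w_band = eig_banded(band, eigvals_only=True)   # banded solver, ascending order
w_dense = eigh(A, eigvals_only=True)           # dense symmetric solver
```

Both routines return the same spectrum; the banded path avoids storing and factorizing the full n×n matrix, which is the regime the comparison above is concerned with.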
25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix
Code of Federal Regulations, 2010 CFR
2010-04-01
...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate route 1 Severe X Moderate Minimal No accidents. Years since last IRR construction project completed... elements Addresses 1 element. 1 National Highway Traffic Safety Board standards. 2 Total funds requested...
Generating Multiple Imputations for Matrix Sampling Data Analyzed with Item Response Models.
ERIC Educational Resources Information Center
Thomas, Neal; Gan, Nianci
1997-01-01
Describes and assesses missing data methods currently used to analyze data from matrix sampling designs implemented by the National Assessment of Educational Progress. Several improved methods are developed, and these models are evaluated using an EM algorithm to obtain maximum likelihood estimates followed by multiple imputation of complete data…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-25
... individual. The textile fragment, beads, nails, and metal fragments were enveloped inside the soil matrix... identified. The one associated funerary object is a soil matrix, which includes within it a textile fragment, trade beads, nail fragments, and metal fragments. In 2008, staff at the Madeline Island Museum located a...
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
NASA Technical Reports Server (NTRS)
Winget, J. M.; Hughes, T. J. R.
1985-01-01
The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of two types of nonlinearities considered is related to a material temperature dependence which is frequently needed to accurately model behavior over the range of temperature of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
A model for predicting high-temperature fatigue failure of a W/Cu composite
NASA Technical Reports Server (NTRS)
Verrilli, M. J.; Kim, Y.-S.; Gabb, T. P.
1991-01-01
The material studied, a tungsten-fiber-reinforced, copper-matrix composite, is a candidate material for rocket nozzle liner applications. It was shown that at high temperatures, fatigue cracks initiate and propagate inside the copper matrix through a process of initiation, growth, and coalescence of grain boundary cavities. The ductile tungsten fibers neck and rupture locally after the surrounding matrix fails, and complete failure of the composite then ensues. A simple fatigue life prediction model is presented for the tungsten/copper composite system.
Definition of a parametric form of nonsingular Mueller matrices.
Devlaminck, Vincent; Terrier, Patrick
2008-11-01
The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom, and explain why a subset of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, is a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.
NASA Astrophysics Data System (ADS)
Burtyka, Filipp
2018-03-01
The paper considers, from a practical point of view, the problem of finding solvents for arbitrary unilateral polynomial matrix equations with second-order matrices over prime finite fields: we implement a solver for this problem. The solver's algorithm has two steps: the first finds solvents having Jordan normal form (JNF); the second finds solvents among the remaining matrices. The first step reduces to finding roots of ordinary polynomials over finite fields; the second is essentially an exhaustive search. The first step's algorithms make essential use of the theory of polynomial matrices. We estimate the practical duration of computations using our software implementation (showing, for example, that one cannot construct a unilateral matrix polynomial over a finite field having an arbitrary predefined number of solvents) and answer some questions of theoretical interest.
Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.
Hu, En-Liang; Kwok, James T
2015-09-01
Nonparametric kernel learning (NPKL) is a flexible approach to learning the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with V^T V, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexity. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.
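The low-rank reparametrization at the heart of this approach, trading the SDP cone constraint for K = V^T V, can be sketched as follows (an illustrative sketch; the dimensions are arbitrary and this omits the BCD column updates themselves):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 100, 5                      # number of data points, chosen low rank

# Parametrize the kernel matrix as K = V^T V with V of shape (r, n).
# Positive semidefiniteness then holds by construction, so the expensive
# PSD cone constraint of the SDP formulation can be dropped, and the
# free variable optimized by BCD is V (one column per data point).
V = rng.standard_normal((r, n))
K = V.T @ V

eigvals = np.linalg.eigvalsh(K)    # all eigenvalues are nonnegative
```

Each BCD step in the paper's algorithm updates one column of V while holding the others fixed; the sketch above only demonstrates that any such iterate yields a valid (PSD, rank ≤ r) kernel matrix.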
Interstellar problems and matrix solutions
NASA Technical Reports Server (NTRS)
Allamandola, Louis J.
1987-01-01
The application of the matrix isolation technique to interstellar problems is described. Following a brief discussion of the interstellar medium (ISM), three areas are reviewed in which matrix experiments are particularly well suited to contribute the information that is sorely needed to further our understanding of the ISM. The first involves the measurement of the spectroscopic properties of reactive species. The second is the determination of reaction rates and the elucidation of reaction pathways involving atoms, radicals, and ions which are likely to interact on grain surfaces and in grain mantles. The third entails the determination of the spectroscopic, photochemical, and photophysical properties of interstellar and cometary ice analogs. Significant, but limited, progress has been made in these three areas, and a tremendous amount of work is required to fully address the variety of unique chemical and spectroscopic questions posed by the astronomical observations.
Calculation of normal modes of the closed waveguides in general vector case
NASA Astrophysics Data System (ADS)
Malykh, M. D.; Sevastianov, L. A.; Tiutiunnik, A. A.
2018-04-01
The article is devoted to the calculation of normal modes of closed waveguides with an arbitrary filling ɛ, μ in the computer algebra system Sage. Maxwell's equations in the cylinder are reduced to a system of two coupled Helmholtz equations; the notion of a weak solution of this system is given, and the system is then investigated as a system of ordinary differential equations. The normal modes of this system are eigenvectors of a matrix pencil. We suggest calculating the matrix elements approximately and truncating the matrix in the usual way, but then solving the truncated eigenvalue problem exactly in the field of algebraic numbers. This approach allows us to preserve the symmetry of the initial problem and, in particular, the multiplicity of the eigenvalues. Some results of these calculations are presented in this work.
Santos, Antonio; Goumenos, George; Pascual, Andrés; Nart, Jose
2011-02-01
Acellular dermal matrix grafts have become a good alternative to autogenous soft tissue grafts in root coverage. Until now, the literature has reported only short- or medium-term data regarding the stability of the gingival margin after the use of acellular dermal matrix in root coverage. The aim of this article is to describe a case report, with 10 years of follow-up, of creeping attachment that developed buccally on a moderate recession of a maxillary canine with an old composite restoration after treatment with an acellular dermal matrix. Long-term creeping attachment and complete root coverage on a restored tooth treated with acellular dermal matrix has not been previously reported in the dental literature.
Core filaments of the nuclear matrix
1990-01-01
The nuclear matrix is concealed by a much larger mass of chromatin, which can be removed selectively by digesting nuclei with DNase I followed by elution of chromatin with 0.25 M ammonium sulfate. This mild procedure removes chromatin almost completely and preserves nuclear matrix morphology. The complete nuclear matrix consists of a nuclear lamina with an interior matrix composed of thick, polymorphic fibers and large masses that resemble remnant nucleoli. Further extraction of the nuclear matrices of HeLa or MCF-7 cells with 2 M sodium chloride uncovered a network of core filaments. A few dark masses remained enmeshed in the filament network and may be remnants of the nuclear matrix thick fibers and nucleoli. The highly branched core filaments had diameters of 9 and 13 nm measured relative to the intermediate filaments. They may serve as the core structure around which the matrix is constructed. The core filaments retained 70% of nuclear RNA. This RNA consisted both of ribosomal RNA precursors and of very high molecular weight hnRNA with a modal size of 20 kb. Treatment with RNase A removed the core filaments. When 2 M sodium chloride was used directly to remove chromatin after DNase I digestion without a preceding 0.25 M ammonium sulfate extraction, the core filaments were not revealed. Instead, the nuclear interior was filled with amorphous masses that may cover the filaments. This reflected a requirement for a stepwise increase in ionic strength because gradual addition of sodium chloride to a final concentration of 2 M without an 0.25 M ammonium sulfate extraction uncovered core filaments. PMID:2307700
A major protein component of the Bacillus subtilis biofilm matrix.
Branda, Steven S; Chu, Frances; Kearns, Daniel B; Losick, Richard; Kolter, Roberto
2006-02-01
Microbes construct structurally complex multicellular communities (biofilms) through production of an extracellular matrix. Here we present evidence from scanning electron microscopy showing that a wild strain of the Gram-positive bacterium Bacillus subtilis builds such a matrix. Genetic, biochemical and cytological evidence indicates that the matrix is composed predominantly of a protein component, TasA, and an exopolysaccharide component. The absence of TasA or the exopolysaccharide resulted in a residual matrix, while the absence of both components led to complete failure to form complex multicellular communities. Extracellular complementation experiments revealed that a functional matrix can be assembled even when TasA and the exopolysaccharide are produced by different cells, reinforcing the view that the components contribute to matrix formation in an extracellular manner. Having defined the major components of the biofilm matrix and the control of their synthesis by the global regulator SinR, we present a working model for how B. subtilis switches between nomadic and sedentary lifestyles.
NASA Astrophysics Data System (ADS)
Yeh, Gour-Tsyh (George); Siegel, Malcolm D.; Li, Ming-Hsu
2001-02-01
The couplings among chemical reaction rates, advective and diffusive transport in fractured media or soils, and changes in hydraulic properties due to precipitation and dissolution within fractures and in rock matrix are important for both nuclear waste disposal and remediation of contaminated sites. This paper describes the development and application of LEHGC2.0, a mechanistically based numerical model for simulation of coupled fluid flow and reactive chemical transport, including both fast and slow reactions in variably saturated media. Theoretical bases and numerical implementations are summarized, and two example problems are demonstrated. The first example deals with the effect of precipitation/dissolution on fluid flow and matrix diffusion in a two-dimensional fractured media. Because of the precipitation and decreased diffusion of solute from the fracture into the matrix, retardation in the fractured medium is not as large as the case wherein interactions between chemical reactions and transport are not considered. The second example focuses on a complicated but realistic advective-dispersive-reactive transport problem. This example exemplifies the need for innovative numerical algorithms to solve problems involving stiff geochemical reactions.
NASA Astrophysics Data System (ADS)
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is to present a proposal to solve, via UDU^T factorisation, the convergence and numerical stability problems that are related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system, as well as the algebra of the Kronecker product, assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistence and polarisation when associated with reinforcement learning methods. The methodology used contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.
ORACLS: A system for linear-quadratic-Gaussian control law design
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1978-01-01
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising
Guo, Muran; Chen, Tao; Wang, Ben
2017-01-01
Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886
Extraction and quantitative analysis of iodine in solid and solution matrixes.
Brown, Christopher F; Geiszler, Keith N; Vickerman, Tanya S
2005-11-01
129I is a contaminant of interest in the vadose zone and groundwater at numerous federal and privately owned facilities. Several techniques have been utilized to extract iodine from solid matrixes; however, all of them rely on two fundamental approaches: liquid extraction or chemical/heat-facilitated volatilization. While these methods are typically chosen for their ease of implementation, they do not totally dissolve the solid. We defined a method that produces complete solid dissolution and conducted laboratory tests to assess its efficacy to extract iodine from solid matrixes. Testing consisted of potassium nitrate/potassium hydroxide fusion of the sample, followed by sample dissolution in a mixture of sulfuric acid and sodium bisulfite. The fusion extraction method resulted in complete sample dissolution of all solid matrixes tested. Quantitative analysis of 127I and 129I via inductively coupled plasma mass spectrometry showed better than +/-10% accuracy for certified reference standards, with the linear operating range extending more than 3 orders of magnitude (0.005-5 microg/L). Extraction and analysis of four replicates of standard reference material containing 5 microg/g 127I resulted in an average recovery of 98% with a relative deviation of 6%. This simple and cost-effective technique can be applied to solid samples of varying matrixes with little or no adaptation.
NASA Astrophysics Data System (ADS)
Rajaram, H.; Arshadi, M.
2016-12-01
In-situ chemical oxidation (ISCO) is an effective strategy for remediation of DNAPL contamination in fractured rock. During ISCO, an oxidant (e.g. permanganate) is typically injected through fractures and is consumed by bimolecular reactions with DNAPLs such as TCE and natural organic matter in the fracture and the adjacent rock matrix. Under these conditions, moving reaction fronts form and propagate along the fracture and into the rock matrix. The propagation of these reaction fronts is strongly influenced by the heterogeneity/discontinuity across the fracture-matrix interface (advective transport dominates in the fractures, while diffusive transport dominates in the rock matrix). We present analytical solutions for the concentrations of the oxidant, TCE and natural organic matter; and the propagation of the reaction fronts in a fracture-matrix system. Our approximate analytical solutions assume advection and reaction dominate over diffusion/dispersion in the fracture and neglect the latter. Diffusion and reaction with both TCE and immobile natural organic matter in the rock matrix are considered. The behavior of the reaction-diffusion equations in the rock matrix is posed as a Stefan problem where the diffusing oxidant reacts with both diffusing (TCE) and immobile (natural organic matter) reductants. Our analytical solutions establish that the reaction fronts propagate diffusively (i.e. as the square root of time) in both the matrix and the fracture. Our analytical solutions agree very well with numerical simulations for the case of uniform advection in the fracture. We also present extensions of our analytical solutions to non-uniform flows in the fracture by invoking a travel-time transformation. The non-uniform flow solutions are relevant to field applications of ISCO. The approximate analytical solutions are relevant to a broad class of reactive transport problems in fracture-matrix systems where moving reaction fronts occur.
Understanding the Evolution and Stability of the G-Matrix
Arnold, Stevan J.; Bürger, Reinhard; Hohenlohe, Paul A.; Ajie, Beverley C.; Jones, Adam G.
2011-01-01
The G-matrix summarizes the inheritance of multiple, phenotypic traits. The stability and evolution of this matrix are important issues because they affect our ability to predict how the phenotypic traits evolve by selection and drift. Despite the centrality of these issues, comparative, experimental, and analytical approaches to understanding the stability and evolution of the G-matrix have met with limited success. Nevertheless, empirical studies often find that certain structural features of the matrix are remarkably constant, suggesting that persistent selection regimes or other factors promote stability. On the theoretical side, no one has been able to derive equations that would relate stability of the G-matrix to selection regimes, population size, migration, or to the details of genetic architecture. Recent simulation studies of evolving G-matrices offer solutions to some of these problems, as well as a deeper, synthetic understanding of both the G-matrix and adaptive radiations. PMID:18973631
Optimized Projection Matrix for Compressive Sensing
NASA Astrophysics Data System (ADS)
Xu, Jianping; Pi, Yiming; Cao, Zongjie
2010-12-01
Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have generally assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design because an ETF has minimum coherence. It is impossible to solve the problem exactly because of its complexity. Therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
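The coherence objective can be sketched as follows; the shrink-and-project loop below is in the spirit of ETF-based alternating minimization, not necessarily the paper's exact algorithm, and the sizes and shrinkage threshold are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 60                      # measurements, signal dimension (illustrative)
Phi = rng.standard_normal((m, n))  # random projection matrix (the usual baseline)
Psi = np.eye(n)                    # sparsifying basis (identity for simplicity)

def mutual_coherence(Phi, Psi):
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)          # normalize dictionary columns
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()                             # largest off-diagonal Gram entry

mu0 = mutual_coherence(Phi, Psi)
t = 0.2                                        # shrinkage threshold (assumed)
for _ in range(50):
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)
    G = np.clip(D.T @ D, -t, t)                # push Gram toward an equiangular profile
    np.fill_diagonal(G, 1.0)
    w, V = np.linalg.eigh(G)                   # rank-m square root of the shrunk Gram
    Dnew = (V[:, -m:] * np.sqrt(np.maximum(w[-m:], 0.0))).T
    Phi = Dnew @ np.linalg.pinv(Psi)           # fit the projection matrix back
mu1 = mutual_coherence(Phi, Psi)
# mu1 should be noticeably below the random-matrix coherence mu0
```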
Benchmark matrix and guide: Part III.
1992-01-01
The final article in the "Benchmark Matrix and Guide" series developed by Headquarters Air Force Logistics Command completes the discussion of the last three categories that are essential ingredients of a successful total quality management (TQM) program. Detailed behavioral objectives are listed in the areas of recognition, process improvement, and customer focus. These vertical categories are meant to be applied to the levels of the matrix that define the progressive stages of TQM: business as usual, initiation, implementation, expansion, and integration. By charting the horizontal progress level and the vertical TQM category, the quality management professional can evaluate the current state of TQM in any given organization. As each category is completed, new goals can be defined in order to advance to a higher level. The benchmarking process is integral to quality improvement efforts because it focuses on the highest possible standards to evaluate quality programs.
A state interaction spin-orbit coupling density matrix renormalization group method
NASA Astrophysics Data System (ADS)
Sayfutyarova, Elvira R.; Chan, Garnet Kin-Lic
2016-06-01
We describe a state interaction spin-orbit (SISO) coupling method using density matrix renormalization group (DMRG) wavefunctions and the spin-orbit mean-field (SOMF) operator. We implement our DMRG-SISO scheme using a spin-adapted algorithm that computes transition density matrices between arbitrary matrix product states. To demonstrate the potential of the DMRG-SISO scheme we present accurate benchmark calculations for the zero-field splitting of the copper and gold atoms, comparing to earlier complete active space self-consistent-field and second-order complete active space perturbation theory results in the same basis. We also compute the effects of spin-orbit coupling on the spin-ladder of the iron-sulfur dimer complex [Fe2S2(SCH3)4]3-, determining the splitting of the lowest quartet and sextet states. We find that the magnitude of the zero-field splitting for the higher quartet and sextet states approaches a significant fraction of the Heisenberg exchange parameter.
Hunting for new physics with unitarity boomerangs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frampton, Paul H.; Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa, Chiba 277-8568; He Xiaogang
2010-07-01
The standard model of particle theory will be rigorously tested by upcoming precision data on flavor mixing. Although the unitarity triangles (UTs) carry information about the Kobayashi-Maskawa (KM) quark mixing matrix, each explicitly contains just three parameters, one short of the number needed to completely fix the KM matrix. We have recently shown that the unitarity boomerangs (UBs), formed from two UTs with a common inner angle, can completely determine the KM matrix and therefore better represent quark mixing. Of the 18 possible UBs, only one does not involve very small angles and is thus the ideal one for practical use. Although the UBs have different areas, there is an invariant quantity, common to all UBs, which is equal to a quarter of the square of the Jarlskog parameter J. Hunting for new physics with a unitarity boomerang can reveal more information than using UTs alone.
Karr, Jeffrey C
2011-03-01
The goal of this study was to review clinical experience in treating diabetic and venous stasis wounds with Apligraf or PriMatrix. A total of 40 diabetic foot ulcers and 28 venous stasis ulcers were treated with either PriMatrix or Apligraf, and the two treatments were compared on number of days open and number of days to complete healing. Although both treatments were highly effective, the results for 68 ulcers in 48 patients demonstrated that patients treated with PriMatrix healed faster than patients treated with Apligraf despite larger wound sizes.
Ozaki, Yasunori; Aoki, Ryosuke; Kimura, Toshitaka; Takashima, Youichi; Yamada, Tomohiro
2016-08-01
The goal of this study is to propose a data-driven method to characterize the muscular activity of complex actions in sports such as golf from a large number of EMG channels. Two problems arise in many-channel measurement. The first is that checking the data from many channels takes a long time because of combinatorial explosion. The second is that it is difficult to understand muscle activities related to complex actions. To solve these problems, we propose an analysis method for multiple EMG channels using Non-negative Matrix Factorization and apply the method to driver swings in golf. We measured 26 EMG channels from 4 professional golf coaches. The results show that the proposed method detected 9 muscle synergies and that the activation of each synergy was mostly fitted by a sigmoid curve (R2=0.85).
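A sketch of extracting synergies from a nonnegative channel-by-time matrix with Lee–Seung multiplicative-update NMF; the data here are synthetic (the paper's EMG recordings are not reproduced), with 3 planted synergies instead of 9:

```python
import numpy as np

rng = np.random.default_rng(1)
T, channels, k = 200, 26, 3                    # samples, EMG channels, synergies
# Synthetic rectified-EMG envelopes built from k ground-truth synergies
W_true = rng.random((channels, k))             # channel weights per synergy
H_true = np.abs(rng.standard_normal((k, T)))   # nonnegative activation profiles
V = W_true @ H_true + 1e-3                     # nonnegative data matrix

# Lee–Seung multiplicative updates for V ≈ W H
W = rng.random((channels, k)) + 0.1
H = rng.random((k, T)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
# Rows of H are the recovered synergy activations; columns of W are channel weights
```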
Sparse Covariance Matrix Estimation by DCA-Based Algorithms.
Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham
2017-11-01
This letter proposes a novel approach using the [Formula: see text]-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the [Formula: see text] term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the [Formula: see text]-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and the corresponding DCA schemes are developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment on simulated and real data sets studies the performance of the proposed algorithms. Numerical results showed their efficiency and their superiority compared with seven state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Chakroun, Mahmoud; Gogu, Grigore; Pacaud, Thomas; Thirion, François
2014-09-01
This study proposes an eco-innovative design process taking into consideration quality and environmental aspects in prioritizing and solving technical engineering problems. This approach provides a synergy between the Life Cycle Assessment (LCA), the non-quality matrix, the Theory of Inventive Problem Solving (TRIZ), morphological analysis and the Analytical Hierarchy Process (AHP). In the sequence of these tools, LCA assesses the environmental impacts generated by the system. Then, for a better consideration of environmental aspects, a new tool is developed, the non-quality matrix, which defines the problem to be solved first from an environmental point of view. The TRIZ method allows the generation of new concepts and contradiction resolution. Then, the morphological analysis offers the possibility of extending the search space of solutions in a design problem in a systematic way. Finally, the AHP identifies the promising solution(s) by providing a clear logic for the choice made. Their usefulness has been demonstrated through their application to a case study involving a centrifugal spreader with spinning discs.
Xia, J.; Miller, R.D.; Xu, Y.
2008-01-01
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted and we find that there are advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. Discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that selected data with the data-resolution matrix can provide better inversion results and to explain with the data-resolution matrix why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data. © Birkhäuser 2008.
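The key point that a data-resolution matrix depends only on the data kernel and the regularization, not on the observed data, can be illustrated with a generic damped least-squares kernel (a stand-in for the Rayleigh-wave sensitivity kernel; all sizes and the damping value are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m_data, n_model = 30, 10
G = rng.standard_normal((m_data, n_model))    # data kernel (linearized forward operator)
eps = 0.1                                     # damping: the a priori information
# Generalized inverse of the damped least-squares problem
G_dagger = np.linalg.solve(G.T @ G + eps * np.eye(n_model), G.T)
N = G @ G_dagger                              # data-resolution matrix: d_pred = N d_obs
importance = np.diag(N)                       # near 1 => that datum is well predicted
# trace(N) counts the effectively resolved degrees of freedom (< n_model)
```

A datum with a diagonal entry near 1 would be well predicted by the inversion, which is exactly the criterion the abstract describes for selecting data and designing surveys.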
A radial basis function Galerkin method for inhomogeneous nonlocal diffusion
Lehoucq, Richard B.; Rowe, Stephen T.
2016-02-01
We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. As a result, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
Brown, James; Carrington, Tucker
2015-07-28
Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
Calculation of transmission probability by solving an eigenvalue problem
NASA Astrophysics Data System (ADS)
Bubin, Sergiy; Varga, Kálmán
2010-11-01
The electron transmission probability in nanodevices is calculated by solving an eigenvalue problem. The eigenvalues are the transmission probabilities and the number of nonzero eigenvalues is equal to the number of open quantum transmission eigenchannels. The number of open eigenchannels is typically a few dozen at most, thus the computational cost amounts to the calculation of a few outer eigenvalues of a complex Hermitian matrix (the transmission matrix). The method is implemented on a real space grid basis providing an alternative to localized atomic orbital based quantum transport calculations. Numerical examples are presented to illustrate the efficiency of the method.
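Computing a few outer eigenvalues of a complex Hermitian matrix, as the method requires, is cheap with an iterative eigensolver; the matrix below is a random Hermitian stand-in for the transmission matrix, rescaled so its eigenvalues behave like probabilities in [0, 1]:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(3)
n = 400
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = A @ A.conj().T                    # complex Hermitian, positive semidefinite
T /= np.linalg.eigvalsh(T).max()      # rescale so eigenvalues lie in [0, 1]

# Only a few "open channels" matter: ask the iterative solver for the
# largest handful of eigenvalues instead of the full spectrum
vals = eigsh(T, k=6, which='LA', return_eigenvectors=False)
```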
Application of a Modal Approach in Solving the Static Stability Problem for Electric Power Systems
NASA Astrophysics Data System (ADS)
Sharov, J. V.
2017-12-01
Application of a modal approach in solving the static stability problem for power systems is examined. It is proposed to use the matrix exponent norm as a generalized transition function of the power system's disturbed motion. Based on the concept of a stability radius and the pseudospectrum of the Jacobian matrix, the necessary and sufficient conditions for the existence of static margins were determined. The capabilities and advantages of the modal approach in designing centralized or distributed control are demonstrated, along with the prospects for the analysis of nonlinear oscillations and for ensuring dynamic stability.
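A toy illustration of the matrix exponent norm as a transition function; the 2×2 Jacobian is an illustrative damped oscillatory mode, not a real power-system model:

```python
import numpy as np
from scipy.linalg import expm

# Small-disturbance model dx/dt = A x around an operating point
A = np.array([[-0.5,  2.0],
              [-2.0, -0.5]])          # damped oscillatory mode, eigenvalues -0.5 ± 2j
# ||e^{At}|| bounds the worst-case disturbance amplification at time t
norms = [np.linalg.norm(expm(A * t), 2) for t in (0.0, 1.0, 5.0)]
# Eigenvalues in the left half-plane => the norm decays toward zero
```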
The use of complete sets of orthogonal operators in spectroscopic studies
NASA Astrophysics Data System (ADS)
Raassen, A. J. J.; Uylings, P. H. M.
1996-01-01
Complete sets of orthogonal operators are used to calculate eigenvalues and eigenvector compositions in complex spectra. The latter are used to transform the LS-transition matrix into realistic intermediate coupling transition probabilities. Calculated transition probabilities for some close lying levels in Ni V and Fe III illustrate the power of the complete orthogonal operator approach.
Dynamical analysis of continuous higher-order hopfield networks for combinatorial optimization.
Atencia, Miguel; Joya, Gonzalo; Sandoval, Francisco
2005-08-01
In this letter, the ability of higher-order Hopfield networks to solve combinatorial optimization problems is assessed by means of a rigorous analysis of their properties. The stability of the continuous network is almost completely clarified: (1) hyperbolic interior equilibria, which are unfeasible, are unstable; (2) the state cannot escape from the unitary hypercube; and (3) a Lyapunov function exists. Numerical methods used to implement the continuous equation on a computer should be designed with the aim of preserving these favorable properties. The case of nonhyperbolic fixed points, which occur when the Hessian of the target function is the null matrix, requires further study. We prove that these nonhyperbolic interior fixed points are unstable in networks with three neurons and order two. The conjecture that interior equilibria are unstable in the general case is left open.
Algebraic geometry and Bethe ansatz. Part I. The quotient ring for BAE
NASA Astrophysics Data System (ADS)
Jiang, Yunfeng; Zhang, Yang
2018-03-01
In this paper and upcoming ones, we initiate a systematic study of Bethe ansatz equations for integrable models by modern computational algebraic geometry. We show that algebraic geometry provides a natural mathematical language and powerful tools for understanding the structure of the solution space of Bethe ansatz equations. In particular, we find novel efficient methods to count the number of solutions of Bethe ansatz equations based on Gröbner bases and the quotient ring. We also develop an analytical approach based on the companion matrix to perform the sum of on-shell quantities over all physical solutions without solving Bethe ansatz equations explicitly. To demonstrate the power of our method, we revisit the completeness problem of the Bethe ansatz of the Heisenberg spin chain, and calculate the sum rules of OPE coefficients in planar N=4 super-Yang-Mills theory.
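The companion-matrix trick for summing quantities over all solutions without solving the equations can be seen in miniature for a single polynomial: the sum of q(x_i) over all roots x_i of p equals tr q(C), where C is the companion matrix of p (the polynomial here is an illustrative example, not a Bethe ansatz system):

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), roots 1, 2, 3
coeffs = [1.0, -6.0, 11.0, -6.0]                  # leading coefficient first
n = len(coeffs) - 1
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)                        # shift structure
C[:, -1] = -np.array(coeffs[:0:-1]) / coeffs[0]   # last column from p's coefficients

sum_roots = np.trace(C)        # sum of roots:        1 + 2 + 3  = 6
sum_sq = np.trace(C @ C)       # sum of squared roots: 1 + 4 + 9 = 14
# No root-finding was performed: only traces of matrix powers
```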
Biomimetic Mineralization on a Macroporous Cellulose-Based Matrix for Bone Regeneration
Petrauskaite, Odeta; Gomes, Pedro de Sousa; Fernandes, Maria Helena; Juodzbalys, Gintaras; Maminskas, Julius
2013-01-01
The aim of this study is to investigate the biomimetic mineralization on a cellulose-based porous matrix with an improved biological profile. The cellulose matrix was precalcified using three methods: (i) cellulose samples were treated with a solution of calcium chloride and diammonium hydrogen phosphate; (ii) the carboxymethylated cellulose matrix was stored in a saturated calcium hydroxide solution; (iii) the cellulose matrix was mixed with a calcium silicate solution in order to introduce silanol groups and to combine them with calcium ions. All the methods resulted in a mineralization of the cellulose surfaces after immersion in a simulated body fluid solution. Over a period of 14 days, the matrix was completely covered with hydroxyapatite crystals. Hydroxyapatite formation depended on functional groups on the matrix surface as well as on the precalcification method. The largest hydroxyapatite crystals were obtained on the carboxymethylated cellulose matrix treated with calcium hydroxide solution. The porous cellulose matrix was not cytotoxic, allowing the adhesion and proliferation of human osteoblastic cells. Comparatively, improved cell adhesion and growth rate were achieved on the mineralized cellulose matrices. PMID:24163816
A Screening Matrix for an Initial Line of Inquiry
ERIC Educational Resources Information Center
Nordness, Philip D.; Swain, Kristine D.; Haverkost, Ann
2012-01-01
The Screening for Understanding: Initial Line of Inquiry was designed to be used in conjunction with the child study team planning process for dealing with continuous problem behaviors prior to conducting a formal functional behavioral assessment. To conduct the initial line of inquiry a one-page reproducible screening matrix was used during child…
Aeroelastic analysis of a troposkien-type wind turbine blade
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1981-01-01
The linear aeroelastic equations for one curved blade of a vertical axis wind turbine in state vector form are presented. The method is based on a simple integrating matrix scheme together with the transfer matrix idea. The method is proposed as a convenient way of solving the associated eigenvalue problem for general support conditions.
Sparse matrix methods based on orthogonality and conjugacy
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1973-01-01
A matrix having a high percentage of zero elements is called sparse. In the solution of systems of linear equations or linear least-squares problems involving large sparse matrices, significant savings in computer cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
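A minimal example of exploiting sparsity with the conjugate gradient algorithm on a symmetric positive-definite system (a 1D Laplacian stand-in; the abstract's own problems and sizes are not reproduced):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 200
# Tridiagonal 1D Laplacian: only ~3 nonzeros per row are stored and used
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)
# Conjugate gradient touches A only through sparse matrix-vector products
x, info = cg(A, b)
residual = np.linalg.norm(A @ x - b)
```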
Factor Covariance Analysis in Subgroups.
ERIC Educational Resources Information Center
Pennell, Roger
The problem considered is that of an investigator sampling two or more correlation matrices and desiring to fit a model where a factor pattern matrix is assumed to be identical across samples and we need to estimate only the factor covariance matrix and the unique variance for each sample. A flexible, least squares solution is worked out and…
ERIC Educational Resources Information Center
Nash, David A.; And Others
1989-01-01
The application of a matrix organization to the dental school by superimposing the four dimensions of the college's mission (patient care, education, research, and faculty development) is described, the problems and advantages of the reorganization are examined, and changes facilitated by it are discussed. (MSE)
Multipole expansions and Fock symmetry of the hydrogen atom
NASA Astrophysics Data System (ADS)
Meremianin, A. V.; Rost, J.-M.
2006-10-01
The main difficulty in utilizing the O(4) symmetry of the hydrogen atom in practical calculations is the dependence of the Fock stereographic projection on energy. This is due to the fact that the wavefunctions of the states with different energies are proportional to the hyperspherical harmonics (HSH) corresponding to different points on the hypersphere. Thus, the calculation of the matrix elements reduces to the problem of re-expanding HSH in terms of HSH depending on different points on the hypersphere. We solve this problem by applying the technique of multipole expansions for four-dimensional HSH. As a result, we obtain the multipole expansions whose coefficients are the matrix elements of the boost operator taken between hydrogen wavefunctions (i.e., hydrogen form factors). The explicit expressions for those coefficients are derived. It is shown that the hydrogen matrix elements can be presented as derivatives of an elementary function. Such an operator representation is convenient for the derivation of recurrence relations connecting matrix elements between states corresponding to different values of the quantum numbers n and l.
Eigenvector dynamics: General theory and some applications
NASA Astrophysics Data System (ADS)
Allez, Romain; Bouchaud, Jean-Philippe
2012-10-01
We propose a general framework to study the stability of the subspace spanned by P consecutive eigenvectors of a generic symmetric matrix H0 when a small perturbation is added. This problem is relevant in various contexts, including quantum dissipation (H0 is then the Hamiltonian) and financial risk control (in which case H0 is the assets' return covariance matrix). We argue that the problem can be formulated in terms of the singular values of an overlap matrix, which allows one to define an overlap distance. We specialize our results for the case of a Gaussian orthogonal H0, for which the full spectrum of singular values can be explicitly computed. We also consider the case when H0 is a covariance matrix and illustrate the usefulness of our results using financial data. The special case where the top eigenvalue is much larger than all the other ones can be investigated in full detail. In particular, the dynamics of the angle made by the top eigenvector and its true direction defines an interesting class of random processes.
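The overlap-matrix construction can be sketched directly; H0 here is a small Gaussian symmetric matrix and the perturbation size, P, and the particular overlap-distance formula are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N, P, eps = 100, 5, 0.05
H0 = rng.standard_normal((N, N)); H0 = (H0 + H0.T) / 2   # unperturbed symmetric matrix
dH = rng.standard_normal((N, N)); dH = (dH + dH.T) / 2   # symmetric perturbation
H1 = H0 + eps * dH

def top_subspace(H, P):
    w, V = np.linalg.eigh(H)       # eigenvalues ascending
    return V[:, -P:]               # top-P eigenvectors

V0, V1 = top_subspace(H0, P), top_subspace(H1, P)
overlap = V0.T @ V1                            # P x P overlap matrix
svals = np.linalg.svd(overlap, compute_uv=False)
# Singular values near 1 => the subspace barely moved under the perturbation;
# one natural overlap distance is sqrt(P - sum of squared singular values)
distance = np.sqrt(max(P - np.sum(svals**2), 0.0))
```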
Spatial operator factorization and inversion of the manipulator mass matrix
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz-Delgado, Kenneth
1992-01-01
This paper advances two linear operator factorizations of the manipulator mass matrix. Embedded in the factorizations are many of the techniques that are regarded as very efficient computational solutions to inverse and forward dynamics problems. The operator factorizations provide a high-level architectural understanding of the mass matrix and its inverse, which is not visible in the detailed algorithms. They also lead to a new approach to the development of computer programs or organize complexity in robot dynamics.
NASA Astrophysics Data System (ADS)
Daftardar-Gejji, Varsha; Jafari, Hossein
2005-01-01
The Adomian decomposition method has been employed to obtain solutions of a system of fractional differential equations. Convergence of the method is discussed with some illustrative examples. In particular, for a linear initial value problem in which A=[a_ij] is a real square matrix, the solution is expressed in terms of E_(α1,...,αn),1, the multivariate Mittag-Leffler function defined for matrix arguments, where A_i denotes the matrix whose ith row is [a_i1 ... a_in] with all other entries zero. Fractional oscillation and Bagley-Torvik equations are solved as illustrative examples.
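The scalar special case is easy to check numerically: for D^α y = a y with y(0) = y0, the solution is y(t) = y0 E_α(a t^α), where E_α is the one-parameter Mittag-Leffler function E_α(z) = Σ_k z^k / Γ(αk + 1). A truncated-series sketch (illustrative, adequate only for modest |z|):

```python
import math

def mittag_leffler(alpha, z, terms=50):
    """Truncated series for the one-parameter Mittag-Leffler function."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity checks against classical special cases:
# E_1(z) = exp(z), and E_2(z) = cosh(sqrt(z)) for z >= 0
print(mittag_leffler(1.0, 1.0))   # ≈ e
print(mittag_leffler(2.0, 1.0))   # ≈ cosh(1)
```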
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, in-exact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
NASA Astrophysics Data System (ADS)
Ahn, Sung Hee; Hyeon, Taeghwan; Kim, Myung Soo; Moon, Jeong Hee
2017-09-01
In matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF), matrix-derived ions are routinely deflected away to avoid problems with ion detection. This, however, limits the use of a quantification method that utilizes the analyte-to-matrix ion abundance ratio. In this work, we will show that it is possible to measure this ratio by a minor instrumental modification of a simple form of MALDI-TOF. This involves detector gain switching.
Addressing the computational cost of large EIT solutions.
Boyle, Alistair; Borsic, Andrea; Adler, Andy
2012-05-01
Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
Fuzzy Mathematical Models To Remove Poverty Of Gypsies In Tamilnadu
NASA Astrophysics Data System (ADS)
Chandrasekaran, A. D.; Ramkumar, C.; Siva, E. P.; Balaji, N.
2018-04-01
Society contains many poor communities; among the most vulnerable are gypsies, who move from place to place to survive because they have no permanent place to live. In this paper we interviewed 895 gypsies in Tamilnadu using a linguistic questionnaire. Because the problems they face in improving their lives involve so much feeling, uncertainty, and unpredictability, fuzzy theory in general, and the fuzzy matrix in particular, are fitting tools. The fuzzy matrix is well suited to unsupervised data and is powerful enough to identify the main development factors of the gypsies. This paper has three sections. Section one presents the method of application of the CEFD matrix. Section two describes the development factors of the gypsies. Section three applies these factors to the CEFD matrix and derives our conclusions. Keywords: RD matrix, AFD matrix, CEFD matrix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Luis; MartI, Jose M; Ibanez, Jose M
2010-05-01
We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state, including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analyses that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.
Example-Based Learning: Exploring the Use of Matrices and Problem Variability
ERIC Educational Resources Information Center
Hancock-Niemic, Mary A.; Lin, Lijia; Atkinson, Robert K.; Renkl, Alexander; Wittwer, Joerg
2016-01-01
The purpose of the study was to investigate the efficacy of using faded worked examples presented in matrices with problem structure variability to enhance learners' ability to recognize the underlying structure of the problems. Specifically, this study compared the effects of matrix-format versus linear-format faded worked examples combined with…
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
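A simplified version of the allocation problem, posed as bounded least squares rather than the report's semidefinite programming formulation with linear matrix inequalities; the actuator map, commanded wrench, and saturation limits below are made-up illustrative values, with scipy's `lsq_linear` standing in for the report's sub-algorithms:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)
n_act = 8                              # thrusters + reaction wheels (illustrative)
B = rng.standard_normal((6, n_act))    # total wrench = B @ u (3 forces + 3 torques)
w = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])   # commanded wrench (illustrative)

# Minimize ||B u - w|| subject to per-actuator lower/upper limits
res = lsq_linear(B, w, bounds=(-1.0, 1.0))
u = res.x                              # allocated actuator commands
err = np.linalg.norm(B @ u - w)        # error between commanded and achieved wrench
```

The bounds encode the actuator saturation constraints the report describes; a fuel-optimal trade-off would add a second objective, which is where the report's sequential two-problem formulation comes in.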
The Complexity of Scientific Literacy: The Development and Use of a Data Analysis Matrix
ERIC Educational Resources Information Center
Garthwaite, Kathryn; France, Bev; Ward, Gillian
2014-01-01
Data were gathered from 95 Year 10 students in a New Zealand secondary school to explore how the indicators of scientific literacy are expressed in student responses. These students completed an activity based around the two contexts of lighting and health. A matrix, which incorporated descriptive indicators, was developed to analyse the student…
Comparing implementations of penalized weighted least-squares sinogram restoration.
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-11-01
A CT scanner measures the energy that is deposited in each channel of a detector array by x-rays that have been partially absorbed on their way through the object. The measurement process is complex, and quantitative measurements are inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) a direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem.
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
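The two implementation strategies can be contrasted on a toy quadratic problem. The sketch below solves normal equations of the PWLS form (A^T W A + beta R) x = A^T W y once directly and once with plain conjugate gradient; the matrices A, W, and R are synthetic stand-ins, not an actual sinogram degradation model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the PWLS normal equations: minimize
# (y - A x)^T W (y - A x) + beta x^T R x, whose closed-form solution
# solves (A^T W A + beta R) x = A^T W y.
n = 40
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
W = np.diag(rng.uniform(0.5, 2.0, size=n))             # statistical weights
R = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # roughness penalty
beta = 0.1
y = rng.normal(size=n)

H = A.T @ W @ A + beta * R
b = A.T @ W @ y

# Strategy (1): direct solution of the closed-form normal equations.
x_direct = np.linalg.solve(H, b)

# Strategy (2): conjugate gradient on the same symmetric positive-definite system.
x = np.zeros(n)
r = b - H @ x
p = r.copy()
for _ in range(2 * n):
    Hp = H @ p
    alpha = (r @ r) / (p @ Hp)
    x = x + alpha * p
    r_new = r - alpha * Hp
    if np.linalg.norm(r_new) < 1e-12:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

print(np.linalg.norm(x - x_direct))
```

At realistic sinogram sizes the direct solve scales poorly, which is why the authors' preconditioned conjugate-gradient variant wins; this toy problem only shows that the two strategies agree on the solution.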
Recent results and persisting problems in modeling flow induced coalescence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortelný, I., E-mail: fortelny@imc.cas.cz; Jůza, J., E-mail: juza@imc.cas.cz
2014-05-15
The contribution summarizes recent results on the description of flow-induced coalescence in immiscible polymer blends and addresses problems that still call for solution. The theory of coalescence, based on switching between equations for matrix drainage between spherical or deformed droplets, provides good agreement with more complicated modeling and with available experimental data for the probability, P_c, that a collision of droplets will be followed by their fusion. A new equation for the description of matrix drainage between deformed droplets, applicable to the whole range of viscosity ratios, p, of the droplets and the matrix, is proposed. The theory makes it possible to consider the effect of matrix elasticity on coalescence. P_c decreases with the matrix relaxation time, but this decrease is not pronounced for relaxation times typical of most commercial polymers. Modeling of flow-induced coalescence in concentrated systems is needed to predict the dependence of the coalescence rate on the volume fraction of droplets. The effect of droplet anisometry on P_c should be studied for a better understanding of coalescence in flow fields with high and moderate deformation rates. A reliable description of coalescence in mixing and processing devices requires proper modeling of complex flow fields.
NASA Astrophysics Data System (ADS)
Longbiao, Li
2015-12-01
An analytical methodology has been developed to investigate the effect of fiber Poisson contraction on matrix multicracking evolution of fiber-reinforced ceramic-matrix composites (CMCs). The modified shear-lag model, incorporated with the Coulomb friction law, is adopted to solve the stress distribution in the interface slip region and intact region of the damaged composite. The critical matrix strain energy criterion, which presupposes the existence of an ultimate or critical strain energy limit beyond which the matrix fails, has been adopted to describe matrix multicracking of CMCs. As more energy is placed into the composite, the matrix fractures and interface debonding occurs to dissipate the extra energy. The interface debonded length under the process of matrix multicracking is obtained by treating the interface debonding as a particular crack propagation problem along the fiber/matrix interface. The effects of the interfacial frictional coefficient, fiber Poisson ratio, fiber volume fraction, interface debonded energy and cycle number on the interface debonding and matrix multicracking evolution have been analyzed. The theoretical results are compared with experimental data of unidirectional SiC/CAS, SiC/CAS-II and SiC/Borosilicate composites.
NASA Astrophysics Data System (ADS)
Chiang, Rong-Chang
Jacobi found that the rotation of a symmetrical heavy top about a fixed point is composed of the torque-free rotations of two triaxial bodies about their centers of mass. His discovery rests on the fact that the orthogonal matrix which represents the rotation of a symmetrical heavy top is decomposed into a product of two orthogonal matrices, each of which represents the torque-free rotations of two triaxial bodies. This theorem is generalized to Kirchhoff's case of the rotation and translation of a symmetrical solid in a fluid. This theorem requires the explicit computation, by means of theta functions, of the nine direction cosines between the rotating body axes and the fixed space axes. The addition theorem of theta functions makes it possible to decompose the rotational matrix into a product of similar matrices. This basic idea of utilizing the addition theorem is simple, but the carry-through of the computation is quite involved, and the full proof turns out to be a lengthy process of computing rather long and complex expressions. For the translational motion we give a new treatment. The position of the center of mass as a function of time is found by a direct evaluation of the elliptic integral by means of a new theta interpretation of Legendre's reduction formula of the elliptic integral. For the complete solution of the problem we have added further the study of the physical aspects of the motion. Based on a complete examination of all possible manifolds of the steady helical cases, it is possible to obtain a full qualitative description of the motion. Many numerical examples and graphs are given to illustrate the rotation and translation of the solid in a fluid.
Generalized probabilistic theories and conic extensions of polytopes
NASA Astrophysics Data System (ADS)
Fiorini, Samuel; Massar, Serge; Patra, Manas K.; Tiwary, Hans Raj
2015-01-01
Generalized probabilistic theories (GPT) provide a general framework that includes classical and quantum theories. It is described by a cone C and its dual C*. We show that whether some one-way communication complexity problems can be solved within a GPT is equivalent to the recently introduced cone factorization of the corresponding communication matrix M. We also prove an analogue of Holevo's theorem: when the cone C is contained in R^n, the classical capacity of the channel realized by sending GPT states and measuring them is bounded by log n. Polytopes and optimising functions over polytopes arise in many areas of discrete mathematics. A conic extension of a polytope is the intersection of a cone C with an affine subspace whose projection onto the original space yields the desired polytope. Extensions of polytopes can sometimes be much simpler geometric objects than the polytope itself. The existence of a conic extension of a polytope is equivalent to that of a cone factorization of the slack matrix of the polytope, on the same cone. We show that all 0/1 polytopes whose vertices can be recognized by a polynomial size circuit, which includes as a special case the travelling salesman polytope and many other polytopes from combinatorial optimization, have small conic extension complexity when the cone is the completely positive cone. Using recent exponential lower bounds on the linear extension complexity of polytopes, this provides an exponential gap between the communication complexity of GPT based on the completely positive cone and classical communication complexity, and a conjectured exponential gap with quantum communication complexity. Our work thus relates the communication complexity of generalizations of quantum theory to questions of mainstream interest in the area of combinatorial optimization.
NASA Technical Reports Server (NTRS)
White, B. S.; Castleman, K. R.
1981-01-01
An important step in the diagnosis of a cervical cytology specimen is estimating the proportions of the various cell types present. This is usually done with a cell classifier, the error rates of which can be expressed as a confusion matrix. We show how to use the confusion matrix to obtain an unbiased estimate of the desired proportions. We show that the mean square error of this estimate depends on a 'befuddlement matrix' derived from the confusion matrix, and how this, in turn, leads to a figure of merit for cell classifiers. Finally, we work out the two-class problem in detail and present examples to illustrate the theory.
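The correction described above can be sketched in a few lines: if the row-stochastic confusion matrix C holds the classifier's error rates, the expected observed output proportions satisfy q = C^T p, so inverting C^T debiases the estimate. The two-class error rates and true proportions below are hypothetical illustration values.

```python
import numpy as np

# Row-stochastic confusion matrix: C[i, j] = P(classified as j | true class i).
C = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p_true = np.array([0.7, 0.3])   # true cell-type proportions (hypothetical)

# Expected proportions reported by the classifier: q = C^T p (biased).
q = C.T @ p_true

# Unbiased estimate: invert the confusion relationship.
p_hat = np.linalg.solve(C.T, q)
print(p_hat)                    # recovers the true proportions [0.7, 0.3]
```

With finite samples q is observed with noise, and the variance amplification introduced by the inverse is what the paper's "befuddlement matrix" analysis quantifies.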
Project-line interaction: implementing projects in JPL's matrix
NASA Technical Reports Server (NTRS)
Baroff, Lynn E.
2006-01-01
Can programmatic and line organizations really work interdependently, to accomplish their work as a community? Does the matrix produce a culture in which individuals take personal responsibility for both immediate mission success and long-term growth? What is the secret to making a matrix enterprise actually work? This paper will consider those questions, and propose that developing an effective project-line partnership demands primary attention to personal interactions among people. Many potential problems can be addressed by careful definition of roles, responsibilities, and work processes for both parts of the matrix -- and by deliberate and clear communication between project and line organizations and individuals.
Coulomb matrix elements in multi-orbital Hubbard models.
Bünemann, Jörg; Gebhard, Florian
2017-04-26
Coulomb matrix elements are needed in all studies in solid-state theory that are based on Hubbard-type multi-orbital models. Due to symmetries, the matrix elements are not independent. We determine a set of independent Coulomb parameters for a d-shell and an f-shell and all point groups with up to 16 elements (O_h, O, T_d, T_h, D_6h, and D_4h). Furthermore, we express all other matrix elements as a function of the independent Coulomb parameters. Apart from the solution of the general point-group problem we investigate in detail the spherical approximation and first-order corrections to the spherical approximation.
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Matcher, Stephen J.
2013-03-01
We report on a new calibration technique that permits the accurate extraction of the sample Jones matrix, and hence the fast-axis orientation, by using fiber-based polarization-sensitive optical coherence tomography (PS-OCT) that is completely based on non-polarization-maintaining fiber such as SMF-28. In this technique, two quarter waveplates are used to completely specify the parameters of the system fibers in the sample arm so that the Jones matrix of the sample can be determined directly. The device was validated on measurements of a quarter waveplate and an equine tendon sample by a single-mode fiber-based swept-source PS-OCT system.
NASA Astrophysics Data System (ADS)
Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy
2017-11-01
A parametric analysis of the hyperfine structure (hfs) for the even parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4fN-core states in our high-performance computing (HPC) calculations. For calculations of the huge hyperfine structure matrix, requiring approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VM). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.
The g Factors of Ground State of Ruby and Their Pressure-Induced Shifts
NASA Astrophysics Data System (ADS)
Ma, Dongping; Zhang, Hongmei; Chen, Jurong; Liu, Yanyun
1998-12-01
By using the theory of pressure-induced shifts and the eigenfunctions at normal and various pressures obtained from the diagonalization of the complete d3 energy matrix adopting C3v symmetry, g factors of the ground state of ruby and their pressure-induced shifts have been calculated. The results are in very good agreement with the experimental data. For the precise calculation of properties of the ground state, it is necessary to take into account the effects of all the excited states by the diagonalization of the complete energy matrix. This project (Grant No. 19744001) was supported by the National Natural Science Foundation of China.
Cardaropoli, Daniele; Tamagnone, Lorenzo; Roffredo, Alessandro; Gaveglio, Lorena
2014-01-01
Multiple adjacent recession defects were treated in 32 patients using a coronally advanced flap (CAF) with or without a collagen matrix (CM). The percentage of root coverage was 81.49% ± 23.45% (58% complete root coverage) for CAF sites (control) and 93.25% ± 10.01% root coverage (72% complete root coverage) for CM plus CAF sites (test). The results achieved in the test group were significantly greater than in the control group, indicating that CM plus CAF is a suitable option for the treatment of multiple adjacent gingival recessions.
SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Z; Shi, F; Jia, X
Purpose: GPU has been employed to speed up VMAT optimizations from hours to minutes. However, its limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue to make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred as pricing problem, PP) and optimizing aperture intensities (referred as master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of beam angles and is only responsible for calculation related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries for optimization; (S2) transferring the DDC matrix part by part to GPU during optimizations whenever needed; (S3) moving DDC-matrix-related calculation onto CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as in our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively.
Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.
Analysis of harmonic spline gravity models for Venus and Mars
NASA Technical Reports Server (NTRS)
Bowin, Carl
1986-01-01
Methodology utilizing harmonic splines for determining the true gravity field from Line-Of-Sight (LOS) acceleration data from planetary spacecraft missions was tested. As is well known, the LOS data incorporate errors in the zero reference level that appear to be inherent in the processing procedure used to obtain the LOS vectors. The proposed method offers a solution to this problem. The harmonic spline program was converted from the VAX 11/780 to the Ridge 32C computer. The problem with the matrix inversion routine that improved inversion of the data matrices used in the Optimum Estimate program for global Earth studies was solved. The problem of obtaining a successful matrix inversion for a single rev supplemented by data for the two adjacent revs still remains.
ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations
NASA Astrophysics Data System (ADS)
Merkel, M.; Niyonzima, I.; Schöps, S.
2017-12-01
Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
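The propagation of the homogeneous part by the matrix exponential can be sketched on a toy system. The 2-by-2 matrix below (an undamped oscillator in first-order form) and the interval split are hypothetical; the point is only that propagating subinterval by subinterval reproduces the global exponential, which is what lets ParaExp treat subintervals independently.

```python
import numpy as np
from scipy.linalg import expm

# Toy linear hyperbolic system u' = A u.
A = np.array([[0.0, 1.0],
              [-4.0, 0.0]])
u0 = np.array([1.0, 0.0])
T, n_sub = 1.0, 4
dt = T / n_sub

# Homogeneous part of ParaExp: on each subinterval the solution is
# propagated by the matrix exponential applied to the incoming state.
u = u0.copy()
for _ in range(n_sub):
    u = expm(A * dt) @ u

# Subinterval-by-subinterval propagation matches one global exponential.
u_direct = expm(A * T) @ u0
print(u, u_direct)
```

In the paper the efficiency comes from fast approximations of the action of the matrix exponential on a vector rather than from forming `expm` densely as done here.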
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
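The stability measure and its parameter sensitivity can be checked numerically on a toy loop. The loop-gain matrix L0 and the nominal gain k below are hypothetical; the paper derives analytic singular-value gradient equations, whereas this sketch uses a central finite difference.

```python
import numpy as np

# Toy loop-gain matrix; the return difference matrix is I + k * L0.
L0 = np.array([[2.0, 0.5],
               [0.1, 1.0]])

def min_sv(k):
    # Smallest singular value of the return difference matrix,
    # used as the relative-stability measure.
    return np.linalg.svd(np.eye(2) + k * L0, compute_uv=False)[-1]

# Sensitivity of the stability measure to the gain parameter k,
# approximated by a central finite difference.
k, h = 1.0, 1e-6
grad = (min_sv(k + h) - min_sv(k - h)) / (2 * h)
print(min_sv(k), grad)
```

The analytic gradient equations in the paper avoid this differencing and extend to parameters that do not appear explicitly as control-system matrix elements.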
van der Burg, Max Post; Tyre, Andrew J
2011-01-01
Wildlife managers often make decisions under considerable uncertainty. In the most extreme case, a complete lack of data leads to uncertainty that is unquantifiable. Information-gap decision theory deals with assessing management decisions under extreme uncertainty, but it is not widely used in wildlife management. So too, robust population management methods were developed to deal with uncertainties in multiple-model parameters. However, the two methods have not, as yet, been used in tandem to assess population management decisions. We provide a novel combination of the robust population management approach for matrix models with the information-gap decision theory framework for making conservation decisions under extreme uncertainty. We applied our model to the problem of nest survival management in an endangered bird species, the Mountain Plover (Charadrius montanus). Our results showed that matrix sensitivities suggest that nest management is unlikely to have a strong effect on population growth rate, confirming previous analyses. However, given the amount of uncertainty about adult and juvenile survival, our analysis suggested that maximizing nest marking effort was a more robust decision to maintain a stable population. Focusing on the twin concepts of opportunity and robustness in an information-gap model provides a useful method of assessing conservation decisions under extreme uncertainty.
FPGA design for constrained energy minimization
NASA Astrophysics Data System (ADS)
Wang, Jianwei; Chang, Chein-I.; Cao, Mang
2004-02-01
The Constrained Energy Minimization (CEM) has been widely used for hyperspectral detection and classification. The feasibility of implementing the CEM as a real-time processing algorithm in systolic arrays has also been demonstrated. The main challenge of realizing the CEM in hardware architecture is the computation of the inverse of the data correlation matrix performed in the CEM, which requires a complete set of data samples. In order to cope with this problem, the data correlation matrix must be calculated in a causal manner which only needs data samples up to the sample at the time it is processed. This paper presents a Field Programmable Gate Arrays (FPGA) design of such a causal CEM. The main feature of the proposed FPGA design is to use the Coordinate Rotation DIgital Computer (CORDIC) algorithm that can convert a Givens rotation of a vector to a set of shift-add operations. As a result, the CORDIC algorithm can be easily implemented in hardware architecture, therefore in FPGA. Since the computation of the inverse of the data correlation matrix involves a series of Givens rotations, the utility of the CORDIC algorithm allows the causal CEM to perform real-time processing in FPGA. In this paper, an FPGA implementation of the causal CEM will be studied and its detailed architecture will be also described.
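The causal correlation update and the CEM filter itself can be sketched in software. This is a minimal sketch, not the FPGA/CORDIC realization: the target signature d and the synthetic pixel data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.array([1.0, 0.5, -0.2])              # target signature (hypothetical)
X = rng.normal(size=(100, 3)) + 0.1 * d     # pixel vectors, one per row

# Causal update of the sample correlation matrix: after sample k only
# data up to k have been used, as real-time processing requires.
R = np.zeros((3, 3))
for k, x in enumerate(X, start=1):
    R = ((k - 1) / k) * R + np.outer(x, x) / k

# CEM filter w = R^{-1} d / (d^T R^{-1} d): unit response to the target
# signature while minimizing output energy from the background.
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d @ Rinv_d)
print(w @ d)                                # constrained to 1 by construction
```

The FPGA design replaces the explicit `solve` with Givens rotations mapped to CORDIC shift-add operations, so the inverse is never formed directly.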
NASA Astrophysics Data System (ADS)
Badia, Santiago; Martín, Alberto F.; Planas, Ramon
2014-10-01
The thermally coupled incompressible inductionless magnetohydrodynamics (MHD) problem models the flow of an electrically charged fluid under the influence of an external electromagnetic field with thermal coupling. This system of partial differential equations is strongly coupled and highly nonlinear for real cases of interest. Therefore, fully implicit time integration schemes are very desirable in order to capture the different physical scales of the problem at hand. However, solving the multiphysics linear systems of equations resulting from such algorithms is a very challenging task which requires efficient and scalable preconditioners. In this work, a new family of recursive block LU preconditioners is designed and tested for solving the thermally coupled inductionless MHD equations. These preconditioners are obtained after splitting the fully coupled matrix into one-physics problems for every variable (velocity, pressure, current density, electric potential and temperature) that can be optimally solved, e.g., using preconditioned domain decomposition algorithms. The main idea is to arrange the original matrix into an (arbitrary) 2 × 2 block matrix, and consider an LU preconditioner obtained by approximating the corresponding Schur complement. For every one of the diagonal blocks in the LU preconditioner, if it involves more than one type of unknowns, we proceed the same way in a recursive fashion. This approach is stated in an abstract way, and can be straightforwardly applied to other multiphysics problems. Further, we precisely explain a flexible and general software design for the code implementation of this type of preconditioners.
A divide and conquer approach to the nonsymmetric eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1991-01-01
Serial computation combined with high communication costs on distributed-memory multiprocessors makes parallel implementations of the QR method for the nonsymmetric eigenvalue problem inefficient. This paper introduces an alternative algorithm for the nonsymmetric tridiagonal eigenvalue problem based on rank-two tearing and updating of the matrix. The parallelism of this divide and conquer approach stems from independent solution of the updating problems. 11 refs.
Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.
Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong
2017-05-18
In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in implementation, we deduce an intrinsic steepest descent method, which assures that the metric matrix is strictly symmetric positive-definite at each iteration, by exploiting the manifold structure of the symmetric positive-definite matrix manifold. Finally, we test the proposed algorithm on conventional data sets, and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.
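A descent step that stays on the symmetric positive-definite (SPD) manifold can be sketched as follows. The objective below, f(X) = ||X - A||_F^2 with gradient 2(X - A), is a stand-in for illustration, not the paper's metric-learning cost; the geodesic update is the same idea: every iterate remains strictly SPD.

```python
import numpy as np
from scipy.linalg import expm

def spd_sqrt(X):
    # Symmetric square root via eigendecomposition (X assumed SPD).
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(w)) @ V.T

# Hypothetical SPD target and stand-in objective f(X) = ||X - A||_F^2.
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])
X = np.eye(2)
t = 0.1
for _ in range(200):
    G = 2.0 * (X - A)          # Euclidean gradient (symmetric)
    S = spd_sqrt(X)
    Sinv = np.linalg.inv(S)
    # Geodesic step: X <- X^(1/2) expm(-t X^(-1/2) G X^(-1/2)) X^(1/2),
    # which is strictly positive-definite for any step size.
    X = S @ expm(-t * Sinv @ G @ Sinv) @ S

print(X)                       # converges to the minimizer A
```

Unlike a plain Euclidean step X - t G, which can leave the SPD cone, the exponential-map update enforces positive-definiteness by construction.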
Flutter analysis using transversality theory
NASA Technical Reports Server (NTRS)
Afolabi, D.
1993-01-01
A new method of calculating flutter boundaries of undamped aeronautical structures is presented. The method is an application of the weak transversality theorem used in catastrophe theory. In the first instance, the flutter problem is cast in matrix form using a frequency domain method, leading to an eigenvalue matrix. The characteristic polynomial resulting from this matrix usually has a smooth dependence on the system's parameters. As these parameters change with operating conditions, certain critical values are reached at which flutter sets in. Our approach is to use the transversality theorem in locating such flutter boundaries using this criterion: at a flutter boundary, the characteristic polynomial does not intersect the axis of the abscissa transversally. Formulas for computing the flutter boundaries and flutter frequencies of structures with two degrees of freedom are presented, and extension to multi-degree of freedom systems is indicated. The formulas have obvious applications in, for instance, problems of panel flutter at supersonic Mach numbers.
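The transversality criterion can be illustrated numerically: at the flutter boundary two real eigenvalues of the eigenvalue matrix coalesce (the characteristic polynomial touches the axis tangentially instead of crossing it) and then move into the complex plane. The 2-DOF matrix and its parameter mu below are hypothetical toy values, not an aeroelastic model.

```python
import numpy as np

# Toy 2-DOF eigenvalue matrix with flutter parameter mu (e.g. a
# nondimensional dynamic pressure). Its characteristic polynomial
# lambda^2 - 3 lambda + (2 + mu^2) has discriminant 1 - 4 mu^2,
# so the eigenvalues coalesce at mu = 0.5.
def imag_growth(mu):
    K = np.array([[1.0, mu],
                  [-mu, 2.0]])
    return np.max(np.abs(np.linalg.eigvals(K).imag))

# Bisect for the parameter value where transversality is lost
# (eigenvalues turn complex).
lo, hi = 0.0, 1.0              # real eigenvalues at lo, complex at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if imag_growth(mid) > 1e-12:
        hi = mid
    else:
        lo = mid
mu_flutter = 0.5 * (lo + hi)
print(mu_flutter)
```

The paper's closed-form boundary formulas replace this numerical search for two-degree-of-freedom systems.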
Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach
NASA Astrophysics Data System (ADS)
Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun
2015-02-01
The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and involves the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
Complex Langevin simulation of a random matrix model at nonzero chemical potential
NASA Astrophysics Data System (ADS)
Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.
2018-03-01
In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
Luneburg lens and optical matrix algebra research
NASA Technical Reports Server (NTRS)
Wood, V. E.; Busch, J. R.; Verber, C. M.; Caulfield, H. J.
1984-01-01
Planar, as opposed to channelized, integrated optical circuits (IOCs) were stressed as the basis for computational devices. Both fully-parallel and systolic architectures are considered and the tradeoffs between the two device types are discussed. The Kalman filter approach is a most important computational method for many NASA problems. This approach to deriving a best-fit estimate for the state vector describing a large system leads to matrix sizes which are beyond the predicted capacities of planar IOCs. This problem is overcome by matrix partitioning, and several architectures for accomplishing this are described. The Luneburg lens work has involved development of lens design techniques, design of mask arrangements for producing lenses of desired shape, investigation of optical and chemical properties of arsenic trisulfide films, deposition of lenses both by thermal evaporation and by RF sputtering, optical testing of these lenses, modification of lens properties through ultraviolet irradiation, and comparison of measured lens properties with those expected from ray trace analyses.
A revised version of the transfer matrix method to analyze one-dimensional structures
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1983-01-01
A new and general method to analyze both free and forced vibration characteristics of one-dimensional structures is discussed in this paper. This scheme links for the first time the classical transfer matrix method with the recently developed integrating matrix technique to integrate systems of differential equations. Two alternative approaches to the problem are presented. The first is based upon the lumped parameter model to account for the inertia properties of the structure. The second releases that constraint allowing a more precise description of the physical system. The free vibration of a straight uniform beam under different support conditions is analyzed to test the accuracy of the two models. Finally some results for the free vibration of a 12th order system representing a curved, rotating beam prove that the present method is conveniently extended to more complicated structural dynamics problems.
Du, Tianchuan; Liao, Li; Wu, Cathy H
2016-12-01
Identifying the residues in a protein that are involved in protein-protein interaction and identifying the contact matrix for a pair of interacting proteins are two computational tasks at different levels of an in-depth analysis of protein-protein interaction. Various methods for solving these two problems have been reported in the literature. However, the interacting residue prediction and contact matrix prediction were handled by and large independently in those existing methods, though intuitively good prediction of interacting residues will help with predicting the contact matrix. In this work, we developed a novel protein interacting residue prediction system, contact matrix-interaction profile hidden Markov model (CM-ipHMM), with the integration of contact matrix prediction and the ipHMM interaction residue prediction. We propose to leverage what is learned from the contact matrix prediction and utilize the predicted contact matrix as "feedback" to enhance the interaction residue prediction. The CM-ipHMM model showed significant improvement over the previous method that uses the ipHMM for predicting interaction residues only. It indicates that the downstream contact matrix prediction could help the interaction site prediction.
Jia, Hongjun; Martinez, Aleix M
2009-05-01
The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally be different from that captured by a different set. Ideally, we would like to find that solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.
Improved performance in NASTRAN (R)
NASA Technical Reports Server (NTRS)
Chan, Gordon C.
1989-01-01
Three areas of improvement were incorporated in the 1989 release of COSMIC/NASTRAN that make the analysis program run faster on large problems. Actual log files and timings on a few test samples run on IBM, CDC, VAX, and CRAY computers were compiled. The speed improvement is proportional to the problem size and the number of continuation cards. Vectorizing certain operations in BANDIT makes it run twice as fast in some large problems using structural elements with many node points. BANDIT is a built-in NASTRAN processor that optimizes the structural matrix bandwidth. The VAX matrix packing routine BLDPK was modified so that it now packs a column of a matrix 3 to 9 times faster. The denser and bigger the matrix, the greater the speed improvement. This change makes a host of routines and modules that involve matrix operations run significantly faster, and saves disc space for dense matrices. A UNIX version, converted from 1988 COSMIC/NASTRAN, was tested successfully on a Silicon Graphics computer using the UNIX V Operating System, with Berkeley 4.3 Extensions. The Utility Modules INPUTT5 and OUTPUT5 were expanded to handle table data as well as matrices. Both INPUTT5 and OUTPUT5 are general input/output modules that read and write FORTRAN files with or without format. More informative user messages are echoed from the PARAMR, PARAMD, and SCALAR modules to ensure proper data values and data types are handled. Two new Utility Modules, GINOFILE and DATABASE, were written for the 1989 release. Seven rigid elements were added to COSMIC/NASTRAN: CRROD, CRBAR, CRTRPLT, CRBE1, CRBE2, CRBE3, and CRSPLINE.
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Nagarajaiah, Satish
2016-06-01
Randomly missing data in time histories of structural vibration responses often occur in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. In addition, to alleviate the wireless sampling or communication burden, certain amounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is, however, an ill-posed inverse problem. This paper explicitly harnesses the structure of the structural vibration response data itself to address this inverse problem. What makes this possible is an empirical, but often practically true, observation: typically only a few modes are active in the structural vibration responses; hence the single-channel data vector has a sparse representation (in the frequency domain), and the multi-channel data matrix has a low-rank structure (by singular value decomposition). Exploiting such prior knowledge of the data structure (intra-channel sparsity or inter-channel low rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on several structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
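The low-rank route described in this abstract can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it uses iterative hard-impute (rank-truncated SVD imputation, a common stand-in for nuclear-norm matrix completion) on synthetic two-mode multi-channel data, with all sizes and missing rates chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-channel "vibration" data with only two active modes,
# so the 200 x 8 data matrix has rank 2.
n_samples, n_channels = 200, 8
t = np.linspace(0.0, 1.0, n_samples)
modes = np.column_stack([np.sin(2*np.pi*3*t), np.cos(2*np.pi*7*t)])
X = modes @ rng.standard_normal((2, n_channels))

# Randomly discard ~30% of the entries.
observed = rng.random(X.shape) > 0.3

def complete(X_obs, mask, rank, n_iter=500):
    """Iterative hard-impute: alternate a rank-r SVD truncation with
    re-imposing the observed entries (a simple surrogate for
    nuclear-norm-minimization matrix completion)."""
    Y = np.where(mask, X_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s[rank:] = 0.0                  # keep only the leading modes
        Y = (U * s) @ Vt
        Y[mask] = X_obs[mask]           # observed entries stay fixed
    return Y

X_hat = complete(X, observed, rank=2)
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

The single-channel ℓ1 alternative mentioned in the abstract would instead impose sparsity on the Fourier coefficients of one response record rather than low rank across channels.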
Environmental influences on neural systems of relational complexity
Kalbfleisch, M. Layne; deBettencourt, Megan T.; Kopperman, Rebecca; Banasiak, Meredith; Roberts, Joshua M.; Halavi, Maryam
2013-01-01
Constructivist learning theory contends that we construct knowledge by experience and that environmental context influences learning. To explore this principle, we examined the cognitive process relational complexity (RC), defined as the number of visual dimensions considered during problem solving on a matrix reasoning task and a well-documented measure of mature reasoning capacity. We sought to determine how the visual environment influences RC by examining the influence of color and visual contrast on RC in a neuroimaging task. To specify the contributions of sensory demand and relational integration to reasoning, our participants performed a non-verbal matrix task comprised of color, no-color line, or black-white visual contrast conditions parametrically varied by complexity (relations 0, 1, 2). The use of matrix reasoning is ecologically valid for its psychometric relevance and for its potential to link the processing of psychophysically specific visual properties with various levels of RC during reasoning. The role of these elements is important because matrix tests assess intellectual aptitude based on these seemingly context-less exercises. This experiment is a first step toward examining the psychophysical underpinnings of performance on these types of problems. The importance of this is increased in light of recent evidence that intelligence can be linked to visual discrimination. We submit three main findings. First, color and black-white visual contrast (BWVC) add demand at a basic sensory level, but contributions from color and from BWVC are dissociable in cortex such that color engages a “reasoning heuristic” and BWVC engages a “sensory heuristic.” Second, color supports contextual sense-making by boosting salience resulting in faster problem solving. Lastly, when visual complexity reaches 2-relations, color and visual contrast relinquish salience to other dimensions of problem solving. PMID:24133465
Solution of the determinantal assignment problem using the Grassmann matrices
NASA Astrophysics Data System (ADS)
Karcanias, Nicos; Leventides, John
2016-02-01
The paper provides a direct solution to the determinantal assignment problem (DAP), which unifies all frequency assignment problems of linear control theory. The current approach is based on the solvability of the exterior equation ? where ? is an n-dimensional vector space over ?, which is an integral part of the solution of DAP. New criteria for the existence of solutions, and for their computation, are based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix ? of the vector ?, which is constructed from the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable) if and only if ?, where ?; the solution space for a decomposable ? is the space ?. This provides an alternative linear-algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for developing a new computational method for solutions of the exact DAP (when such solutions exist), as well as for computing approximate solutions when exact solutions do not exist.
Xia, Huijun; Yang, Kunde; Ma, Yuanliang; Wang, Yong; Liu, Yaxiong
2017-01-01
Generally, many beamforming methods are derived under the assumption of white noise. In practice, the actual underwater ambient noise is complex. As a result, the noise removal capacity of the beamforming method may be deteriorated considerably. Furthermore, in underwater environment with extremely low signal-to-noise ratio (SNR), the performances of the beamforming method may be deteriorated. To tackle these problems, a noise removal method for uniform circular array (UCA) is proposed to remove the received noise and improve the SNR in complex noise environments with low SNR. First, the symmetrical noise sources are defined and the spatial correlation of the symmetrical noise sources is calculated. Then, based on the preceding results, the noise covariance matrix is decomposed into symmetrical and asymmetrical components. Analysis indicates that the symmetrical component only affect the real part of the noise covariance matrix. Consequently, the delay-and-sum (DAS) beamforming is performed by using the imaginary part of the covariance matrix to remove the symmetrical component. However, the noise removal method causes two problems. First, the proposed method produces a false target. Second, the proposed method would seriously suppress the signal when it is located in some directions. To solve the first problem, two methods to reconstruct the signal covariance matrix are presented: based on the estimation of signal variance and based on the constrained optimization algorithm. To solve the second problem, we can design the array configuration and select the suitable working frequency. Theoretical analysis and experimental results are included to demonstrate that the proposed methods are particularly effective in complex noise environments with low SNR. The proposed method can be extended to any array. PMID:28598386
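The central claim (noise with a purely real covariance vanishes from a beamformer run on the imaginary part of the covariance matrix) admits a toy numerical check. The sketch below is our own construction; the array geometry, frequency, and noise model are invented for illustration and do not reproduce the paper's experiments.

```python
import numpy as np

c, f = 1500.0, 1000.0                  # sound speed (m/s), frequency (Hz)
lam = c / f
N, radius = 16, lam                    # 16-element UCA of radius one wavelength
phi = 2*np.pi*np.arange(N)/N           # element angles

def steer(theta):
    """Plane-wave steering vector for the UCA in the horizontal plane."""
    return np.exp(1j*2*np.pi*radius/lam*np.cos(theta - phi))

theta0 = 0.7                           # true source bearing (rad)
a0 = steer(theta0)
# unit-power signal plus white noise; white noise has a purely real covariance
R = np.outer(a0, a0.conj()) + 10.0*np.eye(N)

grid = np.linspace(0.0, 2*np.pi, 360, endpoint=False)
def das(Rm):
    """Conventional DAS spatial spectrum over the bearing grid."""
    return np.array([np.real(steer(t).conj() @ Rm @ steer(t)) for t in grid])

p_full = das(R)                        # ordinary DAS: noise raises the floor
p_imag = das(1j*np.imag(R))            # DAS on the imaginary part only
```

Note that 1j*Im(R) is Hermitian, so p_imag is real; the source peak survives while the real-valued noise covariance drops out, and a mirrored artifact of opposite sign appears near theta0 + pi, echoing the false-target caveat in the abstract.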
Evidence for Enhanced Matrix Diffusion in Geological Environment
NASA Astrophysics Data System (ADS)
Sato, Kiminori; Fujimoto, Koichiro; Nakata, Masataka; Shikazono, Naotatsu
2013-01-01
Molecular diffusion in the rock matrix, called matrix diffusion, has been regarded as a static process for elemental migration in the geological environment and has been acknowledged in the context of geological disposal of radioactive waste. However, unexplained enhancement of matrix diffusion has been reported at a number of field test sites. Here, the matrix diffusion of saline water at Horonobe, Hokkaido, Japan is highlighted by directly probing angstrom-scale pores on a field scale of up to 1 km using positron-positronium annihilation spectroscopy. This first application of positron-positronium annihilation spectroscopy to field-scale geophysical research accurately reveals the slight variation of angstrom-scale pores influenced by saline water diffusion. We found widely interconnected 3 Å pores, which offer a pathway for saline water diffusion with a highly enhanced effective matrix diffusion coefficient of 4 × 10^-6 cm^2 s^-1. The present findings provide unambiguous evidence that angstrom-scale pores enhance effective matrix diffusion on a field scale in the geological environment.
Novel formulations of CKM matrix renormalization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kniehl, Bernd A.; Sirlin, Alberto
2009-12-17
We review two recently proposed on-shell schemes for the renormalization of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix in the Standard Model. One first constructs gauge-independent mass counterterm matrices for the up- and down-type quarks complying with the hermiticity of the complete mass matrices. Diagonalization of the latter then leads to explicit expressions for the CKM counterterm matrix, which are gauge independent, preserve unitarity, and lead to renormalized amplitudes that are non-singular in the limit in which any two quarks become mass degenerate. One of the schemes also automatically satisfies flavor democracy.
Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, G. Patrick; Browne, Jolyon
The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.
Solute Migration from the Aquifer Matrix into a Solution Conduit and the Reverse.
Li, Guangquan; Field, Malcolm S
2016-09-01
A solution conduit has a permeable wall allowing water exchange and solute transfer between the conduit and its surrounding aquifer matrix. In this paper, we use the Laplace transform to solve a one-dimensional equation, constructed using the Euler approach, that describes advective transport of solute in a conduit: a production-value problem. Both a nonuniform cross-section of the conduit and nonuniform seepage at the conduit wall are considered in the solution. Physical analysis using the Lagrangian approach and a lumping method is performed to verify the solution. Two-way transfer between conduit water and matrix water is also investigated by using the solution for the production-value problem as a first-order approximation. The approximate solution agrees well with the exact solution if the dimensionless travel time in the conduit is an order of magnitude smaller than unity. Our analytical solution is based on the assumption that spatial and/or temporal heterogeneity in the wall solute flux is the dominant factor in the spreading of spring-breakthrough curves, and that conduit dispersion is only a secondary mechanism. Such an approach can lead to a better understanding of water exchange and solute transfer between conduits and the aquifer matrix. Euler and Lagrangian approaches are used to solve transport in a conduit. Two-way transfer between the conduit and the matrix is investigated. The solution is applicable to transport in a conduit of persistent solute derived from the matrix. © 2016, National Ground Water Association.
Relational Processing Following Stroke
ERIC Educational Resources Information Center
Andrews, Glenda; Halford, Graeme S.; Shum, David; Maujean, Annick; Chappell, Mark; Birney, Damian
2013-01-01
The research examined relational processing following stroke. Stroke patients (14 with frontal, 30 with non-frontal lesions) and 41 matched controls completed four relational processing tasks: sentence comprehension, Latin square matrix completion, modified Dimensional Change Card Sorting, and n-back. Each task included items at two or three…
On the Complexity of Delaying an Adversary’s Project
2005-01-01
…interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete, or NP-hard. We…
A Transfer Learning Approach for Applying Matrix Factorization to Small ITS Datasets
ERIC Educational Resources Information Center
Voß, Lydia; Schatten, Carlotta; Mazziotti, Claudia; Schmidt-Thieme, Lars
2015-01-01
Machine Learning methods for Performance Prediction in Intelligent Tutoring Systems (ITS) have proven their efficacy; specific methods, e.g. Matrix Factorization (MF), however suffer from the lack of available information about new tasks or new students. In this paper we show how this problem could be solved by applying Transfer Learning (TL),…
A Theoretical Investigation into the Inelastic Behavior of Metal-Matrix Composites
1990-06-01
Part 13. Abstract (continued): …for the constraining power of the matrix due to eigenstrain accumulation and anisotropy due to fiber reinforcement. [Table-of-contents fragment: Chapter II, ELAS Method with Elastic Constraint; 2.1 Eigenstrain Terminology; 2.2 Fundamental Equations of Elasticity with Eigenstrains; 2.3 Eshelby's Equivalent Inclusion Problem.]
A Continuous Square Root Information Filter-Smoother with Discrete Data Update
NASA Technical Reports Server (NTRS)
Miller, J. K.
1994-01-01
A differential equation for the square root information matrix is derived and adapted to the problems of filtering and smoothing. The resulting continuous square root information filter (SRIF) performs the mapping of state and process noise by numerical integration of the SRIF matrix and admits data via a discrete least square update.
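The discrete data update used by a square root information filter can be sketched in a few lines: stack the prior square-root information array on top of the (pre-whitened) measurement rows and re-triangularize with a QR factorization. The following is a generic textbook SRIF measurement update, not the paper's continuous formulation; the state size, prior, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
x_true = np.array([1.0, -2.0, 0.5])

# Prior in square-root information form: R0^T R0 = P0^{-1}, z0 = R0 @ x_prior
R0 = np.eye(n)                      # weak unit-information prior
z0 = R0 @ np.zeros(n)

def srif_update(R, z, H, y):
    """Discrete SRIF measurement update (measurements assumed pre-whitened):
    QR-factorize the stacked array [[R, z], [H, y]] and read off the updated
    triangular factor and right-hand side."""
    A = np.vstack([np.column_stack([R, z]),
                   np.column_stack([H, y])])
    _, r = np.linalg.qr(A)
    return r[:R.shape[0], :-1], r[:R.shape[0], -1]

H = rng.standard_normal((20, n))            # 20 scalar measurements
y = H @ x_true + 0.01*rng.standard_normal(20)

R1, z1 = srif_update(R0, z0, H, y)
x_hat = np.linalg.solve(R1, z1)             # state estimate from R1 x = z1
```

The QR step is numerically the whole story: it realizes the information-matrix update R1^T R1 = R0^T R0 + H^T H without ever forming the (possibly ill-conditioned) information matrix explicitly.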
Polarimeter based on video matrix
NASA Astrophysics Data System (ADS)
Pavlov, Andrey; Kontantinov, Oleg; Shmirko, Konstantin; Zubko, Evgenij
2017-11-01
In this paper we present a new measurement tool: a polarimeter based on a video matrix. Polarimetric measurements are useful, for example, when monitoring water-area pollution and atmospheric constituents. The new device is small enough to mount on unmanned aircraft vehicles (quadrocopters) and stationary platforms. The device and its accompanying software turn it into a real-time monitoring system that helps solve a range of research problems.
New ASTM Standards for Nondestructive Testing of Aerospace Composites
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Saulsberry, Regor L.
2010-01-01
Problem: Lack of consensus standards containing procedural detail for NDE of polymer matrix composite materials: I. Flat panel composites. II. Composite components with more complex geometries a) Pressure vessels: 1) composite overwrapped pressure vessels (COPVs). 2) composite pressure vessels (CPVs). III. Sandwich core constructions. Metal and brittle matrix composites are a possible subject of future effort.
NASA Astrophysics Data System (ADS)
Geng, Xianguo; Liu, Huan
2018-04-01
The Riemann-Hilbert problem for the coupled nonlinear Schrödinger equation is formulated on the basis of the corresponding 3 × 3 matrix spectral problem. Using the nonlinear steepest descent method, we obtain leading-order asymptotics for the Cauchy problem of the coupled nonlinear Schrödinger equation.
A Basic Test Theory Generalizable to Tailored Testing. Technical Report No. 1.
ERIC Educational Resources Information Center
Cliff, Norman
Measures of consistency and completeness of order relations derived from test-type data are proposed. The measures are generalized to apply to incomplete data such as tailored testing. The measures are based on consideration of the items-plus-persons by items-plus-persons matrix as an adjacency matrix in which a 1 means that the row element…
Guidance for ePortfolio Researchers: A Case Study with Implications for the ePortfolio Domain
ERIC Educational Resources Information Center
Kennelly, Emily; Osborn, Debra; Reardon, Robert; Shetty, Becka
2016-01-01
This study examined whether or not students using a career ePortfolio, including a matrix for identifying and reflecting on transferrable skills, enabled them to rate their skills more confidently and positively after a simulated (mock) job interview. Three groups were studied: those completing the skills matrix in the ePortfolio; those using the…
Building Generalized Inverses of Matrices Using Only Row and Column Operations
ERIC Educational Resources Information Center
Stuart, Jeffrey
2010-01-01
Most students complete their first and only course in linear algebra with the understanding that a real, square matrix "A" has an inverse if and only if "rref"("A"), the reduced row echelon form of "A", is the identity matrix I[subscript n]. That is, if they apply elementary row operations via the Gauss-Jordan algorithm to the partitioned matrix…
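In the spirit of this article, a generalized inverse G satisfying A G A = A can be built from nothing but elementary row and column operations: reduce A to its rank normal form P A Q = [[I_r, 0], [0, 0]], then set G = Q D P, where D is the n x m block matrix with the same identity block. The sketch below is our own construction under that standard recipe, not the article's exposition.

```python
import numpy as np

def generalized_inverse(A, tol=1e-10):
    """Build G with A @ G @ A == A via row/column reduction to rank normal form."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    P, Q = np.eye(m), np.eye(n)    # accumulate row ops in P, column ops in Q
    B = A.copy()                   # invariant: B == P @ A @ Q
    r = 0
    for _ in range(min(m, n)):
        sub = np.abs(B[r:, r:])
        if sub.size == 0 or sub.max() < tol:
            break                  # no pivot left: rank found
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i, j = i + r, j + r
        B[[r, i]], P[[r, i]] = B[[i, r]], P[[i, r]]              # row swap
        B[:, [r, j]], Q[:, [r, j]] = B[:, [j, r]], Q[:, [j, r]]  # column swap
        piv = B[r, r]
        B[r] /= piv; P[r] /= piv                                 # scale pivot row
        for k in range(m):
            if k != r and abs(B[k, r]) > tol:                    # clear column r
                f = B[k, r]
                B[k] -= f*B[r]; P[k] -= f*P[r]
        for k in range(n):
            if k != r and abs(B[r, k]) > tol:                    # clear row r
                f = B[r, k]
                B[:, k] -= f*B[:, r]; Q[:, k] -= f*Q[:, r]
        r += 1
    D = np.zeros((n, m)); D[:r, :r] = np.eye(r)
    return Q @ D @ P

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])      # rank-2 example: no ordinary inverse exists
G = generalized_inverse(A)
```

Since P A Q is the normal form N, one checks A G A = P^{-1} N D N Q^{-1} = P^{-1} N Q^{-1} = A, and likewise G A G = G.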
Solving the three-body Coulomb breakup problem using exterior complex scaling
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCurdy, C.W.; Baertschy, M.; Rescigno, T.N.
2004-05-17
Electron-impact ionization of the hydrogen atom is the prototypical three-body Coulomb breakup problem in quantum mechanics. The combination of subtle correlation effects and the difficult boundary conditions required to describe two electrons in the continuum have made this one of the outstanding challenges of atomic physics. A complete solution of this problem, in the form of a "reduction to computation" of all aspects of the physics, is given by the application of exterior complex scaling, a modern variant of the mathematical tool of analytic continuation of the electronic coordinates into the complex plane that was used historically to establish the formal analytic properties of the scattering matrix. This review first discusses the essential difficulties of the three-body Coulomb breakup problem in quantum mechanics. It then describes the formal basis of exterior complex scaling of electronic coordinates as well as the details of its numerical implementation using a variety of methods including finite difference, finite elements, discrete variable representations, and B-splines. Given these numerical implementations of exterior complex scaling, the scattering wave function can be generated with arbitrary accuracy on any finite volume in the space of electronic coordinates, but there remains the fundamental problem of extracting the breakup amplitudes from it. Methods are described for evaluating these amplitudes. The question of the volume-dependent overall phase that appears in the formal theory of ionization is resolved. A summary is presented of accurate results that have been obtained for the case of electron-impact ionization of hydrogen as well as a discussion of applications to the double photoionization of helium.
Rusu, Darian; Stratul, Stefan-Ioan; Festila, Dana; Surlin, Petra; Kasaj, Adrian; Baderca, Flavia; Boariu, Marius; Jentsch, Holger; Locovei, Cosmin; Calenic, Bogdan
2017-01-01
The objective of the present case series is to describe the histology and surface ultrastructure of augmented keratinized gingival mucosa in humans during the early healing phase after surgical placement of a xenogeneic collagen matrix. Six patients underwent surgical augmentation of keratinized tissue by placement of a three-dimensional (3D) xenogeneic collagen matrix. Full-depth mucosal biopsies including original attached gingiva, augmented gingiva, and the separation zone were performed at baseline and at postoperative days 7 and 14. The specimens were stained with hematoxylin-eosin, Masson-trichrome, picrosirius red, and Papanicolaou's trichrome. Low-vacuum scanning electron microscopy (SEM) surface analysis was correlated with histology. The separation zone was clearly visible upon histologic and SEM examination at 7 days. The portions of augmented mucosa consisted of well-structured, immature gingival tissue with characteristics of per secundam healing, underlying a completely detached, amorphous, collagenous membrane-like structure approximately 100 μm thick. At 14 days, histologic and ultrastructural examinations showed an almost complete maturation process. There were no detectable remnants of the collagen matrix within the newly formed tissues at either time point. Within their limits, the results suggest that the 3D collagen matrix plays an indirect role during the early phase of wound healing by protecting the newly formed underlying tissue and guiding the epithelialization process.
NASA Technical Reports Server (NTRS)
Tenney, D. R.
1974-01-01
The progress of diffusion-controlled filament-matrix interaction in a metal matrix composite where the filaments and matrix comprise a two-phase binary alloy system was studied by mathematically modeling compositional changes resulting from prolonged elevated temperature exposure. The analysis treats a finite, diffusion-controlled, two-phase moving-interface problem by means of a variable-grid finite-difference technique. The Ni-W system was selected as an example system. Modeling was carried out for the 1000 to 1200 C temperature range for unidirectional composites containing from 6 to 40 volume percent tungsten filaments in a Ni matrix. The results are displayed to show both the change in filament diameter and matrix composition as a function of exposure time. Compositional profiles produced between first and second nearest neighbor filaments were calculated by superposition of finite-difference solutions of the diffusion equations.
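The full two-phase moving-interface model is beyond a short example, but its workhorse, an explicit finite-difference solution of the one-dimensional diffusion equation with a fixed-composition interface, can be sketched and checked against the classical erfc solution. All numbers below are illustrative and are not taken from the Ni-W study.

```python
import numpy as np
from math import erfc, sqrt

D = 1e-13                      # diffusivity, m^2/s (order typical of metals
                               # near 1000 C; illustrative only)
L_dom, nx = 50e-6, 200         # 50 micron domain, 200 nodes
dx = L_dom / nx
dt = 0.4 * dx*dx / D           # explicit stability requires D*dt/dx^2 <= 1/2
c = np.zeros(nx)
c[0] = 1.0                     # fixed composition at the interface x = 0

nt = 2000
for _ in range(nt):
    c[1:-1] += D*dt/dx**2 * (c[2:] - 2.0*c[1:-1] + c[:-2])
    c[0] = 1.0                 # hold the interface composition

# Analytic semi-infinite solution: c(x, t) = erfc(x / (2 sqrt(D t)))
t_end = nt * dt
x = dx * np.arange(nx)
c_exact = np.array([erfc(xi / (2.0*sqrt(D*t_end))) for xi in x])
```

The variable-grid technique of the paper extends this idea by letting the grid follow the moving phase boundary as the filament dissolves.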
Matrix of moments of the Legendre polynomials and its application to problems of electrostatics
NASA Astrophysics Data System (ADS)
Savchenko, A. O.
2017-01-01
In this work, properties of the matrix of moments of the Legendre polynomials are presented and proven. In particular, the explicit form of the elements of the matrix inverse to the matrix of moments is found and theorems of the linear combination and orthogonality are proven. On the basis of these properties, the total charge and the dipole moment of a conducting ball in a nonuniform electric field, the charge distribution over the surface of the conducting ball, its multipole moments, and the force acting on a conducting ball situated on the axis of a nonuniform axisymmetric electric field are determined. All assertions are formulated in theorems, the proofs of which are based on the properties of the matrix of moments of the Legendre polynomials.
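As a concrete companion to this abstract, take the matrix of moments to be M[i, j] = ∫ over [-1, 1] of x^i P_j(x) dx (our assumed convention; the paper's exact normalization may differ). The sketch below builds it by Gauss-Legendre quadrature, which is exact here because every integrand is a polynomial of degree at most 2n - 2.

```python
import numpy as np
from numpy.polynomial import legendre as L

def moment_matrix(n):
    """M[i, j] = integral over [-1, 1] of x**i * P_j(x), for i, j < n."""
    x, w = L.leggauss(n)                    # n-point rule: exact to degree 2n-1
    P = np.array([L.legval(x, np.eye(n)[j]) for j in range(n)])  # rows: P_j(x)
    X = np.vander(x, n, increasing=True).T  # rows: x**i at the nodes
    return (X * w) @ P.T

M = moment_matrix(6)
```

Because x^i is orthogonal to P_j whenever i < j, the matrix is lower triangular with nonzero diagonal, hence invertible, which is consistent with the paper's explicit formula for the inverse.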
Cut and join operator ring in tensor models
NASA Astrophysics Data System (ADS)
Itoyama, H.; Mironov, A.; Morozov, A.
2018-07-01
Recent advancement of rainbow tensor models based on their superintegrability (manifesting itself as the existence of an explicit expression for a generic Gaussian correlator) has allowed us to bypass the long-standing problem seen as the lack of eigenvalue/determinant representation needed to establish the KP/Toda integrability. As the mandatory next step, we discuss in this paper how to provide an adequate designation to each of the connected gauge-invariant operators that form a double coset, which is required to cleverly formulate a tree-algebra generalization of the Virasoro constraints. This problem goes beyond the enumeration problem per se tied to the permutation group, forcing us to introduce a few gauge fixing procedures to the coset. We point out that the permutation-based labeling, which has proven to be relevant for the Gaussian averages is, via interesting complexity, related to the one based on the keystone trees, whose algebra will provide the tensor counterpart of the Virasoro algebra for matrix models. Moreover, our simple analysis reveals the existence of nontrivial kernels and co-kernels for the cut operation and for the join operation respectively that prevent a straightforward construction of the non-perturbative RG-complete partition function and the identification of truly independent time variables. We demonstrate these problems by the simplest non-trivial Aristotelian RGB model with one complex rank-3 tensor, studying its ring of gauge-invariant operators, generated by the keystone triple with the help of four operations: addition, multiplication, cut and join.
NASA Astrophysics Data System (ADS)
Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang
2017-03-01
In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, so useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed over the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Second, the convolution matrix is constructed from the estimated results. Third, the split augmented Lagrangian shrinkage algorithm (SALSA) is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
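The link between the l_p prox and the Schatten-norm prox can be illustrated for the p = 1 case (the nuclear norm): the matrix prox is obtained by applying the elementwise l1 prox, i.e. soft-thresholding, to the singular values. A minimal sketch (function names are ours, not the paper's):

```python
import numpy as np

def prox_l1(x, lam):
    # Proximal map of lam*||x||_1: elementwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_schatten1(X, lam):
    # Proximal map of lam*||X||_S1 (nuclear norm): apply the l1 prox
    # to the singular values and rebuild the matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, lam)) @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Y = prox_schatten1(X, 0.5)        # singular values of Y are the thresholded ones of X
```

For general p the same pattern holds with the l_p prox applied to the spectrum, which is the relationship the paper exploits inside its ADMM iterations.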
Design of Robust Adaptive Unbalance Response Controllers for Rotors with Magnetic Bearings
NASA Technical Reports Server (NTRS)
Knospe, Carl R.; Tamer, Samir M.; Fedigan, Stephen J.
1996-01-01
Experimental results have recently demonstrated that an adaptive open loop control strategy can be highly effective in the suppression of unbalance induced vibration on rotors supported in active magnetic bearings. This algorithm, however, relies upon a predetermined gain matrix. Typically, this matrix is determined by an optimal control formulation resulting in the choice of the pseudo-inverse of the nominal influence coefficient matrix as the gain matrix. This solution may result in problems with stability and performance robustness since the estimated influence coefficient matrix is not equal to the actual influence coefficient matrix. Recently, analysis tools have been developed to examine the robustness of this control algorithm with respect to structured uncertainty. Herein, these tools are extended to produce a design procedure for determining the adaptive law's gain matrix. The resulting control algorithm has a guaranteed convergence rate and steady state performance in spite of the uncertainty in the rotor system. Several examples are presented which demonstrate the effectiveness of this approach and its advantages over the standard optimal control formulation.
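The adaptive open-loop scheme with a pseudo-inverse gain can be sketched as follows. This is a generic illustration under assumed dynamics, not the paper's rotor model: the influence-coefficient matrices, mismatch level, and iteration count are all invented for the demo, and convergence here depends on the spectral radius of (I - C G) being below one, which the paper's robust design procedure is meant to guarantee.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
C_true = rng.normal(size=(n, n)) + 5 * np.eye(n)   # actual influence coefficients (illustrative)
C_nom = C_true + 0.05 * rng.normal(size=(n, n))    # estimated model with mismatch
d = rng.normal(size=n)                              # unbalance-induced vibration with no control
G = np.linalg.pinv(C_nom)                           # pseudo-inverse of the nominal matrix as gain

u = np.zeros(n)
for _ in range(100):
    y = C_true @ u + d        # measured vibration under the current control input
    u = u - G @ y             # adaptive open-loop update

residual = np.linalg.norm(C_true @ u + d)           # remaining vibration amplitude
```

Since y_{k+1} = (I - C_true G) y_k, the vibration decays geometrically when the model error is small; the paper's contribution is choosing G so this holds robustly over the stated uncertainty set rather than only for the nominal model.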
Improve Problem Solving Skills through Adapting Programming Tools
NASA Technical Reports Server (NTRS)
Shaykhian, Linda H.; Shaykhian, Gholam Ali
2007-01-01
There are numerous ways for engineers and students to become better problem-solvers. The use of command-line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improved problem-solving skills through the use of a programming tool. MATLAB, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation and plotting of functions and data. MATLAB can be used as an interactive command line or as a sequence of commands saved in a file as a script or named functions. Prior programming experience is not required to use MATLAB commands. GNU Octave, part of the GNU project, is a free computer program for performing numerical computations and is comparable to MATLAB. MATLAB visual and command programming are presented here.
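The step-by-step, intermediate-output workflow described above looks much the same in any matrix-based environment. A Python/NumPy sketch of the idea (MATLAB or Octave would be a one-line translation; the system solved here is just a toy):

```python
import numpy as np

# Define the problem data as matrices, the central data structure
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Solve the linear system A x = b (MATLAB's "A \ b")
x = np.linalg.solve(A, b)

# Produce intermediate output to check the step before moving on
residual = A @ x - b
```

Inspecting `x` and `residual` at each step is exactly the error-catching habit the paper argues these tools encourage.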
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of the interbody force, which leads to a new factorization of the mass matrix M. Specifically, a factorization of the inverse of the mass matrix is derived in the form of the Schur complement as M^(-1) = C - B^* A^(-1) B, wherein the matrices C, A, and B are block tridiagonal. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^(-1). For closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Λ and its inverse Λ^(-1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
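The Schur-complement structure behind such factorizations can be checked numerically on a generic block matrix. This sketch does not reproduce the paper's block-tridiagonal C, A, B; it only verifies the standard identity that, for M = [[A, B], [B^T, D]], the lower-right block of M^(-1) is the inverse of the Schur complement D - B^T A^(-1) B (the blocks here are random and made well conditioned by adding a scaled identity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Generic well-conditioned blocks (illustrative, not from the paper)
A = 0.5 * rng.normal(size=(n, n)) + 5 * np.eye(n)
B = 0.5 * rng.normal(size=(n, n))
D = 0.5 * rng.normal(size=(n, n)) + 5 * np.eye(n)

# Assemble the block matrix M = [[A, B], [B^T, D]]
M = np.block([[A, B], [B.T, D]])

# Schur complement of A in M
S = D - B.T @ np.linalg.inv(A) @ B

# The lower-right n x n block of M^(-1) equals S^(-1)
block = np.linalg.inv(M)[n:, n:]
```

In the paper, exploiting this structure recursively over block-tridiagonal matrices is what turns the factorization into an O(n) serial algorithm and, after parallelization, an O(log n) one.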