Science.gov

Sample records for general sparse linear

  1. SPARSE GENERALIZED FUNCTIONAL LINEAR MODEL FOR PREDICTING REMISSION STATUS OF DEPRESSION PATIENTS

    PubMed Central

    Liu, Yashu; Nie, Zhi; Zhou, Jiayu; Farnum, Michael; Narayan, Vaibhav A; Wittenberg, Gayle; Ye, Jieping

    2014-01-01

    Complex diseases such as major depression affect people over time in complicated patterns. Longitudinal data analysis is thus crucial for understanding and prognosis of such diseases and has received considerable attention in the biomedical research community. Traditional classification and regression methods have been commonly applied in a simple (controlled) clinical setting with a small number of time points. However, these methods cannot be easily extended to the more general setting for longitudinal analysis, as they are not inherently built for time-dependent data. Functional regression, in contrast, is capable of identifying the relationship between features and outcomes along with time information by assuming features and/or outcomes as random functions over time rather than independent random variables. In this paper, we propose a novel sparse generalized functional linear model for the prediction of treatment remission status of depression patients from longitudinal features. Compared to traditional functional regression models, our model enables high-dimensional learning, smoothness of functional coefficients, longitudinal feature selection and interpretable estimation of functional coefficients. Extensive experiments have been conducted on the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) data set and the results show that the proposed sparse functional regression method achieves significantly higher prediction power than existing approaches. PMID:24297562

  2. A general parallel sparse-blocked matrix multiply for linear scaling SCF theory

    NASA Astrophysics Data System (ADS)

    Challacombe, Matt

    2000-06-01

    A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with “separation”. Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.

  3. MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems

    SciTech Connect

    Young, D.M.; Chen, J.Y.

    1994-12-31

    The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A^{-1}b of (1). They also choose an auxiliary matrix Z which is nonsingular. For n = 1, 2, ... they determine u^(n) such that u^(n) − u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the (Krylov) subspace spanned by the Krylov vectors r^(0), Ar^(0), ..., A^{n−1}r^(0), and where r^(0) = b − Au^(0). If ZA is SPD they also require that (u^(n) − ū, ZA(u^(n) − ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition, (Zr^(n), v) = 0, be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b − Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as "MGMRES", is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
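
    As a concrete reference point for the Krylov machinery described above, here is a minimal numpy sketch of the basic GMRES projection (an Arnoldi basis plus a small least-squares solve). It is not the authors' MGMRES, which additionally introduces the auxiliary matrix Z = A^T Y.

    ```python
    import numpy as np

    def gmres_basic(A, b, m=30):
        """Minimal GMRES: build an Arnoldi basis for the Krylov subspace
        K_m(r0, A) and minimize ||b - A u|| over u0 + K_m."""
        n = len(b)
        x0 = np.zeros(n)
        r0 = b - A @ x0
        beta = np.linalg.norm(r0)
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r0 / beta
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):            # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:           # happy breakdown
                m = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
        return x0 + Q[:, :m] @ y
    ```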

  4. Iterative solution of general sparse linear systems on clusters of workstations

    SciTech Connect

    Lo, Gen-Ching; Saad, Y.

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious challenge, is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels, such as inner products, could erase any gains from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.

  5. Enhanced multi-level block ILU preconditioning strategies for general sparse linear systems

    NASA Astrophysics Data System (ADS)

    Saad, Yousef; Zhang, Jun

    2001-05-01

    This paper introduces several strategies to deal with pivot blocks in multi-level block incomplete LU factorization (BILUM) preconditioning techniques. These techniques are aimed at increasing the robustness and controlling the amount of fill-ins of BILUM for solving large sparse linear systems when large-size blocks are used to form block-independent sets. Techniques proposed in this paper include double-dropping strategies, approximate singular-value decomposition, variable size blocks and use of an arrowhead block submatrix. We point out the advantages and disadvantages of these strategies and discuss their efficient implementations. Numerical experiments are conducted to show the usefulness of the new techniques in dealing with hard-to-solve problems arising from computational fluid dynamics. In addition, we discuss the relation between multi-level ILU preconditioning methods and algebraic multi-level methods.
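
    For orientation, a single-level analogue of these ideas is the threshold ILU available in SciPy. The hedged sketch below is not BILUM itself; its drop_tol and fill_factor parameters play the role of the double-dropping strategies discussed in the paper, and the test matrix is an illustrative 1-D Poisson stencil.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Small sparse test system (1-D Poisson-like tridiagonal matrix).
    n = 100
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Incomplete LU with dual dropping: drop_tol discards small entries,
    # fill_factor caps the amount of fill-in.
    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator(A.shape, ilu.solve)  # preconditioner as an operator

    x, info = spla.gmres(A, b, M=M)
    assert info == 0  # 0 means the iteration converged
    ```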

  6. Sparse linear programming subprogram

    SciTech Connect

    Hanson, R.J.; Hiebert, K.L.

    1981-12-01

    This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
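
    SPLP() itself is Fortran 77; purely as a modern sketch of the same problem statement (a sparse constraint matrix, variables with upper, lower, or no bounds, and both primal and dual solutions returned), SciPy's linprog can be used as follows. The numbers are illustrative, not from the report.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.optimize import linprog

    # minimize c^T x  subject to  A_eq x = b_eq  and per-variable bounds
    c = np.array([1.0, 2.0, -1.0])
    A_eq = sp.csr_matrix(np.array([[1.0, 1.0, 1.0],
                                   [2.0, 0.0, 1.0]]))
    b_eq = np.array([4.0, 3.0])
    # upper, lower, or no bounds per variable, as in SPLP's problem statement
    bounds = [(0, None), (0, 5), (None, None)]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print(res.x, res.fun)            # primal solution
    print(res.eqlin.marginals)       # dual values (cf. SPLP returning both)
    ```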

  7. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
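
    A hedged sketch of the column-wise idea: here each column of the approximate inverse is fit by a direct least-squares solve over a fixed sparsity pattern (the pattern of A's own column), rather than the authors' sparse-mode iteration. The dense assembly at the end is for illustration only.

    ```python
    import numpy as np
    import scipy.sparse as sp

    def spai_columns(A):
        """Sparse approximate inverse M ~ A^{-1}: for each column j,
        minimize ||A m_j - e_j||_2 over the sparsity pattern of column j of A."""
        A = sp.csc_matrix(A)
        n = A.shape[0]
        cols = []
        for j in range(n):
            pattern = A[:, j].indices          # allowed nonzeros of m_j
            Asub = A[:, pattern].toarray()     # columns of A touching the pattern
            e = np.zeros(n)
            e[j] = 1.0
            m, *_ = np.linalg.lstsq(Asub, e, rcond=None)
            col = np.zeros(n)
            col[pattern] = m
            cols.append(col)
        return sp.csc_matrix(np.column_stack(cols))
    ```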

  8. Inpainting with sparse linear combinations of exemplars

    SciTech Connect

    Wohlberg, Brendt

    2008-01-01

    We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity, found in other exemplar-based methods, of correctly ordering the filling-in of missing regions. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.
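
    A minimal sketch of the core idea, assuming scikit-learn's Lasso as the functional minimizer and a precomputed dictionary of exemplar blocks; the names and the regularization value are hypothetical, not from the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def inpaint_block(block, mask, exemplars, alpha=1e-3):
        """Exemplar-based inpainting as sparse coding: fit the known pixels
        of `block` as a sparse linear combination of exemplar blocks, then
        fill the missing pixels from the same combination.
        block: (d,) vector; mask: boolean, True = known pixel;
        exemplars: (d, K) dictionary of blocks from intact image regions."""
        fit = Lasso(alpha=alpha, fit_intercept=False)
        fit.fit(exemplars[mask], block[mask])   # sparse coefficients from known pixels
        recon = exemplars @ fit.coef_
        out = block.copy()
        out[~mask] = recon[~mask]               # keep known pixels, fill the hole
        return out
    ```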

  9. Sparse brain network using penalized linear regression

    NASA Astrophysics Data System (ADS)

    Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.

    2011-03-01

    Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with an l1-norm penalty. The method is applied to a brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of autism spectrum disorder (ASD) children and pediatric control (PedCon) subjects. To validate the results, we check the reproducibility of the obtained brain networks by leave-one-out cross-validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
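
    A hedged sketch of the underlying estimator, l1-penalized neighborhood selection: regress each ROI on all others and read edges off the nonzero coefficients. scikit-learn is assumed and the regularization value is illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_network(X, alpha=0.1):
        """l1-penalized neighborhood selection: regress each ROI on all
        others; nonzero coefficients indicate partial-correlation edges.
        X: (n subjects, p ROIs) data matrix."""
        n, p = X.shape
        W = np.zeros((p, p))
        for j in range(p):
            others = np.delete(np.arange(p), j)
            fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
            W[j, others] = fit.coef_
        # symmetrize: average the two directed coefficient magnitudes
        return (np.abs(W) + np.abs(W.T)) / 2
    ```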

  10. PSPIKE: A Parallel Hybrid Sparse Linear System Solver

    NASA Astrophysics Data System (ADS)

    Manguoglu, Murat; Sameh, Ahmed H.; Schenk, Olaf

    The availability of large-scale computing platforms comprised of tens of thousands of multicore processors motivates the need for the next generation of highly scalable sparse linear system solvers. These solvers must optimize parallel performance, processor (serial) performance, as well as memory requirements, while being robust across broad classes of applications and systems. In this paper, we present a new parallel solver that combines the desirable characteristics of direct methods (robustness) and effective iterative solvers (low computational cost), while alleviating their drawbacks (memory requirements, lack of robustness). Our proposed hybrid solver is based on the general sparse solver PARDISO, and the “Spike” family of hybrid solvers. The resulting algorithm, called PSPIKE, is as robust as direct solvers, more reliable than classical preconditioned Krylov subspace methods, and much more scalable than direct sparse solvers. We support our performance and parallel scalability claims using detailed experimental studies and comparison with direct solvers, as well as classical preconditioned Krylov methods.

  11. Parallel iterative methods for sparse linear and nonlinear equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable, when solving linear as well as nonlinear systems of equations. There have been several different approaches taken to adapt iterative methods for supercomputers. Some of these approaches are discussed and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.

  12. Anisotropic interpolation of sparse generalized image samples.

    PubMed

    Bourquard, Aurélien; Unser, Michael

    2013-02-01

    Practical image-acquisition systems are often modeled as a continuous-domain prefilter followed by an ideal sampler, where generalized samples are obtained after convolution with the impulse response of the device. In this paper, our goal is to interpolate images from a given subset of such samples. We express our solution in the continuous domain, considering consistent resampling as a data-fidelity constraint. To make the problem well posed and ensure edge-preserving solutions, we develop an efficient anisotropic regularization approach that is based on an improved version of the edge-enhancing anisotropic diffusion equation. Following variational principles, our reconstruction algorithm minimizes successive quadratic cost functionals. To ensure fast convergence, we solve the corresponding sequence of linear problems by using multigrid iterations that are specifically tailored to their sparse structure. We conduct illustrative experiments and discuss the potential of our approach both in terms of algorithmic design and reconstruction quality. In particular, we present results that use as little as 2% of the image samples. PMID:22968212

  13. The efficient parallel iterative solution of large sparse linear systems

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1992-06-01

    The development of efficient, general-purpose software for the iterative solution of sparse linear systems on a parallel MIMD computer requires an interesting combination of expertise. Parallel graph heuristics, convergence analysis, and basic linear algebra implementation issues must all be considered. In this paper, we discuss how we have incorporated recent results in these areas into a general-purpose iterative solver. First, we consider two recently developed parallel graph coloring heuristics. We show how the method proposed by Luby, based on determining maximal independent sets, can be modified to run in an asynchronous manner, and give an expected running-time bound for this modified heuristic. In addition, a number of graph reduction heuristics are described that are used in our implementation to improve the individual processor performance. The effect of these various graph reductions on the solution of sparse triangular systems is categorized. Finally, we discuss the performance of this solver from the perspective of two large-scale applications: a piezoelectric crystal finite-element modeling problem, and a nonlinear optimization problem to determine the minimum energy configuration of a three-dimensional, layered superconductor model.
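
    For reference, a minimal sequential sketch of the Luby maximal-independent-set step that the coloring heuristic builds on; the paper's actual contribution, the asynchronous variant and its running-time bound, is not reproduced here.

    ```python
    import random

    def luby_mis(adj):
        """Luby's randomized maximal independent set algorithm.
        adj: dict mapping each vertex to the set of its neighbours."""
        active = set(adj)
        mis = set()
        while active:
            # each active vertex draws a random priority
            prio = {v: random.random() for v in active}
            # vertices beating all active neighbours join the MIS
            winners = {v for v in active
                       if all(prio[v] > prio[u] for u in adj[v] if u in active)}
            mis |= winners
            # winners and their neighbours drop out of the active set
            removed = set(winners)
            for v in winners:
                removed |= adj[v]
            active -= removed
        return mis

    # a graph coloring follows by repeatedly extracting an MIS from the
    # still-uncolored vertices and giving each extracted set a new color
    ```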

  14. A model of asynchronous iterative algorithms for solving large, sparse, linear systems

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Patrick, M. L.

    1984-01-01

    Solving large, sparse, linear systems of equations is one of the fundamental problems in large scale scientific and engineering computation. A model of a general class of asynchronous, iterative solution methods for linear systems is developed. In the model, the system is solved by creating several cooperating tasks that each compute a portion of the solution vector. This model is then analyzed to determine the expected intertask data transfer and task computational complexity as functions of the number of tasks. Based on the analysis, recommendations for task partitioning are made. These recommendations are a function of the sparseness of the linear system, its structure (i.e., randomly sparse or banded), and dimension.

  15. Out-of-Core Solutions of Complex Sparse Linear Equations

    NASA Technical Reports Server (NTRS)

    Yip, E. L.

    1982-01-01

    ETCLIB is a library of subroutines for obtaining out-of-core solutions of complex sparse linear equations. The routines apply to dense and sparse matrices too large to be stored in core. They are useful for solving any set of linear equations, but particularly in cases where the coefficient matrix has no special properties that guarantee convergence with any of the iterative processes. The only assumption made is that the coefficient matrix is not singular.

  16. Parallel, iterative solution of sparse linear systems: Models and architectures

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Patrick, M. L.

    1984-01-01

    A model of a general class of asynchronous, iterative solution methods for linear systems is developed. In the model, the system is solved by creating several cooperating tasks that each compute a portion of the solution vector. A data transfer model predicting both the probability that data must be transferred between two tasks and the amount of data to be transferred is presented. This model is used to derive an execution time model for predicting parallel execution time and an optimal number of tasks given the dimension and sparsity of the coefficient matrix and the costs of computation, synchronization, and communication. The suitability of different parallel architectures for solving randomly sparse linear systems is discussed. Based on the complexity of task scheduling, one parallel architecture, based on a broadcast bus, is presented and analyzed.

  17. Recent History Functional Linear Models for Sparse Longitudinal Data

    PubMed Central

    Kim, Kion; Şentürk, Damla; Li, Runze

    2011-01-01

    We consider the recent history functional linear models, relating a longitudinal response to a longitudinal predictor where the predictor process only in a sliding window into the recent past has an effect on the response value at the current time. We propose an estimation procedure for recent history functional linear models that is geared towards sparse longitudinal data, where the observation times across subjects are irregular and total number of measurements per subject is small. The proposed estimation procedure builds upon recent developments in literature for estimation of functional linear models with sparse data and utilizes connections between the recent history functional linear models and varying coefficient models. We establish uniform consistency of the proposed estimators, propose prediction of the response trajectories and derive their asymptotic distribution leading to asymptotic point-wise confidence bands. We include a real data application and simulation studies to demonstrate the efficacy of the proposed methodology. PMID:21691421
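
    In symbols, the model class being described can be sketched as follows; the notation is assumed here for illustration, not the authors' exact formulation.

    ```latex
    % response at time t depends on the predictor only through the
    % sliding window [t - \Delta, t] into the recent past
    E\left[ Y(t) \mid X \right]
      = \beta_0(t) + \int_{t-\Delta}^{t} \beta_1(t, s)\, X(s)\, ds
    ```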

  18. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.

  19. Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging

    SciTech Connect

    Sheen, David M.; Hall, Thomas E.

    2014-06-09

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  20. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    NASA Astrophysics Data System (ADS)

    Pilipchuk, L. A.; Pilipchuk, A. S.

    2015-11-01

    In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing solutions of sparse linear systems. One of the applications is the Sensor Location Problem for a symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.

  2. Accelerating sparse linear algebra using graphics processing units

    NASA Astrophysics Data System (ADS)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-per-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is on a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.

  3. Scalable Library for the Parallel Solution of Sparse Linear Systems

    Energy Science and Technology Software Center (ESTSC)

    1993-07-14

    BlockSolve is a scalable parallel software library for the solution of large sparse, symmetric systems of linear equations. It runs on a variety of parallel architectures and can easily be ported to others. BlockSolve is primarily intended for the solution of sparse linear systems that arise from physical problems having multiple degrees of freedom at each node point. For example, when the finite element method is used to solve practical problems in structural engineering, each node will typically have anywhere from 3 to 6 degrees of freedom associated with it. BlockSolve is written to take advantage of problems of this nature; however, it is still reasonably efficient for problems that have only one degree of freedom associated with each node, such as the three-dimensional Poisson problem. It does not require that the matrices have any particular structure other than being sparse and symmetric. BlockSolve is intended to be used within real application codes. It is designed to work best in the context of our experience, which indicates that most application codes solve the same linear systems with several different right-hand sides and/or solve linear systems with the same structure but different matrix values multiple times.

  4. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning

    PubMed Central

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝd, and the dictionary is learned from the training data using the vector space structure of ℝd and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583

  5. Reconstruction techniques for sparse multistatic linear array microwave imaging

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2014-06-01

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  6. A multi-level method for sparse linear systems

    SciTech Connect

    Shapira, Y.

    1997-09-01

    A multi-level method for the solution of sparse linear systems is introduced. The definition of the method is based on data from the coefficient matrix alone. An upper bound for the condition number is available for certain symmetric positive definite (SPD) problems. Numerical experiments confirm the analysis and illustrate the efficiency of the method for diffusion problems with discontinuous coefficients whose discontinuities are not aligned with the coarse meshes.

  7. lp-lq penalty for sparse linear and sparse multiple kernel multitask learning.

    PubMed

    Rakotomamonjy, Alain; Flamary, Rémi; Gasso, Gilles; Canu, Stéphane

    2011-08-01

    Recently, there has been much interest in the multitask learning (MTL) problem with the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on l(p)-l(q) (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) mixed norms as sparsity-inducing penalties. Our motivation for addressing such a large class of penalties is to adapt the penalty to the problem at hand, thus leading to better performance and a better sparsity pattern. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the l(1)-l(q) penalty, which helps us in proposing an alternate optimization algorithm. Although very simple, the latter algorithm provably converges to the global minimum of the l(1)-l(q) penalized problem. For the linear case, we extend existing work on accelerated proximal gradient methods to this penalty. Our contribution in this context is to provide an efficient scheme for computing the l(1)-l(q) proximal operator. Then, for the more general case, when p < 1, we solve the resulting nonconvex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted l(1)-l(q) sparse MTL problem. Empirical evidence from toy datasets and real-world datasets dealing with brain-computer interface single-trial electroencephalogram classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms. PMID:21813358
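
    For the convex l1-l2 member of this penalty family, the proximal operator has a well-known closed form, block soft-thresholding. A minimal numpy sketch of that special case follows; it is not the paper's general l1-lq scheme.

    ```python
    import numpy as np

    def prox_l1_l2(w, groups, lam):
        """Proximal operator of lam * sum_g ||w_g||_2 (the l1-l2 mixed norm):
        each group is shrunk toward zero by lam, and zeroed entirely when its
        norm falls below lam. `groups` is a list of index arrays."""
        out = w.copy()
        for g in groups:
            norm = np.linalg.norm(w[g])
            out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * w[g]
        return out

    # usage: one proximal-gradient step on f(w) + lam * sum_g ||w_g||_2
    # w = prox_l1_l2(w - step * grad_f(w), groups, step * lam)
    ```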

  8. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes. PMID:26353352

  9. On time delay estimation from a sparse linear prediction perspective.

    PubMed

    He, Hongsen; Yang, Tao; Chen, Jingdong

    2015-02-01

    This paper proposes a sparse linear prediction based algorithm to estimate time difference of arrival. This algorithm unifies the cross correlation method without prewhitening and that with prewhitening via an ℓ2/ℓ1 optimization process, which is solved by an augmented Lagrangian alternating direction method. It also forms a set of time delay estimators that make a tradeoff between prewhitening and non-prewhitening through adjusting a regularization parameter. The effectiveness of the proposed algorithm is demonstrated in noisy and reverberant environments. PMID:25698037

  10. Sparse Substring Pattern Set Discovery Using Linear Programming Boosting

    NASA Astrophysics Data System (ADS)

    Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki

    In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. Then we solve this problem by using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.

  11. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically. PMID:26735744

  12. General linear chirplet transform

    NASA Astrophysics Data System (ADS)

    Yu, Gang; Zhou, Yiqi

    2016-03-01

    Time-frequency (TF) analysis (TFA) is an effective tool to characterize the time-varying features of a signal, and it has drawn much attention over a fairly long period. With the development of TFA, many advanced methods have been proposed that can provide more precise TF results. However, some restrictions are inevitably introduced. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which can overcome some limitations of current TFA methods. In numerical and experimental validations, by comparing with current TFA methods, some advantages of GLCT are demonstrated: it characterizes multi-component signals with distinct non-linear features well, is independent of the mathematical model and the initial TFA method, allows for the reconstruction of the component of interest, and is insensitive to noise.

  13. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  14. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

    We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  15. A linear geospatial streamflow modeling system for data sparse environments

    USGS Publications Warehouse

    Asante, Kwabena O.; Arlan, Guleid A.; Pervez, Md Shahriar; Rowland, James

    2008-01-01

    In many river basins around the world, inaccessibility of flow data is a major obstacle to water resource studies and operational monitoring. This paper describes a geospatial streamflow modeling system which is parameterized with global terrain, soils and land cover data and run operationally with satellite‐derived precipitation and evapotranspiration datasets. Simple linear methods transfer water through the subsurface, overland and river flow phases, and the resulting flows are expressed in terms of standard deviations from mean annual flow. In sample applications, the modeling system was used to simulate flow variations in the Congo, Niger, Nile, Zambezi, Orange and Lake Chad basins between 1998 and 2005, and the resulting flows were compared with mean monthly values from the open‐access Global River Discharge Database. While the uncalibrated model cannot predict the absolute magnitude of flow, it can quantify flow anomalies in terms of relative departures from mean flow. Most of the severe flood events identified in the flow anomalies were independently verified by the Dartmouth Flood Observatory (DFO) and the Emergency Disaster Database (EM‐DAT). Despite its limitations, the modeling system is valuable for rapid characterization of the relative magnitude of flood hazards and seasonal flow changes in data sparse settings.
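
    A hedged sketch of what "simple linear methods" for a single flow phase can look like: a linear reservoir whose outflow is proportional to storage, with the simulated flow then standardized as the paper describes. The recession coefficient k is hypothetical, not a parameter of the actual system.

    ```python
    import numpy as np

    def linear_reservoir(inflow, k=0.2):
        """One linear-reservoir routing phase: storage accumulates inflow
        and releases a fixed fraction k of storage each time step."""
        storage, out = 0.0, []
        for p in inflow:
            storage += p
            q = k * storage      # outflow proportional to storage
            storage -= q
            out.append(q)
        return np.asarray(out)

    # express simulated flow as standard deviations from its mean, as the
    # paper does for uncalibrated anomaly reporting
    q = linear_reservoir(np.random.default_rng(0).gamma(2.0, 1.0, 365))
    anomaly = (q - q.mean()) / q.std()
    ```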

  16. Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies.

    PubMed

    Mahrous, Hesham; Ward, Rabab

    2016-01-01

    This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channels signal data as a vector that is constructed in a specific way, so that it has a better block sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm, so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied on the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and also MMV methods. It also shows significant lower compression errors even at high compression ratios such as 10:1 on three different datasets. PMID:26861335

  18. Quantization of general linear electrodynamics

    SciTech Connect

    Rivera, Sergio; Schuller, Frederic P.

    2011-03-15

    General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.

  19. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; Thornquist, Heidi

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  20. Iterative solutions of sparse linear systems on systolic arrays. Technical report

    SciTech Connect

    Melhem, R.

    1987-03-01

    The idea of grouping the non-zero elements of a sparse matrix into few strips that are almost parallel is applied to the design of a systolic accelerator for sparse matrix operations. This accelerator is, then, integrated into a complete systolic system for the solution of large sparse linear systems of equations. The design demonstrates that the application of systolic arrays is not limited to regular computations, and that computationally irregular problems may be solved on systolic networks if local storage is provided in each systolic cell for buffering the irregularity in the data movement and for absorbing the irregularity in the computation.

  1. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms, so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of previous methods, with the goal of enabling accelerator simulations on what was then the world's largest unclassified supercomputer at NERSC for this class of problems. Specifically, a new method, the Hermitian skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
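
    A minimal dense sketch of the Hermitian/skew-Hermitian splitting iteration in its standard (Bai-Golub-Ng) form; the parameter alpha and the direct inner solves are illustrative simplifications, not the project's production implementation.

    ```python
    import numpy as np

    def hss_iteration(A, b, alpha=1.0, iters=100):
        """HSS iteration for Ax = b with A non-Hermitian positive (semi)definite:
            (alpha I + H) x_{k+1/2} = (alpha I - S) x_k     + b
            (alpha I + S) x_{k+1}   = (alpha I - H) x_{k+1/2} + b
        where H = (A + A^H)/2 is the Hermitian part and
              S = (A - A^H)/2 is the skew-Hermitian part."""
        n = A.shape[0]
        H = (A + A.conj().T) / 2
        S = (A - A.conj().T) / 2
        I = np.eye(n)
        x = np.zeros(n, dtype=A.dtype)
        for _ in range(iters):
            x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
            x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        return x
    ```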

  2. Iterative solution of large, sparse linear systems on a static data flow architecture - Performance studies

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Patrick, M. L.

    1985-01-01

    The applicability of static data flow architectures to the iterative solution of sparse linear systems of equations is investigated. An analytic performance model of a static data flow computation is developed. This model includes both spatial parallelism, concurrent execution in multiple PE's, and pipelining, the streaming of data from array memories through the PE's. The performance model is used to analyze a row partitioned iterative algorithm for solving sparse linear systems of algebraic equations. Based on this analysis, design parameters for the static data flow architecture as a function of matrix sparsity and dimension are proposed.

  3. The search for high level parallelism for the iterative solution of large sparse linear systems

    SciTech Connect

    Young, D.M.

    1988-07-01

    In this paper the author is concerned with the numerical solution, based on iterative methods, of large sparse systems of linear algebraic equations of the type which arise in the numerical solution of elliptic and parabolic partial differential equations by finite difference or finite element methods. He considers linear systems of the form Au = b where A is a given N x N matrix which is large and sparse and where b is a given N x 1 column vector. He assumes that A is symmetric and positive definite (SPD). He considers iterative algorithms which consist of a basic iterative method, such as the Richardson, Jacobi, SSOR or incomplete Cholesky method, combined with an acceleration procedure such as Chebyshev acceleration or conjugate gradient acceleration. The object of this paper is, however, to examine some high-level methods for achieving parallelism. Such techniques involve only matrix/vector operations and do not involve working with blocks of the matrix, subdividing the region, or using different meshes. It is expected that if effective high-level methods could be developed, they could be combined with block and domain decomposition methods, and related methods, to obtain even greater speedups. It is also expected that by working at a higher level it will eventually be possible to develop general purpose software for parallel machines similar to the ITPACK software packages which have already been developed for sequential and vector machines. The discussion here is primarily devoted to describing various techniques which the author and others have considered for obtaining high-level parallelism. The author plans to continue research on these techniques and eventually to develop algorithms and programs for multiprocessors based on them.
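
    As a baseline for the acceleration procedures named above, a minimal conjugate gradient sketch for SPD A. It is unpreconditioned; pairing it with SSOR or incomplete Cholesky yields the combined basic-method-plus-acceleration algorithms the abstract discusses.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Minimal conjugate gradient for an SPD matrix A, built entirely
        from matrix/vector operations (the 'high-level' kernels at issue)."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x
    ```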

  4. Reconstruction Method for Optical Tomography Based on the Linearized Bregman Iteration with Sparse Regularization

    PubMed Central

    Leng, Chengcai; Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang

    2015-01-01

    Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at cellular and molecular levels; it includes different modalities such as bioluminescence tomography, fluorescence molecular tomography and Cerenkov luminescence tomography. The inverse problem is ill-posed for the above modalities, which causes nonunique solutions. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR) for reconstruction. Considering the sparsity characteristics of the reconstructed sources, the sparsity can be regarded as a kind of a priori information and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to further achieve fast and accurate reconstruction results. Experimental results in a numerical simulation and in vivo mouse demonstrate the effectiveness and potential of the proposed method. PMID:26421055
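
    A minimal sketch of the linearized Bregman iteration in its standard sparse-recovery form (a residual update followed by soft-thresholding); the paper's LBSR method adds tomography-specific regularization details not reproduced here.

    ```python
    import numpy as np

    def linearized_bregman(A, b, mu=1.0, delta=None, iters=500):
        """Linearized Bregman iteration for the sparse recovery problem
        min mu*||u||_1 + (1/2)||u||_2^2  subject to  Au = b:
            v_{k+1} = v_k + A^T (b - A u_k)
            u_{k+1} = delta * shrink(v_{k+1}, mu)
        where shrink is the soft-thresholding operator."""
        m, n = A.shape
        if delta is None:
            delta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
        u = np.zeros(n)
        v = np.zeros(n)
        for _ in range(iters):
            v += A.T @ (b - A @ u)
            u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)
        return u
    ```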

  6. LU-decomposition with iterative refinement for solving sparse linear systems

    NASA Astrophysics Data System (ADS)

    Al-Kurdi, Ahmad; Kincaid, David R.

    2006-01-01

    In the solution of a system of linear algebraic equations Ax=b with a large sparse coefficient matrix A, the LU-decomposition with iterative refinement (LUIR) is compared with the LU-decomposition with direct solution (LUDS), which is without iterative refinement. We verify by numerical experiments that the use of sparse matrix techniques with LUIR may result in a reduction of both the computing time and the storage requirements. The powers of a Boolean matrix strategy (PBS) is used in an effort to achieve such a reduction and in an attempt to control the sparsity. We conclude that iterative refinement procedures may be efficiently used as an option in software for the solution of sparse linear systems of equations.
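
    A hedged SciPy sketch of the LUIR idea: factor A once, then reuse the same factors to refine the solution with the residual. The paper's powers-of-a-Boolean-matrix (PBS) strategy for controlling sparsity is not modeled here.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def lu_with_refinement(A, b, max_ref=5, tol=1e-12):
        """LU decomposition with iterative refinement: one sparse LU
        factorization, then residual-driven corrections from the same factors."""
        A = sp.csc_matrix(A)
        lu = spla.splu(A)          # sparse LU factorization (done once)
        x = lu.solve(b)
        for _ in range(max_ref):
            r = b - A @ x          # residual (ideally computed in higher precision)
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            x += lu.solve(r)       # correction reuses the existing factors
        return x
    ```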

  7. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    SciTech Connect

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.

  8. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.

  9. The impact of improved sparse linear solvers on industrial engineering applications

    SciTech Connect

    Heroux, M.; Baddourah, M.; Poole, E.L.; Yang, Chao Wu

    1996-12-31

    There are usually many factors that ultimately determine the quality of computer simulation for engineering applications. Some of the most important are the quality of the analytical model and approximation scheme, the accuracy of the input data and the capability of the computing resources. However, in many engineering applications the characteristics of the sparse linear solver are the key factors in determining how complex a problem a given application code can solve. Therefore, the advent of a dramatically improved solver often brings with it dramatic improvements in our ability to do accurate and cost effective computer simulations. In this presentation we discuss the current status of sparse iterative and direct solvers in several key industrial CFD and structures codes, and show the impact that recent advances in linear solvers have made on both our ability to perform challenging simulations and the cost of those simulations. We also present some of the current challenges we have and the constraints we face in trying to improve these solvers. Finally, we discuss future requirements for sparse linear solvers on high performance architectures and try to indicate the opportunities that exist if we can develop even more improvements in linear solver capabilities.

  10. Improving the energy efficiency of sparse linear system solvers on multicore and manycore systems.

    PubMed

    Anzt, H; Quintana-Ortí, E S

    2014-06-28

    While most recent breakthroughs in scientific research rely on complex simulations carried out in large-scale supercomputers, the power draw and energy spent for this purpose are increasingly becoming a limiting factor to this trend. In this paper, we provide an overview of the current status of energy-efficient scientific computing by reviewing different technologies used to monitor power draw as well as power- and energy-saving mechanisms available in commodity hardware. For the particular domain of sparse linear algebra, we analyse the energy efficiency of a broad collection of hardware architectures and investigate how algorithmic and implementation modifications can improve the energy performance of sparse linear system solvers, without negatively impacting their performance. PMID:24842036

  11. Algorithms for solving large sparse systems of simultaneous linear equations on vector processors

    NASA Technical Reports Server (NTRS)

    David, R. E.

    1984-01-01

    Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.

  12. Inference of dense spectral reflectance images from sparse reflectance measurement using non-linear regression modeling

    NASA Astrophysics Data System (ADS)

    Deglint, Jason; Kazemzadeh, Farnoud; Wong, Alexander; Clausi, David A.

    2015-09-01

    One method to acquire multispectral images is to sequentially capture a series of images where each image contains information from a different bandwidth of light. Another method is to use a series of beamsplitters and dichroic filters to guide different bandwidths of light onto different cameras. However, these methods are time-consuming and expensive, and they perform poorly in dynamic scenes or when observing transient phenomena. An alternative strategy for capturing multispectral data is to infer it from sparse spectral reflectance measurements captured using an imaging device with overlapping bandpass filters, such as a consumer digital camera using a Bayer filter pattern. Currently the only method of inferring dense reflectance spectra is the Wiener adaptive filter, which makes Gaussian assumptions about the data. However, these assumptions may not always hold true for all data. We propose a new technique to infer dense reflectance spectra from sparse spectral measurements through the use of a non-linear regression model. The non-linear regression model used in this technique is the random forest model, an ensemble of decision trees trained via the spectral characterization of the optical imaging system and spectral data pair generation. This model is then evaluated by spectrally characterizing different patches on the Macbeth color chart, as well as by reconstructing inferred multispectral images. Results show that the proposed technique can produce inferred dense reflectance spectra that correlate well with the true dense reflectance spectra, which illustrates the merits of the technique.
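
    The inference step is multi-output regression from a few camera responses to a dense spectrum. The toy sketch below illustrates the idea with scikit-learn; the Gaussian filter sensitivities and the synthetic smooth spectra are assumptions made for illustration, not the paper's characterization data:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        wavelengths = np.linspace(400, 700, 61)       # dense spectral grid (nm)

        def smooth_spectrum():
            # Random smooth reflectance spectrum (moving average of noise).
            return np.convolve(rng.random(61), np.ones(9) / 9, mode="same")

        # Assumed overlapping RGB filter sensitivities (Gaussian bandpasses).
        filters = np.stack([np.exp(-0.5 * ((wavelengths - c) / 40.0) ** 2)
                            for c in (450, 550, 650)])

        spectra = np.stack([smooth_spectrum() for _ in range(2000)])
        responses = spectra @ filters.T               # sparse measurements (R, G, B)

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(responses, spectra)                 # camera responses -> spectra

        test = smooth_spectrum()
        inferred = model.predict((test @ filters.T).reshape(1, -1))[0]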

  13. Many-core graph analytics using accelerated sparse linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric

    2016-05-01

    Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
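
    The central idea of linear-algebra-based graph analytics is that one traversal step is a sparse matrix-vector product. Below is a small breadth-first search written this way with SciPy rather than an actual GraphBLAS library; the five-vertex graph is made up:

        import numpy as np
        import scipy.sparse as sp

        edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
        rows, cols = zip(*edges)
        n = 5
        A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
        A = (A + A.T).tocsr()                        # undirected adjacency matrix

        level = np.full(n, -1)
        level[0] = 0                                 # BFS root is vertex 0
        frontier = np.zeros(n, dtype=bool)
        frontier[0] = True
        depth = 0
        while frontier.any():
            depth += 1
            reached = (A @ frontier.astype(float)) > 0   # expand frontier: one SpMV
            frontier = reached & (level < 0)             # mask out visited vertices
            level[frontier] = depth

    The mask corresponds to the masked operations in the GraphBLAS standard, and swapping the (+, x) semiring for (min, +) turns the same pattern into a shortest-path computation.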

  14. A scalable 2-D parallel sparse solver

    SciTech Connect

    Kothari, S.C.; Mitra, S.

    1995-12-01

    Scalability beyond a small number of processors, typically 32 or fewer, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256-processor nCUBE2s machine using Boeing/Harwell benchmark matrices.

  15. LANZ: Software solving the large sparse symmetric generalized eigenproblem

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1990-01-01

    A package, LANZ, for solving the large symmetric generalized eigenproblem is described. The package was tested on four different architectures: Convex 200, CRAY Y-MP, Sun-3, and Sun-4. The package uses Lanczos' method and is based on recent research into solving the generalized eigenproblem.

  16. Aggregation of sparse linear discriminant analyses for event-related potential classification in brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Zhao, Qibin; Wang, Xingyu; Cichocki, Andrzej

    2014-02-01

    Two main issues for event-related potential (ERP) classification in brain-computer interface (BCI) applications are the curse of dimensionality and the bias-variance tradeoff, which may deteriorate classification performance, especially with the insufficient training samples that result from limited calibration time. This study introduces an aggregation of sparse linear discriminant analyses (ASLDA) to overcome these problems. In the ASLDA, multiple sparse discriminant vectors are learned from differently l1-regularized least-squares regressions by exploiting the equivalence between LDA and least-squares regression, and are subsequently aggregated to form an ensemble classifier, which not only implements automatic feature selection for dimensionality reduction to alleviate the curse of dimensionality, but also decreases the variance to improve generalization capacity for new test samples. Extensive investigation and comparison are carried out among the ASLDA, the ordinary LDA and other competing ERP classification algorithms, based on three different ERP datasets. Experimental results indicate that the ASLDA yields better overall performance for single-trial ERP classification when insufficient training samples are available. This suggests the proposed ASLDA is promising for ERP classification in small-sample-size scenarios, improving the practicability of BCI. PMID:24344691
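
    The LDA/least-squares equivalence means each sparse discriminant vector can be obtained from an l1-penalized regression onto the class labels. The compact sketch below illustrates the aggregation idea with scikit-learn on synthetic data; the penalty grid and the averaging rule are illustrative assumptions, not the paper's exact settings:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 100))                # few trials, many features
        w_true = np.zeros(100); w_true[:5] = 1.0
        y = np.sign(X @ w_true + 0.5 * rng.normal(size=60))   # labels in {-1, +1}

        # Sparse discriminant vectors from differently l1-regularized
        # least-squares problems, aggregated into an ensemble classifier.
        vectors = [Lasso(alpha=a, max_iter=10000).fit(X, y).coef_
                   for a in (0.01, 0.05, 0.1, 0.2)]

        def classify(x_new):
            # Average the individual sparse discriminant scores.
            return np.sign(np.mean([x_new @ w for w in vectors]))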

  17. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic x10^3 speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization.
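
    The forward half of such greedy schemes is easy to state: repeatedly add whichever column most reduces the refit least-squares residual. The naive sketch below recomputes every fit from scratch; the partitioned-matrix-inverse updates that make GSLS fast, and the backward pass, are omitted:

        import numpy as np

        def greedy_sparse_ls(A, b, k):
            # Forward greedy selection for sparse least squares (ORMP-style).
            n = A.shape[1]
            support = []
            for _ in range(k):
                best_j, best_err = None, np.inf
                for j in range(n):
                    if j in support:
                        continue
                    S = support + [j]
                    coef, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
                    err = np.linalg.norm(b - A[:, S] @ coef)
                    if err < best_err:            # column j helps the most so far
                        best_j, best_err = j, err
                support.append(best_j)
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            return support, coef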

  18. Solving very large, sparse linear systems on mesh-connected parallel computers

    NASA Technical Reports Server (NTRS)

    Opsahl, Torstein; Reif, John

    1987-01-01

    The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh-connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer, by slowing down the algorithm by constant factors, is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh-connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard, is given. Also, a detailed discussion of data mappings and performance issues is given.

  19. LANZ - SOFTWARE FOR SOLVING THE LARGE SPARSE SYMMETRIC GENERALIZED EIGENPROBLEM

    NASA Technical Reports Server (NTRS)

    Jones, M. T.

    1994-01-01

    LANZ is a sophisticated algorithm based on the simple Lanczos method for solving the generalized eigenvalue problem. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the basic Lanczos algorithm. The program has been successfully used to solve problems such as: 1) finding the vibration frequencies and mode shape vectors of a structure, and 2) finding the smallest load at which a structure will buckle. Several methods exist for solving the large symmetric generalized eigenvalue problem. LANZ offers an alternative to the popular sub-space iteration approach. The program includes a new algorithm for solving the tri-diagonal matrices that arise when using the Lanczos method. Procedurally, LANZ starts with the user's initial shift, then executes the Lanczos algorithm until: 1) the desired number of eigenvalues is found; 2) no storage space is left; or 3) LANZ determines that a new shift is needed. When a new shift is needed, the program selects it based on accumulated information. At each iteration, LANZ examines the converged and unconverged eigenvalues along with the inertia counts to ensure that no eigenvalues have been missed. LANZ is written in FORTRAN 77 and C language. It was originally designed to run on computers that support vector processing such as the CRAY Y-MP and is therefore optimized for vector machines. Makefiles are included for the Sun3, Sun4, Cray Y-MP, and CONVEX 220. When implemented on a Sun4 computer, LANZ required 670K of main memory. The standard distribution medium for this program is a .25 inch streaming magnetic cartridge tape in Unix tar format. It is also available on a 3.5 inch diskette in UNIX tar format. LANZ was developed in 1989. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. Cray Y-MP is a trademark of Cray Research, Inc. CONVEX 220 is a trademark of Convex Computer Corporation.

  20. Generalized Linear Models in Family Studies

    ERIC Educational Resources Information Center

    Wu, Zheng

    2005-01-01

    Generalized linear models (GLMs), as defined by J. A. Nelder and R. W. M. Wedderburn (1972), unify a class of regression models for categorical, discrete, and continuous response variables. As an extension of classical linear models, GLMs provide a common body of theory and methodology for some seemingly unrelated models and procedures, such as…

  1. Two-Dimensional Pattern-Coupled Sparse Bayesian Learning via Generalized Approximate Message Passing

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Li, Hongbin

    2016-06-01

    We consider the problem of recovering two-dimensional (2-D) block-sparse signals with unknown cluster patterns. Two-dimensional block-sparse patterns arise naturally in many practical applications such as foreground detection and inverse synthetic aperture radar imaging. To exploit the block-sparse structure, we introduce a 2-D pattern-coupled hierarchical Gaussian prior model to characterize the statistical pattern dependencies among neighboring coefficients. Unlike the conventional hierarchical Gaussian prior model, where each coefficient is associated independently with a unique hyperparameter, the pattern-coupled prior for each coefficient involves not only its own hyperparameter but also its immediate neighboring hyperparameters. Thus the sparsity patterns of neighboring coefficients are related to each other and the hierarchical model has the potential to encourage 2-D structured-sparse solutions. An expectation-maximization (EM) strategy is employed to obtain the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. In addition, the generalized approximate message passing (GAMP) algorithm is embedded into the EM framework to efficiently compute an approximation of the posterior distribution of hidden variables, which results in a significant reduction in computational complexity. Numerical results are provided to illustrate the effectiveness of the proposed algorithm.
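
    For orientation, the conventional (uncoupled) sparse Bayesian learning baseline that this model extends can be written as a short EM loop. The sketch below implements only that baseline, not the pattern-coupled prior or the GAMP acceleration; the noise level and iteration count are arbitrary:

        import numpy as np

        def sbl_em(A, b, sigma2=1e-2, n_iter=50):
            # Standard SBL: one hyperparameter gamma_i per coefficient.
            n = A.shape[1]
            gamma = np.ones(n)
            for _ in range(n_iter):
                # E-step: Gaussian posterior of x given the hyperparameters.
                Sigma = np.linalg.inv(np.diag(1.0 / gamma) + A.T @ A / sigma2)
                mu = Sigma @ A.T @ b / sigma2
                # M-step: re-estimate each hyperparameter from posterior moments.
                gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-12)
            return mu                # posterior mean; small gamma_i drives x_i to 0

    The pattern-coupled variant replaces the per-coefficient gamma update with one that mixes each hyperparameter with those of its neighbors, which is what ties the sparsity patterns of adjacent coefficients together.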

  2. Online learning and generalization of parts-based image representations by non-negative sparse autoencoders.

    PubMed

    Lemme, Andre; Reinhart, René Felix; Steil, Jochen Jakob

    2012-09-01

    We present an efficient online learning scheme for non-negative sparse coding in autoencoder neural networks. It comprises a novel synaptic decay rule that ensures non-negative weights in combination with an intrinsic self-adaptation rule that optimizes sparseness of the non-negative encoding. We show that non-negativity constrains the space of solutions such that overfitting is prevented and very similar encodings are found irrespective of the network initialization and size. We benchmark the novel method on real-world datasets of handwritten digits and faces. The autoencoder yields higher sparseness and lower reconstruction errors than related offline algorithms based on matrix factorization. It generalizes to new inputs both accurately and without costly computations, which is fundamentally different from the classical matrix factorization approaches. PMID:22706093

  3. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.

  4. BlockSolve v1.1: Scalable library software for the parallel solution of sparse linear systems

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1993-03-01

    BlockSolve is a software library for solving large, sparse systems of linear equations on massively parallel computers. The matrices must be symmetric, but may have an arbitrary sparsity structure. BlockSolve is a portable package that is compatible with several different message-passing paradigms. This report gives detailed instructions on the use of BlockSolve in applications programs.

  6. Databased comparison of Sparse Bayesian Learning and Multiple Linear Regression for statistical downscaling of low flow indices

    NASA Astrophysics Data System (ADS)

    Joshi, Deepti; St-Hilaire, André; Daigle, Anik; Ouarda, Taha B. M. J.

    2013-04-01

    This study attempts to compare the performance of two statistical downscaling frameworks in downscaling hydrological indices (descriptive statistics) characterizing the low flow regimes of three rivers in Eastern Canada - Moisie, Romaine and Ouelle. The statistical models selected are Relevance Vector Machine (RVM), an implementation of Sparse Bayesian Learning, and the Automated Statistical Downscaling tool (ASD), an implementation of Multiple Linear Regression. Inputs to both frameworks involve climate variables significantly (α = 0.05) correlated with the indices. These variables were processed using Canonical Correlation Analysis and the resulting canonical variates scores were used as input to RVM to estimate the selected low flow indices. In ASD, the significantly correlated climate variables were subjected to backward stepwise predictor selection and the selected predictors were subsequently used to estimate the selected low flow indices using Multiple Linear Regression. With respect to the correlation between climate variables and the selected low flow indices, it was observed that all indices are influenced, primarily, by wind components (Vertical, Zonal and Meridional) and humidity variables (Specific and Relative Humidity). The downscaling performance of the framework involving RVM was found to be better than ASD in terms of Relative Root Mean Square Error, Relative Mean Absolute Bias and Coefficient of Determination. In all cases, the former resulted in less variability of the performance indices between calibration and validation sets, implying better generalization ability than for the latter.

  7. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    NASA Astrophysics Data System (ADS)

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2016-02-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson-Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. Overall, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
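
    A serial sketch of the alternating scheme follows; the history depth, extrapolation period and relaxation weight are illustrative, and this is not the authors' parallel implementation. Weighted Jacobi sweeps run as usual, and every p-th step an Anderson least-squares extrapolation is applied over the stored residual history:

        import numpy as np

        def alternating_anderson_jacobi(A, b, m=3, p=4, omega=1.0,
                                        tol=1e-8, maxit=1000):
            D_inv = 1.0 / np.diag(A)
            x = np.zeros_like(b)
            X, F = [], []                    # histories of iterates and residuals
            for k in range(maxit):
                r = b - A @ x
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                f = D_inv * r                # Jacobi-preconditioned residual
                X.append(x.copy()); F.append(f.copy())
                X, F = X[-m:], F[-m:]        # keep only the last m entries
                if (k + 1) % p == 0 and len(F) > 1:
                    # Anderson step: least-squares mix of the stored residuals.
                    dF = np.diff(np.array(F), axis=0).T
                    dX = np.diff(np.array(X), axis=0).T
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x = x + omega * f - (dX + omega * dF) @ gamma
                else:
                    x = x + omega * f        # plain weighted Jacobi sweep
            return x

        # Example on a diagonally dominant tridiagonal system.
        n = 100
        A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x = alternating_anderson_jacobi(A, np.ones(n))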

  8. Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems

    NASA Astrophysics Data System (ADS)

    Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott

    2016-04-01

    High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving the large sparse systems of linear ill-conditioned equations in complex domains with multiple right hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that previously were considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of the electromagnetic methods in geophysics.

  9. PCG reference manual: A package for the iterative solution of large sparse linear systems on parallel computers. Version 1.0

    SciTech Connect

    Joubert, W.D.; Carey, G.F.; Kohli, H.; Lorber, A.; McLay, R.T.; Shen, Y.; Berner, N.A. |; Kalhan, A. |

    1995-01-01

    PCG (Preconditioned Conjugate Gradient package) is a system for solving linear equations of the form Au = b, for A a given matrix and b and u vectors. PCG, employing various gradient-type iterative methods coupled with preconditioners, is designed for general linear systems, with emphasis on sparse systems such as those arising from discretization of the partial differential equations of physical applications. It can be used to solve linear equations efficiently on parallel computer architectures. Much of the code is reusable across architectures and the package is portable across different systems; the machines that are currently supported are listed. This manual is intended to be the general-purpose reference describing all features of the package accessible to the user; suggestions are also given regarding which methods to use for a given problem.
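
    The pattern such packages implement is standard conjugate gradients applied to a preconditioned system. A small SciPy equivalent follows, using a Jacobi (diagonal) preconditioner on an assumed 1-D Laplacian test matrix; this is not the ITPACK/PCG package itself:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, cg

        n = 1000
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        d = A.diagonal()                 # Jacobi preconditioner: M^{-1} r = r / d
        M = LinearOperator((n, n), matvec=lambda r: r / d)

        u, info = cg(A, b, M=M)          # info == 0 signals convergence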

  10. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in

  11. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    SciTech Connect

    Pinski, Peter; Riplinger, Christoph; Neese, Frank; Valeev, Edward F.

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
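
    For an ordinary matrix, the chain-of-sparse-maps picture reduces to the familiar compressed sparse row format: each row index maps to its set of column indices, which map to values. A tiny illustration with SciPy (the small matrix is arbitrary):

        import numpy as np
        import scipy.sparse as sp

        A = sp.csr_matrix(np.array([[1., 0., 2.],
                                    [0., 0., 3.],
                                    [4., 5., 0.]]))
        print(A.indptr)    # [0 2 3 5]: row i's entries sit in indptr[i]:indptr[i+1]
        print(A.indices)   # [0 2 2 0 1]: the "map" from each row to its columns
        print(A.data)      # [1. 2. 3. 4. 5.]: the stored nonzero values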

  12. Interpretable exemplar-based shape classification using constrained sparse linear models

    NASA Astrophysics Data System (ADS)

    Sigurdsson, Gunnar A.; Yang, Zhen; Tran, Trac D.; Prince, Jerry L.

    2015-03-01

    Many types of diseases manifest themselves as observable changes in the shape of the affected organs. Using shape classification, we can look for signs of disease and discover relationships between diseases. We formulate the problem of shape classification in a holistic framework that utilizes a lossless scalar field representation and a non-parametric classification based on sparse recovery. This framework generalizes over certain classes of unseen shapes while using the full information of the shape, bypassing feature extraction. The output of the method is the class whose combination of exemplars most closely approximates the shape, and furthermore, the algorithm returns the most similar exemplars along with their similarity to the shape, which makes the result simple to interpret. Our results show that the method offers accurate classification between three cerebellar diseases and controls in a database of cerebellar ataxia patients. For reproducible comparison, promising results are presented on publicly available 2D datasets, including the ETH-80 dataset where the method achieves 88.4% classification accuracy.

  13. Identifying keystone species in the human gut microbiome from metagenomic timeseries using sparse linear regression.

    PubMed

    Fisher, Charles K; Mehta, Pankaj

    2014-01-01

    Human-associated microbial communities exert tremendous influence over human health and disease. With modern metagenomic sequencing methods it is now possible to follow the relative abundance of microbes in a community over time. These microbial communities exhibit rich ecological dynamics and an important goal of microbial ecology is to infer the ecological interactions between species directly from sequence data. Any algorithm for inferring ecological interactions must overcome three major obstacles: 1) a correlation between the abundances of two species does not imply that those species are interacting, 2) the sum constraint on the relative abundances obtained from metagenomic studies makes it difficult to infer the parameters in timeseries models, and 3) errors due to experimental uncertainty, or mis-assignment of sequencing reads into operational taxonomic units, bias inferences of species interactions due to a statistical problem called "errors-in-variables". Here we introduce an approach, Learning Interactions from MIcrobial Time Series (LIMITS), that overcomes these obstacles. LIMITS uses sparse linear regression with bootstrap aggregation to infer a discrete-time Lotka-Volterra model for microbial dynamics. We tested LIMITS on synthetic data and showed that it could reliably infer the topology of the inter-species ecological interactions. We then used LIMITS to characterize the species interactions in the gut microbiomes of two individuals and found that the interaction networks varied significantly between individuals. Furthermore, we found that the interaction networks of the two individuals are dominated by distinct "keystone species", Bacteroides fragilis and Bacteroides stercoris, that have a disproportionate influence on the structure of the gut microbiome even though they are only found in moderate abundance. Based on our results, we hypothesize that the abundances of certain keystone species may be responsible for individuality in the human gut microbiome.
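
    The regression at the heart of this approach fits a discrete-time Lotka-Volterra model one species at a time with an l1 penalty. The bare-bones sketch below uses scikit-learn; the penalty value is arbitrary, and the bootstrap aggregation and errors-in-variables corrections that define LIMITS are omitted:

        import numpy as np
        from sklearn.linear_model import Lasso

        def fit_discrete_lv(x, alpha=0.01):
            # x: abundance time series, shape (time, species), strictly positive.
            dlog = np.diff(np.log(x), axis=0)     # per-step log growth per species
            predictors = x[:-1]                   # abundances at the previous step
            n_species = x.shape[1]
            growth = np.zeros(n_species)
            interactions = np.zeros((n_species, n_species))
            for i in range(n_species):
                model = Lasso(alpha=alpha, max_iter=10000).fit(predictors, dlog[:, i])
                growth[i] = model.intercept_
                interactions[i] = model.coef_     # sparse row of the interaction matrix
            return growth, interactions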

  14. Multiconlitron: a general piecewise linear classifier.

    PubMed

    Yujian, Li; Bo, Liu; Xinwu, Yang; Yaozong, Fu; Houjun, Li

    2011-02-01

    Based on the "convexly separable" concept, we present a solid geometric theory and a new general framework to design piecewise linear classifiers for two arbitrarily complicated nonintersecting classes by using a "multiconlitron," which is a union of multiple conlitrons that comprise a set of hyperplanes or linear functions surrounding a convex region for separating two convexly separable datasets. We propose a new iterative algorithm called the cross distance minimization algorithm (CDMA) to compute hard margin non-kernel support vector machines (SVMs) via the nearest point pair between two convex polytopes. Using CDMA, we derive two new algorithms, i.e., the support conlitron algorithm (SCA) and the support multiconlitron algorithm (SMA) to construct support conlitrons and support multiconlitrons, respectively, which are unique and can separate two classes by a maximum margin as in an SVM. Comparative experiments show that SMA can outperform linear SVM on many of the selected databases and provide similar results to radial basis function SVM on some of them, while SCA performs better than linear SVM on three out of four applicable databases. Other experiments show that SMA and SCA may be further improved to draw more potential in the new research direction of piecewise linear learning. PMID:21138800

  15. A new approach for the forward and backward substitutions of parallel solution of sparse linear equations based on dataflow architecture

    SciTech Connect

    Yu, D.C.; Wang, H. )

    1990-05-01

    This paper presents a new parallel computational method to solve the forward and backward substitutions (F/B) of sparse linear equations. The architectural model is a multiprocessor hypercube, based on the Tagged Token Dataflow Architecture (TTDA). Communication overhead is considered. The differences in operating time-units among subtraction, multiplication, and division are modelled. A processor scheduling algorithm is also introduced. In the algorithm, a highly sparse operational sequence matrix C is developed. From the C matrix, the minimal completion time, the critical path, and the scheduling of the processors for the proposed parallel F/B can be determined. A detailed explanation of the implementation of the TTDA architecture in the proposed method is provided. A number of power systems have been examined and a number of scenarios have been simulated to test the performance of the proposed method. The results are presented and discussed in this paper.
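
    For reference, the serial computation being parallelized is plain forward and backward substitution. Each entry below depends on previously computed ones, and it is exactly this dependency graph that the dataflow scheduling exploits (dense NumPy version for clarity):

        import numpy as np

        def forward_backward(L, U, b):
            # Solve L y = b (forward), then U x = y (backward).
            n = len(b)
            y = np.zeros(n)
            for i in range(n):
                y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
            x = np.zeros(n)
            for i in range(n - 1, -1, -1):
                x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
            return x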

  16. Imaging method for downward-looking sparse linear array three-dimensional synthetic aperture radar based on reweighted atomic norm

    NASA Astrophysics Data System (ADS)

    Bao, Qian; Han, Kuoye; Lin, Yun; Zhang, Bingchen; Liu, Jianguo; Hong, Wen

    2016-01-01

    We propose an imaging algorithm for downward-looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) in the circumstance of a cross-track sparse and nonuniform array configuration. Considering the off-grid effect and the resolution improvement, the algorithm combines a pseudo-polar formatting algorithm, reweighted atomic norm minimization (RANM), and a parametric relaxation-based cyclic approach (RELAX) to improve the imaging performance with a reduced number of array antennas. RANM is employed in the cross-track imaging after pseudo-polar formatting of the DLSLA 3-D SAR echo signal; the reconstructed results are then refined by RELAX. By taking advantage of the reweighted scheme, RANM can improve the resolution of the atomic norm minimization, and outperforms discretized compressive sensing schemes that suffer from the off-grid effect. The simulated and real data experiments of DLSLA 3-D SAR verify the performance of the proposed algorithm.

  17. Identification of general linear mechanical systems

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.; Longman, R. W.; Juang, J. N.

    1983-01-01

    Previous work in identification theory has been concerned with the general first order time derivative form. Linear mechanical systems, a large and important class, naturally have a second order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori identifiability and a posteriori identifiability are considered, i.e., identifiability being ensured prior to obtaining data, and identifiability being assured with a given data set.

  18. SPReM: Sparse Projection Regression Model For High-dimensional Linear Regression

    PubMed Central

    Sun, Qiang; Zhu, Hongtu; Liu, Yufeng; Ibrahim, Joseph G.

    2014-01-01

    The aim of this paper is to develop a sparse projection regression modeling (SPReM) framework to perform multivariate regression modeling with a large number of responses and a multivariate covariate of interest. We propose two novel heritability ratios to simultaneously perform dimension reduction, response selection, estimation, and testing, while explicitly accounting for correlations among multivariate responses. Our SPReM is devised to specifically address the low statistical power issue of many standard statistical approaches, such as the Hotelling's T2 test statistic or a mass univariate analysis, for high-dimensional data. We formulate the estimation problem of SPReM as a novel sparse unit rank projection (SURP) problem and propose a fast optimization algorithm for SURP. Furthermore, we extend SURP to the sparse multi-rank projection (SMURP) by adopting a sequential SURP approximation. Theoretically, we have systematically investigated the convergence properties of SURP and the convergence rate of SURP estimates. Our simulation results and real data analysis have shown that SPReM outperforms other state-of-the-art methods. PMID:26527844

  19. Preconditioned conjugate gradient algorithms and software for solving large sparse linear systems

    SciTech Connect

    Young, D.M.; Jea, K.C.; Mai, Tsun-Zee

    1987-03-01

    The classical form of the conjugate gradient method (CG method), developed by Hestenes and Stiefel, for solving the linear system Au = b is applicable when the coefficient matrix A is symmetric and positive definite (SPD). In this paper we consider various alternative forms of the CG method as well as generalizations to cases where A is not necessarily SPD. This analysis includes the "preconditioned conjugate gradient method" which is equivalent to conjugate gradient acceleration of a basic iterative method corresponding to a preconditioned system. Both the symmetrizable case and the nonsymmetrizable case are considered. For the nonsymmetrizable case there are very few useful theoretical results available. A package of programs, known as ITPACK, has been developed as a tool for carrying out experimental studies on various algorithms. Preliminary conclusions based on experimental results are given. 42 refs.

  20. A research of 3D gravity inversion based on the recovery of sparse underdetermined linear equations

    NASA Astrophysics Data System (ADS)

    Zhaohai, M.

    2014-12-01

    Because of the properties of gravity data, the problem of multiple solutions is difficult to avoid. There are two main types of 3D gravity inversion methods. One is based on improving the stability of the sensitivity matrix, addressing the multiple solutions and instability of 3D gravity inversion. The other incorporates a weight function into the 3D gravity inversion iteration; through repeated iteration, it updates the density values and the weight function so as to resolve the multiple solutions and instability of the 3D gravity data inversion. Thanks to the sparse nature of the solutions of 3D gravity data inversions, we can transform the problem into a sparse equation and, by solving that equation, obtain good 3D gravity inversion results. The main principle is based on the zero norm of the sparse solution of the equation, which counts the nonzero entries of the sparse solution. The method adopted in this article follows the same principle as the zero norm but approaches it from the opposite direction, seeking the zero-valued entries. Through a Gaussian fitting form of the zero norm, we can find the solution by using the regularization principle. Moreover, this method has been proved mathematically to have a certain resistance to random noise, and it is more suitable than the zero norm for the solution of geophysical data. The 3D gravity inversion adopted in this article can identify the density distribution characteristics of an anomalous body well, and it can also recognize the spatial position of the anomalous distribution very well. We take advantage of upper and lower density limits in a penalty function to keep the residual density of each rectangular cell within a reasonable range. Finally, this 3D gravity inversion is applied to a variety of combination model tests, such as a single straight three-dimensional model, the adjacent straight three-dimensional model and Y three

  1. Real-time cardiac surface tracking from sparse samples using subspace clustering and maximum-likelihood linear regressors

    NASA Astrophysics Data System (ADS)

    Singh, Vimal; Tewfik, Ahmed H.

    2011-03-01

    Minimally invasive cardiac surgeries, such as catheter-based radio frequency ablation of atrial fibrillation, require high-precision tracking of inner cardiac surfaces in order to ascertain constant electrode-surface contact. The majority of cardiac motion tracking systems are either limited to the outer surface or track limited slices/sectors of the inner surface in echocardiography data, which is unrealizable in MIS due to the varying resolution of ultrasound with depth and the speckle effect. In this paper, a system for high-accuracy real-time 3D tracking of both cardiac surfaces using sparse samples of the outer surface only is presented. This paper presents a novel approach to model inner cardiac surface deformations as simple functions of outer surface deformations in the spherical harmonic domain using multiple maximum-likelihood linear regressors. The tracking system uses subspace clustering to identify potential deformation spaces for outer surfaces and trains ML linear regressors using a pre-operative MRI/CT scan based training set. During tracking, sparse samples from the outer surface are used to identify the active outer surface deformation space and reconstruct the outer surface in real time under a least squares formulation. The inner surface is reconstructed from the tracked outer surface with the trained ML linear regressors. High-precision tracking and robustness of the proposed system are demonstrated through results obtained on a real patient dataset with tracking root mean square error <= (0.23 +/- 0.04)mm and <= (0.30 +/- 0.07)mm for the outer and inner surfaces, respectively.

  2. Sparse-view X-ray CT Reconstruction via Total Generalized Variation Regularization

    PubMed Central

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-01-01

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise constant assumption of TV minimization often leads to the appearance of noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme to retain the image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as “PWLS-TGV” for simplicity. Specifically, TGV regularization utilizes higher order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which fully contribute to improving the image quality with sparse-view projection measurement. Subsequently, an alternating optimization algorithm was adopted to minimize the associative objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present PWLS-TGV method can achieve images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties. PMID:24842150

  3. Sparse-view x-ray CT reconstruction via total generalized variation regularization

    NASA Astrophysics Data System (ADS)

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-06-01

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise constant assumption of TV minimization often leads to the appearance of noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme to retain the image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as ‘PWLS-TGV’ for simplicity. Specifically, TGV regularization utilizes higher order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which fully contribute to improving the image quality with sparse-view projection measurement. Subsequently, an alternating optimization algorithm was adopted to minimize the associative objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present PWLS-TGV method can achieve images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties.

  4. Sparse-view x-ray CT reconstruction via total generalized variation regularization.

    PubMed

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-06-21

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise constant assumption of TV minimization often leads to the appearance of noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme to retain the image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as 'PWLS-TGV' for simplicity. Specifically, TGV regularization utilizes higher order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which fully contribute to improving the image quality with sparse-view projection measurement. Subsequently, an alternating optimization algorithm was adopted to minimize the associative objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present PWLS-TGV method can achieve images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties. PMID:24842150

  5. On the order of general linear methods.

    SciTech Connect

    Constantinescu, E. M.; Mathematics and Computer Science

    2009-09-01

    General linear (GL) methods are numerical algorithms used to solve ODEs. The standard order conditions analysis involves the GL matrix itself and a starting procedure; however, a finishing method (F) is required to extract the actual ODE solution. The standard order analysis and stability are sufficient for the convergence of any GL method. Nonetheless, using a simple GL scheme, we show that the order definition may be too restrictive. Specifically, the order for GL methods with low order intermediate components may be underestimated. In this note we explore the order conditions for GL schemes and propose a new definition for characterizing the order of GL methods, which is focused on the final result--the outcome of F--and can provide more effective algebraic order conditions.

  6. A new method for spatial resolution enhancement of hyperspectral images using sparse coding and linear spectral unmixing

    NASA Astrophysics Data System (ADS)

    Hashemi, Nezhad Z.; Karami, A.

    2015-10-01

    Hyperspectral images (HSI) have high spectral and low spatial resolutions. However, multispectral images (MSI) usually have low spectral and high spatial resolutions. In various applications HSI with high spectral and spatial resolutions are required. In this paper, a new method for spatial resolution enhancement of HSI using high resolution MSI based on sparse coding and linear spectral unmixing (SCLSU) is introduced. In the proposed method (SCLSU), high spectral resolution features of HSI and high spatial resolution features of MSI are fused. In this case, the sparse representation of some high resolution MSI and linear spectral unmixing (LSU) model of HSI and MSI is simultaneously used in order to construct high resolution HSI (HRHSI). The fusion process of HSI and MSI is formulated as an ill-posed inverse problem. It is solved by the Split Augmented Lagrangian Shrinkage Algorithm (SALSA) and an orthogonal matching pursuit (OMP) algorithm. Finally, the proposed algorithm is applied to the Hyperion and ALI datasets. Compared with the other state-of-the-art algorithms such as Coupled Nonnegative Matrix Factorization (CNMF) and local spectral unmixing, the SCLSU has significantly increased the spatial resolution and in addition the spectral content of HSI is well maintained.

  7. Approximate Orthogonal Sparse Embedding for Dimensionality Reduction.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Yang, Jian; Zhang, David

    2016-04-01

    Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing the sparsity or L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between the ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than the existing subspace learning algorithm, particularly in the cases of small sample sizes. PMID:25955995

  8. Sparse generalized pencil of function and its application to system identification and structural health monitoring

    NASA Astrophysics Data System (ADS)

    Mohammadi-Ghazi, Reza; Büyüköztürk, Oral

    2016-04-01

    Singularity expansion method (SEM) is a system identification approach with applications in solving inverse scattering problems, electromagnetic interaction problems, remote sensing, and radar. In this approach, the response of a system is represented in terms of its complex poles; therefore, this method not only extracts the fundamental frequencies of the system from the signal, but also provides sufficient information about the system's damping if its transient response is analyzed. There are various techniques in SEM, among which the generalized pencil-of-function (GPOF) is the most computationally stable and the least sensitive to noise. However, SEM methods, including GPOF, suffer from the imposition of spurious poles on the expansion of signals due to the lack of a priori information about the number of true poles. In this study we address this problem by proposing the sparse generalized pencil-of-function (SGPOF). The proposed method excludes the spurious poles through sparsity-based regularization with the ℓ1-norm. This study is backed by numerical examples as well as an application example which employs the proposed technique for structural health monitoring (SHM) and compares the results with other signal processing methods.
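
    A bare-bones matrix-pencil (GPOF-style) pole extraction is sketched below; it keeps the classical SVD truncation but omits the sparsity-based spurious-pole rejection that the paper adds. The test signal, sampling step and model order are made up:

        import numpy as np

        def gpof_poles(y, n_poles, pencil=None):
            # Build shifted Hankel matrices from samples y[k] ~ sum_i a_i z_i^k,
            # rank-truncate via the SVD, and read the poles off the reduced pencil.
            N = len(y)
            L = pencil or N // 2
            Y = np.array([y[i:i + L + 1] for i in range(N - L)])
            Y1, Y2 = Y[:, :-1], Y[:, 1:]          # unshifted / shifted pencil
            U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
            U, s, Vh = U[:, :n_poles], s[:n_poles], Vh[:n_poles]
            A = np.diag(1.0 / s) @ U.conj().T @ Y2 @ Vh.conj().T
            return np.linalg.eigvals(A)           # estimates of the poles z_i

        # Two damped complex exponentials sampled at dt = 0.01 s.
        dt = 0.01
        t = dt * np.arange(100)
        y = np.exp((-0.5 + 2j * np.pi * 5) * t) + 0.5 * np.exp((-1.0 + 2j * np.pi * 12) * t)
        s_hat = np.log(gpof_poles(y, n_poles=2)) / dt   # continuous-time poles

    Overestimating n_poles is exactly what produces the spurious poles discussed above; SGPOF's ℓ1 regularization prunes them instead of requiring the true model order in advance.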

  9. Efficient computer architecture for the realization of realtime linear systems with intelligent processing of large sparse matrices

    SciTech Connect

    Chae, S.H.

    1988-01-01

    This dissertation describes an intelligent rule-based iterative parallel algorithm for solving randomly distributed large sparse linear systems of equations, and also the efficient parallel processing computer architecture for the implementation of the algorithm. Implemented with the Jacobi iterative method, the intelligent rule-based algorithm reduces the parallel execution time by reducing the individual inner product operation time. A static dataflow architecture is proposed for implementing the intelligent rule-based iterative parallel algorithm. The dataflow computer architecture has the capability to support the parallelism exploited in the algorithm and to execute the algorithm asynchronously. The proposed computer architecture consists of a main processor, several control processors, scalar slave processors, and pipelined slave processors. Several control processors share with the main processor the heavy burden of allocation of operation packets and of synchronization for parallel processing.

  10. Off-grid direction of arrival estimation based on joint spatial sparsity for distributed sparse linear arrays.

    PubMed

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-01-01

    In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or only on a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create a random linear map between the signals observed by these two arrays. The signal ensembles including the common/innovation sources for the different SLAs are abstracted as a joint spatial sparsity model, and we minimize the concatenated atomic norm via semidefinite programming to solve the joint DOA estimation problem. Joint processing of the signals observed by all the SLAs exploits the redundancy caused by the common sources and decreases the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150

  11. Off-Grid Direction of Arrival Estimation Based on Joint Spatial Sparsity for Distributed Sparse Linear Arrays

    PubMed Central

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-01-01

    In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or only on a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create a random linear map between the signals observed by these two arrays. The signal ensembles including the common/innovation sources for the different SLAs are abstracted as a joint spatial sparsity model, and we minimize the concatenated atomic norm via semidefinite programming to solve the joint DOA estimation problem. Joint processing of the signals observed by all the SLAs exploits the redundancy caused by the common sources and decreases the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150

  12. A Performance Comparison of the Parallel Preconditioners for Iterative Methods for Large Sparse Linear Systems Arising from Partial Differential Equations on Structured Grids

    NASA Astrophysics Data System (ADS)

    Ma, Sangback

    In this paper we compare various parallel preconditioners such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse) and pARMS (Parallel Algebraic Recursive Multilevel Solver) for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism in the natural order, but the lengths of the wave-fronts are often nonuniform. ILU(0) in the Multi-color ordering is a simple way of achieving parallelism of the order N, where N is the order of the matrix, but its convergence rate often deteriorates as compared to that of natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communication. SPAI computes the sparse approximate inverse directly by a least squares method. Finally, ARMS is a preconditioner recursively exploiting the concept of independent sets, and pARMS is the parallel version of ARMS. Experiments were conducted for the Finite Difference and Finite Element discretizations of five two-dimensional PDEs with large mesh sizes up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We have used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communication was done using MPI (Message Passing Interface) primitives.
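
    A small serial analogue of one of the combinations compared above, GMRES(m) with an incomplete-LU preconditioner, can be assembled from scipy; note that scipy's spilu is a threshold ILU rather than ILU(0), and the 2-D Poisson test matrix is our stand-in problem.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 100                                           # 2-D Poisson, 5-point stencil
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      I = sp.identity(n)
      A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
      b = np.ones(n * n)

      ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
      M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner as an operator
      x, info = spla.gmres(A, b, M=M, restart=30)
      print("converged" if info == 0 else f"info = {info}")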

  13. An automatic multigrid method for the solution of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDEs is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found to perform better than known strategies.
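
    The smoothing/coarse-correction structure underlying any multigrid cycle can be illustrated with a bare-bones geometric two-grid solver for the 1-D Poisson system; the paper's method builds its hierarchy from the algebraic system alone, which this sketch does not attempt.

      import numpy as np

      def interp(nc):
          # Linear interpolation from nc coarse points to 2*nc + 1 fine interior points.
          P = np.zeros((2 * nc + 1, nc))
          for j in range(nc):
              P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
          return P

      def two_grid(A, b, x, n_smooth=3, omega=2.0 / 3.0):
          D = np.diag(A)
          for _ in range(n_smooth):                   # weighted-Jacobi pre-smoothing
              x = x + omega * (b - A @ x) / D
          P = interp((len(b) - 1) // 2)
          R = 0.5 * P.T                               # full-weighting restriction
          Ac = R @ A @ P                              # Galerkin coarse-grid operator
          x = x + P @ np.linalg.solve(Ac, R @ (b - A @ x))   # coarse correction
          for _ in range(n_smooth):                   # post-smoothing
              x = x + omega * (b - A @ x) / D
          return x

      n = 127                                         # so the coarse grid has 63 points
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = np.zeros(n)
      for _ in range(10):
          x = two_grid(A, b, x)
      print(np.linalg.norm(b - A @ x))                # residual drops by a fixed factor per cycle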

  14. Complexity transitions in global algorithms for sparse linear systems over finite fields

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.

    2002-09-01

    We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations over a finite Galois field modulo q. Using tools from statistical mechanics we are able to identify phase transitions in the structure of the solution space and to connect them to the changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in the memory and CPU requirements of the algorithm. In turn, this causes saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
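
    The global algorithm in question, Gaussian elimination over GF(q), fits in a few lines for prime q (dense version; the sparse case differs only in bookkeeping):

      import numpy as np

      def solve_mod_q(A, b, q):
          # Gauss-Jordan elimination mod a prime q; assumes the system is nonsingular mod q.
          A = np.array(A, dtype=np.int64) % q
          b = np.array(b, dtype=np.int64) % q
          n = len(b)
          for col in range(n):
              pivot = next(r for r in range(col, n) if A[r, col] % q)   # pivot search
              A[[col, pivot]], b[[col, pivot]] = A[[pivot, col]], b[[pivot, col]]
              inv = pow(int(A[col, col]), q - 2, q)                     # Fermat inverse (q prime)
              A[col], b[col] = (A[col] * inv) % q, (b[col] * inv) % q
              for r in range(n):
                  if r != col and A[r, col]:
                      f = A[r, col]
                      A[r], b[r] = (A[r] - f * A[col]) % q, (b[r] - f * b[col]) % q
          return b        # A has been reduced to the identity, so b is the solution

      print(solve_mod_q([[2, 1], [1, 1]], [1, 2], q=5))   # -> [4 3]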

  15. Collective synchronization as a method of learning and generalization from sparse data

    NASA Astrophysics Data System (ADS)

    Miyano, Takaya; Tsutsui, Takako

    2008-02-01

    We propose a method for extracting general features from multivariate data using a network of phase oscillators subject to an analogue of the Kuramoto model for collective synchronization. In this method, the natural frequencies of the oscillators are extended to vector quantities to which multivariate data are assigned. The common frequency vectors of the groups of partially synchronized oscillators are interpreted to be the template vectors representing the general features of the data set. We show that the proposed method becomes equivalent to the self-organizing map algorithm devised by Kohonen when the governing equations are linearized about their solutions of partial synchronization. As a case study to test the utility of our method, we applied it to care-needs-certification data in the Japanese public long-term care insurance program, and found major general patterns in the health status of the elderly needing nursing care.
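
    A minimal rendering of the idea, with vector natural frequencies taken from the data rows and all-to-all Kuramoto coupling applied componentwise; the grouping/readout step at the end is only indicated, and all names are ours.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 4))                 # 50 samples -> 50 frequency vectors
      K, dt, steps = 2.0, 0.05, 2000
      theta = rng.uniform(0, 2 * np.pi, size=(50, 4))

      for _ in range(steps):
          # All-to-all Kuramoto coupling, applied componentwise to the phase vectors.
          coupling = np.sin(theta[None, :, :] - theta[:, None, :]).mean(axis=1)
          theta += dt * (X + K * coupling)

      omega_eff = X + K * np.sin(theta[None, :, :] - theta[:, None, :]).mean(axis=1)
      # Rows of omega_eff that nearly coincide mark a partially synchronized group;
      # the shared vector is read off as that group's template.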

  16. On generalized Hamming weights for Galois ring linear codes

    SciTech Connect

    Ashikhmin, A.

    1997-08-01

    The definition of generalized Hamming weights (GHW) for linear codes over Galois rings is discussed. The properties of GHW for Galois ring linear codes are stated. Upper and existence bounds for GHW of Z{sub 4}-linear codes and a lower bound for GHW of the Kerdock code over Z{sub 4} are derived. GHW of some Z{sub 4}-linear codes are determined.

  17. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes. PMID:20866130

  18. Generalized Multicarrier CDMA: Unification and Linear Equalization

    NASA Astrophysics Data System (ADS)

    Giannakis, Georgios B.; Anghel, Paul A.; Wang, Zhengdao

    2005-12-01

    Relying on block-symbol spreading and judicious design of user codes, this paper builds on the generalized multicarrier (GMC) quasisynchronous CDMA system that is capable of multiuser interference (MUI) elimination and intersymbol interference (ISI) suppression with guaranteed symbol recovery, regardless of the wireless frequency-selective channels. GMC-CDMA affords an all-digital unifying framework, which encompasses single-carrier and several multicarrier (MC) CDMA systems. Besides the unifying framework, it is shown that GMC-CDMA offers flexibility both in full load (maximum number of users allowed by the available bandwidth) and in reduced load settings. A novel blind channel estimation algorithm is also derived. Analytical evaluation and simulations illustrate the superior error performance and flexibility of uncoded GMC-CDMA over competing MC-CDMA alternatives especially in the presence of uplink multipath channels.

  19. Centering, Scale Indeterminacy, and Differential Item Functioning Detection in Hierarchical Generalized Linear and Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Cheong, Yuk Fai; Kamata, Akihito

    2013-01-01

    In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…

  20. Analysis, tuning and comparison of two general sparse solvers for distributed memory computers

    SciTech Connect

    Amestoy, P.R.; Duff, I.S.; L'Excellent, J.-Y.; Li, X.S.

    2000-06-30

    We describe the work performed in the context of a Franco-Berkeley funded project between NERSC-LBNL located in Berkeley (USA) and CERFACS-ENSEEIHT located in Toulouse (France). We discuss both the tuning and performance analysis of two distributed memory sparse solvers (superlu from Berkeley and mumps from Toulouse) on the 512 processor Cray T3E from NERSC (Lawrence Berkeley National Laboratory). This project gave us the opportunity to improve the algorithms and add new features to the codes. We then quite extensively analyze and compare the two approaches on a set of large problems from real applications. We further explain the main differences in the behavior of the approaches on artificial regular grid problems. As a conclusion to this activity report, we mention a set of parallel sparse solvers on which this type of study should be extended.
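
    The sequential kernel of one of the two solvers, SuperLU, is exposed through scipy, so a minimal (non-distributed) use looks as follows; the paper itself benchmarks the distributed-memory versions of SuperLU and MUMPS.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 2000
      A = (sp.random(n, n, density=1e-3, format="csc", random_state=0)
           + 4 * sp.identity(n, format="csc")).tocsc()
      b = np.ones(n)

      lu = spla.splu(A)                   # sparse LU factorization (SuperLU under the hood)
      x = lu.solve(b)
      print(np.linalg.norm(A @ x - b))    # residual check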

  1. Linear stability of general magnetically insulated electron flow

    NASA Astrophysics Data System (ADS)

    Swegle, J. A.; Mendel, C. W., Jr.; Seidel, D. B.; Quintenz, J. P.

    1984-03-01

    A linear stability theory for magnetically insulated systems was formulated by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. It is found that, in the case of electron trajectories which are nearly laminar, with only small transverse motion, several suggestive simplifications occur in the eigenvalue equations.

  2. Linear stability of general magnetically insulated electron flow

    SciTech Connect

    Swegle, J.A.; Mendel, C.W. Jr.; Seidel, D.B.; Quintenz, J.P.

    1984-01-01

    We have formulated a linear stability theory for magnetically insulated systems by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. In the physically interesting case of electron trajectories which are nearly laminar, with only small transverse motion, we have found that several suggestive simplifications occur in the eigenvalue equations.

  3. A Bayesian approach for inducing sparsity in generalized linear models with multi-category response

    PubMed Central

    2015-01-01

    Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other 3 methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345

  4. Linear equations in general purpose codes for stiff ODEs

    SciTech Connect

    Shampine, L. F.

    1980-02-01

    It is noted that it is possible to improve significantly the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations analytical evaluation of the Jacobian is a lot cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian in linear problems. (RWR)
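
    Modern solver interfaces make the suggestion easy to follow: for a linear system y' = Jy the exact constant Jacobian can be handed to a stiff integrator instead of being approximated by differencing. A scipy sketch with an assumed toy system:

      import numpy as np
      from scipy.integrate import solve_ivp

      J = np.array([[-1000.0, 0.0],
                    [1.0, -1.0]])               # stiff: widely separated eigenvalues

      sol = solve_ivp(lambda t, y: J @ y, (0.0, 10.0), [1.0, 0.0],
                      method="BDF", jac=lambda t, y: J)   # analytic Jacobian, no differencing
      print(sol.y[:, -1])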

  5. Beam envelope calculations in general linear coupled lattices

    NASA Astrophysics Data System (ADS)

    Chung, Moses; Qin, Hong; Groening, Lars; Davidson, Ronald C.; Xiao, Chen

    2015-01-01

    The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.

  6. Beam envelope calculations in general linear coupled lattices

    SciTech Connect

    Chung, Moses; Qin, Hong; Groening, Lars; Xiao, Chen; Davidson, Ronald C.

    2015-01-15

    The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.

  7. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  8. A General Linear Model Approach to Adjusting the Cumulative GPA.

    ERIC Educational Resources Information Center

    Young, John W.

    A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…

  9. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    Epilepsy is the most common and frequently occurring neurological disorder, and the main method useful for its diagnosis is electroencephalogram (EEG) signal analysis. Due to the length of EEG recordings, EEG signal analysis is quite time-consuming when processed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic encephalographic signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to classify epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.

  10. Sparse maps--A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory.

    PubMed

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F; Neese, Frank

    2016-01-14

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  11. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous

  12. The generalized sidelobe canceller based on quaternion widely linear processing.

    PubMed

    Tao, Jian-wu; Chang, Wen-xiu

    2014-01-01

    We investigate the problem of quaternion beamforming based on widely linear processing. First, a quaternion model of linear symmetric array with two-component electromagnetic (EM) vector sensors is presented. Based on array's quaternion model, we propose the general expression of a quaternion semiwidely linear (QSWL) beamformer. Unlike the complex widely linear beamformer, the QSWL beamformer is based on the simultaneous operation on the quaternion vector, which is composed of two jointly proper complex vectors, and its involution counterpart. Second, we propose a useful implementation of QSWL beamformer, that is, QSWL generalized sidelobe canceller (GSC), and derive the simple expressions of the weight vectors. The QSWL GSC consists of two-stage beamformers. By designing the weight vectors of two-stage beamformers, the interference is completely canceled in the output of QSWL GSC and the desired signal is not distorted. We derive the array's gain expression and analyze the performance of the QSWL GSC in the presence of one type of interference. The advantage of QSWL GSC is that the main beam can always point to the desired signal's direction and the robustness to DOA mismatch is improved. Finally, simulations are used to verify the performance of the proposed QSWL GSC. PMID:24955425

  13. Capsule deformation and orientation in general linear flows

    NASA Astrophysics Data System (ADS)

    Szatmary, Alex; Eggleton, Charles

    2010-11-01

    We considered the response of spherical and non-spherical capsules to general flows. (A capsule is an elastic membrane enclosing a fluid, immersed in fluid.) First, we established that nonspherical capsules align with the imposed irrotational linear flow; this means that initial orientation does not affect steady-state capsule deformation, so this steady-state deformation can be determined entirely by the capillary number and the type of flow. The type of flow is characterized by r: r=0 for axisymmetric flows, and r=1 for planar flows; intermediate values of r are combinations of planar and axisymmetric flow. By varying the capillary number and r, all irrotational linear Stokes flows can be generated. For the same capillary number, planar flows lead to more deformation than uniaxial or biaxial extensional flows. Deformation varies monotonically with r, so one can determine bounds on capsule deformation in general flow by only looking at uniaxial, biaxial, and planar flow. These results are applicable to spheres in all linear flows and to ellipsoids in irrotational linear flow.

  14. The Generalized Sidelobe Canceller Based on Quaternion Widely Linear Processing

    PubMed Central

    Tao, Jian-wu; Chang, Wen-xiu

    2014-01-01

    We investigate the problem of quaternion beamforming based on widely linear processing. First, a quaternion model of linear symmetric array with two-component electromagnetic (EM) vector sensors is presented. Based on array's quaternion model, we propose the general expression of a quaternion semiwidely linear (QSWL) beamformer. Unlike the complex widely linear beamformer, the QSWL beamformer is based on the simultaneous operation on the quaternion vector, which is composed of two jointly proper complex vectors, and its involution counterpart. Second, we propose a useful implementation of QSWL beamformer, that is, QSWL generalized sidelobe canceller (GSC), and derive the simple expressions of the weight vectors. The QSWL GSC consists of two-stage beamformers. By designing the weight vectors of two-stage beamformers, the interference is completely canceled in the output of QSWL GSC and the desired signal is not distorted. We derive the array's gain expression and analyze the performance of the QSWL GSC in the presence of one type of interference. The advantage of QSWL GSC is that the main beam can always point to the desired signal's direction and the robustness to DOA mismatch is improved. Finally, simulations are used to verify the performance of the proposed QSWL GSC. PMID:24955425

  15. Generalization of continuous-variable quantum cloning with linear optics

    NASA Astrophysics Data System (ADS)

    Zhai, Zehui; Guo, Juan; Gao, Jiangrui

    2006-05-01

    We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.

  16. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper the generalized linear model (GLM) and credibility theory, which are frequently used in nonlife insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
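
    A minimal claim-frequency GLM of the kind combined with credibility theory above, on synthetic data; the credibility-weighting step itself is not shown, and the risk-class structure is our assumption.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      n = 5000
      risk_class = rng.integers(0, 3, size=n)       # hypothetical risk classes 0, 1, 2
      X = sm.add_constant(np.column_stack([(risk_class == 1),
                                           (risk_class == 2)]).astype(float))
      claims = rng.poisson(np.exp(-2.0 + 0.4 * (risk_class == 1) + 0.8 * (risk_class == 2)))

      glm = sm.GLM(claims, X, family=sm.families.Poisson()).fit()
      print(glm.params)      # class-level frequency effects would feed the credibility step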

  17. Clutter locus equation for more general linear array orientation

    NASA Astrophysics Data System (ADS)

    Bickel, Douglas L.

    2011-06-01

    The clutter locus is an important concept in space-time adaptive processing (STAP) for ground moving target indicator (GMTI) radar systems. The clutter locus defines the expected ground clutter location in the angle-Doppler domain. Typically in literature, the clutter locus is presented as a line, or even a set of ellipsoids, under certain assumptions about the geometry of the array. Most often, the array is assumed to be in the horizontal plane containing the velocity vector. This paper will give a more general 3-dimensional interpretation of the clutter locus for a general linear array orientation.

  18. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success

  19. A general linear model for MEG beamformer imaging.

    PubMed

    Brookes, Matthew J; Gibson, Andrew M; Hall, Stephen D; Furlong, Paul L; Barnes, Gareth R; Hillebrand, Arjan; Singh, Krish D; Holliday, Ian E; Francis, Sue T; Morris, Peter G

    2004-11-01

    A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to locate accurately the increasing and decreasing gamma activities to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). PMID:15528094
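
    The per-location pipeline reduces to: band-limited source time course, Hilbert envelope, GLM against a task regressor, T statistic. A sketch with a synthetic signal standing in for beamformer output:

      import numpy as np
      from scipy.signal import hilbert

      fs, T = 250, 60
      t = np.arange(T * fs) / fs
      task = (t % 20) < 10                              # on/off task blocks
      rng = np.random.default_rng(1)
      src = np.sin(2 * np.pi * 20 * t) * (0.5 + task) + 0.3 * rng.normal(size=t.size)

      env = np.abs(hilbert(src))                        # oscillatory envelope at 20 Hz
      X = np.column_stack([task.astype(float), np.ones_like(t)])   # design: task + mean
      beta, res, *_ = np.linalg.lstsq(X, env, rcond=None)
      dof = len(env) - X.shape[1]
      var_beta = (res[0] / dof) * np.linalg.inv(X.T @ X)[0, 0]
      t_stat = beta[0] / np.sqrt(var_beta)              # one T value per source-space point
      print(t_stat)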

  20. Linear spin-2 fields in most general backgrounds

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael

    2016-04-01

    We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.

  1. Obtaining General Relativity's N-body non-linear Lagrangian from iterative, linear algebraic scaling equations

    NASA Astrophysics Data System (ADS)

    Nordtvedt, K.

    2015-11-01

    A local system of bodies in General Relativity whose exterior metric field asymptotically approaches the Minkowski metric effaces any effects of the matter distribution exterior to its Minkowski boundary condition. To enforce to all orders this property of gravity which appears to hold in nature, a method using linear algebraic scaling equations is developed which generates by an iterative process an N-body Lagrangian expansion for gravity's motion-independent potentials which fulfills exterior effacement along with needed metric potential expansions. Then additional properties of gravity - interior effacement and Lorentz time dilation and spatial contraction - produce additional iterative, linear algebraic equations for obtaining the full non-linear and motion-dependent N-body gravity Lagrangian potentials as well.

  2. Comparative Study of Algorithms for Automated Generalization of Linear Objects

    NASA Astrophysics Data System (ADS)

    Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.

    2014-11-01

    Automated generalization, rooted in conventional cartography, has become an increasing concern in both the geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for human beings to observe the Earth and the processes in it without decreasing its scale. To get optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules remain under consideration and a topic of much research to this day. Reducing map-generation time and adding objectivity is possible by developing automated map generalization algorithms (McMaster and Shea, 1988). Modification of the scale is traditionally a manual process, which requires the knowledge of an expert cartographer and depends on the experience of the user, making the process subjective, as different users may generate different maps from the same requirements. However, automating generalization based on cartographic rules and constraints can give consistent results. Moreover, an automated system for map generation is a demand of this rapidly changing world. The research we have conducted considers only generalization of roads, as they are an indispensable part of a map. Dehradun city, in the Uttarakhand state of India, was selected as the study area. The study carried out a comparative review of the currently available generalization software, operations and algorithms, and considers the advantages and drawbacks of the existing software used worldwide. The research concludes with the development of a road network generalization tool and a final generalized road map of the study area, exploring the use of the open-source Python programming language and comparing different road network generalization algorithms. Thus, the paper discusses alternative solutions for automated generalization of linear objects using GIS technologies.
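
    As a concrete point of reference for the algorithms compared in such studies, the classic Ramer-Douglas-Peucker line-simplification operator is sketched below; the paper does not necessarily use this exact implementation.

      import numpy as np

      def rdp(points, eps):
          """Simplify a polyline (N x 2 array) to within tolerance eps."""
          if len(points) < 3:
              return points
          start, end = points[0], points[-1]
          dx, dy = end - start
          norm = np.hypot(dx, dy) or 1.0          # guard closed loops (start == end)
          rel = points - start
          d = np.abs(dx * rel[:, 1] - dy * rel[:, 0]) / norm   # perpendicular distances
          i = int(np.argmax(d))
          if d[i] > eps:                           # keep the farthest point, recurse on both halves
              return np.vstack([rdp(points[:i + 1], eps)[:-1], rdp(points[i:], eps)])
          return np.array([start, end])

      road = np.array([[0, 0], [1, 0.1], [2, -0.1], [3, 5], [4, 6], [5, 7], [6, 8.1], [7, 9]])
      print(rdp(road, eps=1.0))                    # endpoints plus the sharp corner survive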

  3. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    SciTech Connect

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D. Kühn, Oliver

    2015-06-28

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. The linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique, attracts particular attention in this respect. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  4. Generalization of continuous-variable quantum cloning with linear optics

    SciTech Connect

    Zhai Zehui; Guo Juan; Gao Jiangrui

    2006-05-15

    We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.

  5. Generalized space and linear momentum operators in quantum mechanics

    SciTech Connect

    Costa, Bruno G. da

    2014-06-15

    We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator p{sup ^}{sub q} and its canonically conjugate deformed position operator x{sup ^}{sub q}. A canonical transformation maps the Hamiltonian of a position-dependent mass particle to the Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is presented as an example. Uncertainty and correspondence principles are analyzed.

  6. General mirror pairs for gauged linear sigma models

    NASA Astrophysics Data System (ADS)

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-11-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  7. Finding Nonoverlapping Substructures of a Sparse Matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2005-08-11

    Many applications of scientific computing rely on computations on sparse matrices. The design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far away from the peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d} producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping dense blocks in a sparse matrix, which is previously not studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm that runs in linear time in the number of nonzeros in the matrix. This extended abstract focuses on our results for 2x2 dense blocks. However we show that our results can be generalized to arbitrary sized dense blocks, and many other oriented substructures, which can be exploited to improve the memory performance of sparse matrix operations.
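
    As a baseline for the problem above, the sketch below greedily claims fully dense 2x2 blocks in first-come order; it runs in time roughly linear in the number of nonzeros but is only a heuristic, whereas the paper's 2/3-approximation algorithm is more careful.

      import numpy as np
      import scipy.sparse as sp

      def greedy_2x2_blocks(A):
          A = A.tocsr()
          nz = set(zip(*A.nonzero()))           # coordinates of all nonzeros
          used = set()
          blocks = []
          for (i, j) in sorted(nz):
              cells = {(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)}
              if cells <= nz and not (cells & used):
                  blocks.append((i, j))         # top-left corner of a claimed block
                  used |= cells
          return blocks

      A = sp.random(200, 200, density=0.05, random_state=3, format="csr")
      print(len(greedy_2x2_blocks(A)))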

  8. Marginally specified generalized linear mixed models: a robust approach.

    PubMed

    Mills, J E; Field, C A; Dupuis, D J

    2002-12-01

    Longitudinal data modeling is complicated by the necessity to deal appropriately with the correlation between observations made on the same individual. Building on an earlier nonrobust version proposed by Heagerty (1999, Biometrics 55, 688-698), our robust marginally specified generalized linear mixed model (ROBMS-GLMM) provides an effective method for dealing with such data. This model is one of the first to allow both population-averaged and individual-specific inference. As well, it adopts the flexibility and interpretability of generalized linear mixed models for introducing dependence but builds a regression structure for the marginal mean, allowing valid application with time-dependent (exogenous) and time-independent covariates. These new estimators are obtained as solutions of a robustified likelihood equation involving Huber's least favorable distribution and a collection of weights. Huber's least favorable distribution produces estimates that are resistant to certain deviations from the random effects distributional assumptions. Innovative weighting strategies enable the ROBMS-GLMM to perform well when faced with outlying observations both in the response and covariates. We illustrate the methodology with an analysis of a prospective longitudinal study of laryngoscopic endotracheal intubation, a skill that numerous health-care professionals are expected to acquire. The principal goal of our research is to achieve robust inference in longitudinal analyses. PMID:12495126

  9. Cervigram image segmentation based on reconstructive sparse representations

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris

    2010-03-01

    We proposed an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparse constraints and becomes more separable. The K-SVD algorithm is employed to find sparse representations and corresponding dictionaries. The data can be reconstructed from its sparse representations and positive and/or negative dictionaries. Classification can be achieved by comparing the reconstructive errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.
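
    The reconstructive-classification step can be sketched as follows: one dictionary per class, sparse-code the sample against each, and pick the class with the smaller reconstruction error. Random matrices stand in for the K-SVD-learned dictionaries here, and orthogonal matching pursuit replaces the paper's exact solver.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def src_predict(x, dictionaries, n_nonzero=5):
          errors = []
          for D in dictionaries:                      # one dictionary (features x atoms) per class
              code = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
              errors.append(np.linalg.norm(x - D @ code))
          return int(np.argmin(errors))               # class with the smallest reconstruction error

      rng = np.random.default_rng(0)
      unit = lambda D: D / np.linalg.norm(D, axis=0)  # unit-norm atoms
      D_pos = unit(rng.normal(0.5, 1.0, size=(30, 40)))
      D_neg = unit(rng.normal(-0.5, 1.0, size=(30, 40)))
      sample = rng.normal(0.5, 1.0, size=30)
      print(src_predict(sample, [D_pos, D_neg]))      # typically 0 (the positive-mean class)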

  10. Structured Multifrontal Sparse Solver

    Energy Science and Technology Software Center (ESTSC)

    2014-05-01

    StruMF is an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner corresponds to a multifrontal variant of sparse LU factorization in which some dense blocks of the factors are approximated with low-rank matrices. It is algebraic in that it only requires the linear system itself, and the approximation threshold that determines the accuracy of individual low-rank approximations. Favourable rank properties are obtained using a block partitioning which is a refinement of the partitioning induced by nested dissection ordering.

  11. Optimization in generalized linear models: A case study

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina

    2016-06-01

    The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness of fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates of the parameters of a GLM.
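
    Fitting a GLM by direct minimization of the negative log-likelihood with BFGS is straightforward to sketch; a Poisson regression on synthetic data is shown (PSwarm, the derivative-free option, is not).

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
      beta_true = np.array([0.5, 1.0, -0.7])
      y = rng.poisson(np.exp(X @ beta_true))

      def negloglik(beta):
          eta = X @ beta
          return np.sum(np.exp(eta) - y * eta)        # Poisson NLL up to a constant

      def grad(beta):
          return X.T @ (np.exp(X @ beta) - y)

      fit = minimize(negloglik, np.zeros(3), jac=grad, method="BFGS")
      print(fit.x)                                    # close to beta_true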

  12. The left invariant metric in the general linear group

    NASA Astrophysics Data System (ADS)

    Andruchow, E.; Larotonda, G.; Recht, L.; Varela, A.

    2014-12-01

    Left invariant metrics induced by the p-norms of the trace in the matrix algebra are studied on the general linear group. By means of the Euler-Lagrange equations, existence and uniqueness of extremal paths for the length functional are established, and regularity properties of these extremal paths are obtained. Minimizing paths in the group are shown to have a velocity with constant singular values and multiplicity. In several special cases, these geodesic paths are computed explicitly. In particular the Riemannian geodesics, corresponding to the case p = 2, are characterized as the product of two one-parameter groups. It is also shown that geodesics are one-parameter groups if and only if the initial velocity is a normal matrix. These results are further extended to the context of compact operators with p-summable spectrum, where a differential equation for the spectral projections of the velocity vector of an extremal path is obtained.

  13. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
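
    The overall iteration is easy to sketch in dense form; a dense LU stands in for the paper's parallel banded solvers, and the shift sigma plays the role described above.

      import numpy as np
      from scipy.linalg import qr, eigh, lu_factor, lu_solve

      def subspace_iteration(A, B, k, sigma=0.0, n_iter=50):
          n = A.shape[0]
          lu = lu_factor(A - sigma * B)               # the paper uses parallel banded solvers here
          X = np.random.default_rng(0).normal(size=(n, k + 2))
          for _ in range(n_iter):
              X = lu_solve(lu, B @ X)                 # shifted inverse iteration on the block
              X, _ = qr(X, mode="economic")           # re-orthonormalize
          lam, V = eigh(X.T @ A @ X, X.T @ B @ X)     # Rayleigh-Ritz on the subspace
          return lam[:k], X @ V[:, :k]

      n = 200
      A = np.diag(np.arange(1.0, n + 1))              # stand-in for a banded stiffness matrix
      B = np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
      lam, V = subspace_iteration(A, B, k=4, sigma=0.0)
      print(lam)                                      # smallest generalized eigenvalues of (A, B)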

  14. The Increase in Animal Mortality Risk following Exposure to Sparsely Ionizing Radiation Is Not Linear Quadratic with Dose

    PubMed Central

    Haley, Benjamin M.; Paunesku, Tatjana; Grdina, David J.; Woloschak, Gayle E.

    2015-01-01

    Introduction The US government regulates allowable radiation exposures relying, in large part, on the seventh report from the committee to estimate the Biological Effect of Ionizing Radiation (BEIR VII), which estimated that most contemporary exposures, protracted or low-dose, carry 1.5 fold less risk of carcinogenesis and mortality per Gy than acute exposures of atomic bomb survivors. This correction is known as the dose and dose rate effectiveness factor for the life span study of atomic bomb survivors (DDREFLSS). It was calculated by applying a linear-quadratic dose response model to data from Japanese atomic bomb survivors and a limited number of animal studies. Methods and Results We argue that the linear-quadratic model does not provide appropriate support to estimate the risk of contemporary exposures. In this work, we re-estimated DDREFLSS using 15 animal studies that were not included in BEIR VII’s original analysis. Acute exposure data led to a DDREFLSS estimate from 0.9 to 3.0. By contrast, data that included both acute and protracted exposures led to a DDREFLSS estimate from 4.8 to infinity. These two estimates are significantly different, violating the assumptions of the linear-quadratic model, which predicts that DDREFLSS values calculated in either way should be the same. Conclusions Therefore, we propose that future estimates of the risk of protracted exposures should be based on direct comparisons of data from acute and protracted exposures, rather than from extrapolations from a linear-quadratic model. The risk of low dose exposures may be extrapolated from these protracted estimates, though we encourage ongoing debate as to whether this is the most valid approach. We also encourage efforts to enlarge the datasets used to estimate the risk of protracted exposures by including both human and animal data, carcinogenesis outcomes, a wider range of exposures, and by making more radiobiology data publicly accessible. We believe that these steps will

  15. General Linear Rf-Current Drive Calculation in Toroidal Plasma

    NASA Astrophysics Data System (ADS)

    Smirnov, A. P.; Harvey, R. W.; Prater, R.

    2009-04-01

    A new general linear calculation of RF current drive has been implemented in the GENRAY all-frequencies RF ray tracing code. This is referred to as the ADJ-QL package, and is based on the Karney, et al. [1] relativistic Green function calculator, ADJ, generalized to non-circular plasmas in toroidal geometry, and coupled with full, bounce-averaged momentum-space RF quasilinear flux [2] expressions calculated at each point along the RF ray trajectories. This approach includes momentum conservation, polarization effects and the influence of trapped electrons. It is assumed that the electron distribution function remains close to a relativistic Maxwellian function. Within the bounds of these assumptions, small banana width, toroidal geometry and low collisionality, the calculation is applicable for all-frequencies RF electron current drive including electron cyclotron, lower hybrid, fast waves and electron Bernstein waves. GENRAY ADJ-QL calculations of the relativistic momentum-conserving current drive have been applied in several cases: benchmarking of electron cyclotron current drive in ITER against other code results; and electron Bernstein and high harmonic fast wave current drive in NSTX. The impacts of momentum conservation on the current drive are also shown for these cases.

  16. The increase in animal mortality risk following exposure to sparsely ionizing radiation is not linear quadratic with dose

    DOE PAGESBeta

    Haley, Benjamin M.; Paunesku, Tatjana; Grdina, David J.; Woloschak, Gayle E.; Aravindan, Natarajan

    2015-12-09

    The US government regulates allowable radiation exposures relying, in large part, on the seventh report from the committee to estimate the Biological Effect of Ionizing Radiation (BEIR VII), which estimated that most contemporary exposures, protracted or low-dose, carry 1.5 fold less risk of carcinogenesis and mortality per Gy than acute exposures of atomic bomb survivors. This correction is known as the dose and dose rate effectiveness factor for the life span study of atomic bomb survivors (DDREFLSS). As a result, it was calculated by applying a linear-quadratic dose response model to data from Japanese atomic bomb survivors and a limited number of animal studies.

  17. The increase in animal mortality risk following exposure to sparsely ionizing radiation is not linear quadratic with dose

    SciTech Connect

    Haley, Benjamin M.; Paunesku, Tatjana; Grdina, David J.; Woloschak, Gayle E.; Aravindan, Natarajan

    2015-12-09

    The US government regulates allowable radiation exposures relying, in large part, on the seventh report from the committee on the Biological Effects of Ionizing Radiation (BEIR VII), which estimated that most contemporary exposures, protracted or low-dose, carry 1.5-fold less risk of carcinogenesis and mortality per Gy than acute exposures of atomic bomb survivors. This correction is known as the dose and dose rate effectiveness factor for the life span study of atomic bomb survivors (DDREFLSS). It was calculated by applying a linear-quadratic dose response model to data from Japanese atomic bomb survivors and a limited number of animal studies.

  18. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), in which covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  19. Sparse representation with kernels.

    PubMed

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien

    2013-02-01

    Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves good performance for image classification. Moreover, KSR-based feature coding can be shown to be a generalization of the efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft assignment extension of HIK-based feature quantization in the feature coding process. Besides feature coding, compared with sparse coding, KSR can learn more discriminative sparse codes and achieve higher accuracy for face recognition. Moreover, KSR can also be applied to kernel matrix approximation in large-scale learning tasks, where it demonstrates robustness, especially when a small fraction of the data is used. Extensive experimental results demonstrate promising results of KSR in image classification, face recognition, and kernel matrix approximation. All these applications prove the effectiveness of KSR in computer vision and machine learning tasks. PMID:23014744
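
    A minimal sketch of the kernel-sparse-coding idea, assuming an explicit Nyström approximation of the implicit feature map followed by ordinary l1 coding; the dictionary, kernel parameters, and solver below are illustrative stand-ins, not the authors' implementation:

```python
# Approximate KSR: map dictionary atoms and the test sample into an
# (approximate) RBF kernel feature space, then sparse-code there.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(50, 20))   # 50 basis atoms, 20 raw features
x = rng.normal(size=(1, 20))             # sample to encode

feature_map = Nystroem(kernel="rbf", gamma=0.1, n_components=30, random_state=0)
phi_dict = feature_map.fit_transform(dictionary)   # atoms in feature space
phi_x = feature_map.transform(x)                   # sample in feature space

# Sparse code: phi_x ~ phi_dict.T @ codes, with an l1 penalty on the codes.
coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
coder.fit(phi_dict.T, phi_x.ravel())
codes = coder.coef_
print("non-zeros in code:", np.count_nonzero(codes))
```

    The Nyström step is what makes the implicit mapping explicit; the exact KSR objective can instead be solved purely through kernel evaluations, at higher cost.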

  20. Linear and generalized linear models for the detection of QTL effects on within-subject variability

    PubMed Central

    Wittenburg, Dörte; Guiard, Volker; Liese, Friedrich; Reinsch, Norbert

    2007-01-01

    Summary Quantitative trait loci (QTLs) may affect not only the mean of a trait but also its variability. A special aspect is the variability between multiple measured traits of genotyped animals, such as the within-litter variance of piglet birth weights. The sample variance of repeated measurements is assigned as an observation for every genotyped individual. It is shown that the conditional distribution of the non-normally distributed trait can be approximated by a gamma distribution. To detect QTL effects in the daughter design, a generalized linear model with the identity link function is applied. Suitable test statistics are constructed to test the null hypothesis H0: no QTL with effect on the within-litter variance is segregating, versus HA: there is a QTL with effect on the variability of birth weight within litter. Furthermore, estimates of the QTL effect and the QTL position are introduced and discussed. The efficiency of the presented tests is compared with that of a test based on weighted regression. The type I error probability as well as the power of QTL detection are discussed and compared for the different tests. PMID:18208630
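
    A minimal sketch of the core model, a gamma GLM with identity link fitted to per-individual sample variances, using statsmodels on simulated data; the marker coding and the plain Wald test are illustrative simplifications of the paper's purpose-built test statistics:

```python
# Gamma GLM with identity link: does a marker shift the within-litter variance?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
marker = rng.integers(0, 2, size=n)                 # 0/1 inherited allele (toy)
mu = 2.0 + 0.8 * marker                             # QTL shifts the mean variance
sample_var = rng.gamma(shape=4.0, scale=mu / 4.0)   # gamma "trait": E = mu

X = sm.add_constant(marker.astype(float))
model = sm.GLM(sample_var, X,
               family=sm.families.Gamma(link=sm.families.links.Identity()))
result = model.fit()
print(result.summary())   # Wald z on the marker term ~ "is a QTL segregating?"
```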

  1. User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    2000-01-01

    PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real, sparse direct matrix solvers to factor and solve the real matrices. The solution vector is reconverted to complex numbers. Though this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are given to help users integrate PCSMS routines into their own codes.
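
    The conversion PCSMS performs is the standard real embedding of a complex system: (Ar + i·Ai)(xr + i·xi) = br + i·bi becomes a real system of twice the size. A minimal sketch with SciPy (illustrating the idea, not the PCSMS code itself):

```python
# Solve a complex sparse system with a *real* sparse solver via the
# 2x2 block embedding [[Ar, -Ai], [Ai, Ar]] [xr; xi] = [br; bi].
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(2)
n = 200
A = sp.random(n, n, density=0.05, random_state=2) + sp.eye(n)
A = A + 1j * sp.random(n, n, density=0.05, random_state=3)   # complex matrix
b = rng.normal(size=n) + 1j * rng.normal(size=n)

Ar, Ai = A.real.tocsr(), A.imag.tocsr()
K = sp.bmat([[Ar, -Ai],
             [Ai,  Ar]], format="csc")        # real 2n x 2n embedding
rhs = np.concatenate([b.real, b.imag])

y = spsolve(K, rhs)                           # any real sparse direct solver
x = y[:n] + 1j * y[n:]                        # reconvert to complex
print("residual:", np.linalg.norm(A @ x - b))
```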

  2. An assessment of estimation methods for generalized linear mixed models with binary outcomes

    PubMed Central

    Capanu, Marinela; Gönen, Mithat; Begg, Colin B.

    2013-01-01

    Two main classes of methodology have been developed for addressing the analytical intractability of generalized linear mixed models (GLMMs): likelihood-based methods and Bayesian methods. Likelihood-based methods such as the penalized quasi-likelihood approach have been shown to produce biased estimates, especially for binary clustered data with small cluster sizes. More recent methods using adaptive Gaussian quadrature perform well but can be overwhelmed by problems with large numbers of random effects, and efficient algorithms to better handle these situations have not yet been integrated in standard statistical packages. Bayesian methods, though they have good frequentist properties when the model is correct, are known to be computationally intensive and also require specialized code, limiting their use in practice. In this article we introduce a modification of the hybrid approach of Capanu and Begg [1] as a bridge between the likelihood-based and Bayesian approaches by employing Bayesian estimation for the variance components followed by Laplacian estimation for the regression coefficients. We investigate its performance as well as that of several likelihood-based methods in the setting of GLMMs with binary outcomes. We apply the methods to three datasets and conduct simulations to illustrate their properties. Simulation results indicate that for moderate to large numbers of observations per random effect, adaptive Gaussian quadrature and the Laplacian approximation are very accurate, with adaptive Gaussian quadrature preferable as the number of observations per random effect increases. The hybrid approach is overall similar to the Laplace method, and it can be superior for data with very sparse random effects. PMID:23839712

  3. Connections between Generalizing and Justifying: Students' Reasoning with Linear Relationships

    ERIC Educational Resources Information Center

    Ellis, Amy B.

    2007-01-01

    Research investigating algebra students' abilities to generalize and justify suggests that they experience difficulty in creating and using appropriate generalizations and proofs. Although the field has documented students' errors, less is known about what students do understand to be general and convincing. This study examines the ways in which…

  4. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. IV. Linear-scaling second-order explicitly correlated energy with pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Pavošević, Fabijan; Pinski, Peter; Riplinger, Christoph; Neese, Frank; Valeev, Edward F.

    2016-04-01

    We present a formulation of the explicitly correlated second-order Møller-Plesset (MP2-F12) energy in which all nontrivial post-mean-field steps are formulated with linear computational complexity in system size. The two key ideas are the use of pair-natural orbitals for compact representation of wave function amplitudes and the use of domain approximation to impose the block sparsity. This development utilizes the concepts for sparse representation of tensors described in the context of the domain based local pair-natural orbital-MP2 (DLPNO-MP2) method by us recently [Pinski et al., J. Chem. Phys. 143, 034108 (2015)]. Novel developments reported here include the use of domains not only for the projected atomic orbitals, but also for the complementary auxiliary basis set (CABS) used to approximate the three- and four-electron integrals of the F12 theory, and a simplification of the standard B intermediate of the F12 theory that avoids computation of four-index two-electron integrals that involve two CABS indices. For quasi-one-dimensional systems (n-alkanes), the O(N) DLPNO-MP2-F12 method becomes less expensive than the conventional O(N^5) MP2-F12 for n between 10 and 15, for double- and triple-zeta basis sets; for the largest alkane, C200H402, in the def2-TZVP basis, the observed computational complexity is ~N^1.6, largely due to the cubic cost of computing the mean-field operators. The method reproduces the canonical MP2-F12 energy with high precision: 99.9% of the canonical correlation energy is recovered with the default truncation parameters. Although its cost is significantly higher than that of the DLPNO-MP2 method, the cost increase is compensated by the great reduction of the basis set error due to explicit correlation.

  5. Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs

    ERIC Educational Resources Information Center

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…

  6. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  7. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.
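
    For readers wanting to experiment, SciPy can read both the Collection's native Harwell-Boeing format and the Matrix Market exchange format used for web distribution; the file names below are placeholders for matrices downloaded from such collections:

```python
# Loading test matrices in the two common collection formats.
import scipy.io

# Harwell-Boeing format (the Collection's native format)
A = scipy.io.hb_read("bcsstk14.rsa")        # returns a sparse matrix
print(A.shape, A.nnz)

# Matrix Market format, the web-friendly exchange format
B = scipy.io.mmread("bcsstk14.mtx").tocsr()
print(B.shape, B.nnz)
```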

  8. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C., Jr.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
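
    A minimal sketch contrasting the two model families on simulated presence/absence data; the statsmodels GAM API (GLMGam with B-spline smoothers) is assumed, and all data and parameters are illustrative:

```python
# Binomial GLM vs GAM for species presence along an environmental gradient.
import numpy as np
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(3)
n = 400
temp = rng.uniform(0, 30, size=n)                            # predictor
logit = -6 + 0.8 * temp - 0.02 * temp**2                     # hump-shaped truth
present = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GLM: logistic regression with an explicit quadratic term
X = sm.add_constant(np.column_stack([temp, temp**2]))
glm = sm.GLM(present, X, family=sm.families.Binomial()).fit()

# GAM: let a spline smoother discover the shape instead
splines = BSplines(temp[:, None], df=[6], degree=[3])
gam = GLMGam(present, exog=np.ones((n, 1)), smoother=splines,
             family=sm.families.Binomial()).fit()
print(glm.aic, gam.aic)
```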

  9. Generalizing a categorization of students' interpretations of linear kinematics graphs

    NASA Astrophysics Data System (ADS)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-06-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  10. Threaded Operations on Sparse Matrices

    SciTech Connect

    Sneed, Brett

    2015-09-01

    We investigate the use of sparse matrices and OpenMP multi-threading on linear algebra operations involving them. Several sparse matrix data structures are presented. Implementation of the multi-threading primarily occurs in the level one and two BLAS functions used within the four algorithms investigated: the Power Method, Conjugate Gradient, Biconjugate Gradient, and Jacobi's Method. The benefits of launching threads once per high-level algorithm are explored.
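
    A serial reference sketch of one of the four algorithms, the power method, built from the same level-1/2 kernels (dot, norm, matrix-vector product) that the report threads with OpenMP; SciPy runs these kernels in compiled code:

```python
# Power method on a sparse matrix, expressed in BLAS-like kernels.
import numpy as np
import scipy.sparse as sp

def power_method(A, tol=1e-10, max_iter=1000):
    """Dominant eigenpair of A via repeated sparse mat-vec."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                      # level-2 kernel: sparse mat-vec
        lam_new = x @ y                # level-1 kernel: dot (Rayleigh quotient)
        x = y / np.linalg.norm(y)      # level-1 kernels: norm + scale
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, x
        lam = lam_new
    return lam, x

A = sp.diags([1.0, 2.0, 5.0, 10.0]).tocsr()   # toy diagonal test matrix
lam, _ = power_method(A)
print(lam)    # ~10, the dominant eigenvalue
```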

  11. Impact of the implementation of MPI point-to-point communications on the performance of two general sparse solvers

    SciTech Connect

    Amestoy, Patrick R.; Duff, Iain S.; L'Excellent, Jean-Yves; Li, Xiaoye S.

    2001-10-10

    We examine the mechanics of the send and receive mechanism of MPI and in particular how we can implement message passing in a robust way so that our performance is not significantly affected by changes to the MPI system. This leads us to use the Isend/Irecv protocol, which sometimes entails significant algorithmic changes. We discuss this within the context of two different algorithms for sparse Gaussian elimination that we have parallelized. One is a multifrontal solver called MUMPS, the other is a supernodal solver called SuperLU. Both algorithms are difficult to parallelize on distributed memory machines. Our initial strategies were based on simple MPI point-to-point communication primitives. With such approaches, the parallel performance of both codes is very sensitive to the MPI implementation, in particular the way MPI internal buffers are used. We then modified our codes to use more sophisticated nonblocking versions of MPI communication. This significantly improved the performance robustness (independent of the MPI buffering mechanism) and scalability, but at the cost of increased code complexity.
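
    A minimal mpi4py sketch of the nonblocking pattern described, assuming a ring exchange between ranks; posting Isend/Irecv up front and computing before the Waitall is what decouples progress from MPI's internal buffering:

```python
# Run with e.g. `mpiexec -n 2 python isend_sketch.py` (requires mpi4py).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
peer = (rank + 1) % size

send_buf = np.full(1000, float(rank))
recv_buf = np.empty(1000)

# Post both operations before any waiting; neither depends on MPI's
# internal buffering policy, which is the robustness point of the paper.
reqs = [comm.Isend(send_buf, dest=peer, tag=7),
        comm.Irecv(recv_buf, source=peer, tag=7)]

local = send_buf @ send_buf        # overlap: compute while messages move

MPI.Request.Waitall(reqs)          # completion point
print(rank, "received", recv_buf[0], "from", peer, "| local dot:", local)
```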

  12. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating characteristic curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
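
    A minimal sketch of the calibrate-then-transfer workflow, assuming a logistic-regression GLM and simulated terrain attributes; the AUROC on the held-out "site" plays the role of the Cape Bounty validation:

```python
# Calibrate a susceptibility GLM on pooled sites, validate on an unseen site.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def make_site(n, shift=0.0):
    # columns: slope, solar radiation, wetness index (standardized, simulated)
    X = rng.normal(loc=shift, size=(n, 3))
    logit = -0.5 + 1.2 * X[:, 0] + 0.8 * X[:, 2]   # slope and wetness matter
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return X, y

X_cal, y_cal = make_site(800)             # pooled calibration sites
X_new, y_new = make_site(300, shift=0.3)  # independent site (Cape Bounty role)

glm = LogisticRegression().fit(X_cal, y_cal)
auc = roc_auc_score(y_new, glm.predict_proba(X_new)[:, 1])
print(f"transferred-model AUROC: {auc:.2f}")   # cf. the ~0.76 reported
```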

  13. A generalization of the Nyquist stability criterion. [extension to multivariable linear feedback systems

    NASA Technical Reports Server (NTRS)

    Stevens, P. K.

    1981-01-01

    This paper presents a generalization of the Nyquist stability criterion to include general multivariable linear stationary systems subject to linear static and dynamic feedback. At the same time, a unifying proof is given for all known versions of the Nyquist criterion for finite dimensional systems.

  14. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  15. A note on rank reduction in sparse multivariate regression

    PubMed Central

    Chen, Kun; Chan, Kung-Sik

    2016-01-01

    A reduced-rank regression with sparse singular value decomposition (RSSVD) approach was proposed by Chen et al. for conducting variable selection in a reduced-rank model. To jointly model the multivariate response, the method efficiently constructs a prespecified number of latent variables as some sparse linear combinations of the predictors. Here, we generalize the method to also perform rank reduction, and enable its usage in reduced-rank vector autoregressive (VAR) modeling to perform automatic rank determination and order selection. We show that in the context of stationary time-series data, the generalized approach correctly identifies both the model rank and the sparse dependence structure between the multivariate response and the predictors, with probability one asymptotically. We demonstrate the efficacy of the proposed method by simulations and by analyzing a macroeconomic multivariate time series using a reduced-rank VAR model. PMID:26997938

  16. Computer analysis of general linear networks using digraphs.

    NASA Technical Reports Server (NTRS)

    Mcclenahan, J. O.; Chan, S.-P.

    1972-01-01

    Investigation of the application of digraphs in analyzing general electronic networks, and development of a computer program based on a particular digraph method developed by Chen. The Chen digraph method is a topological method for solution of networks and serves as a shortcut when hand calculations are required. The advantage offered by this method of analysis is that the results are in symbolic form. It is limited, however, by the size of network that may be handled. Usually hand calculations become too tedious for networks larger than about five nodes, depending on how many elements the network contains. Direct determinant expansion for a five-node network is a very tedious process also.

  17. P-SPARSLIB: A parallel sparse iterative solution package

    SciTech Connect

    Saad, Y.

    1994-12-31

    Iterative methods are gaining popularity in engineering and the sciences at a time when the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable on a variety of platforms, including SIMD environments and shared memory environments.

  18. Maladaptive Behavioral Consequences of Conditioned Fear-Generalization: A Pronounced, Yet Sparsely Studied, Feature of Anxiety Pathology

    PubMed Central

    van Meurs, Brian; Wiggert, Nicole; Wicker, Isaac; Lissek, Shmuel

    2016-01-01

    Fear-conditioning experiments in the anxiety disorders focus almost exclusively on passive-emotional, Pavlovian conditioning, rather than active-behavioral, instrumental conditioning. Paradigms eliciting both types of conditioning are needed to study maladaptive, instrumental behaviors resulting from Pavlovian abnormalities found in clinical anxiety. One such Pavlovian abnormality is generalization of fear from a conditioned danger-cue (CS+) to resembling stimuli. Though lab-based findings repeatedly link overgeneralized Pavlovian-fear to clinical anxiety, no study assesses the degree to which Pavlovian overgeneralization corresponds with maladaptive, overgeneralized instrumental-avoidance. The current effort fills this gap by validating a novel fear-potentiated startle paradigm including Pavlovian and instrumental components. The paradigm is embedded in a computer game during which shapes appear on the screen. One shape paired with electric-shock serves as CS+, and other resembling shapes, presented in the absence of shock, serve as generalization stimuli (GSs). During the game, participants choose whether to behaviorally avoid shock at the cost of poorer performance. Avoidance during CS+ is considered adaptive because shock is a real possibility. By contrast, avoidance during GSs is considered maladaptive because shock is not a realistic prospect and thus unnecessarily compromises performance. Results indicate significant Pavlovian-instrumental relations, with greater generalization of Pavlovian fear associated with overgeneralization of maladaptive instrumental-avoidance. PMID:24768950

  19. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. The understanding of facial expression is a basic requirement in the development of next-generation human computer interaction systems. Research shows that the intrinsic facial features always hide in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with the constraint of a linear combination equation. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
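
    A minimal sketch of a sparse representation classifier, assuming Lasso as a stand-in for the paper's l1-minimization solver: the test sample is coded over the training samples and assigned to the class whose atoms reconstruct it best:

```python
# Sparse representation classifier (SRC): l1-code, then class-wise residuals.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(X_train.T, x_test)               # columns of X_train.T = samples
    code = coder.coef_
    residuals = {}
    for cls in np.unique(y_train):
        part = np.where(y_train == cls, code, 0.0)   # keep this class's atoms
        residuals[cls] = np.linalg.norm(x_test - X_train.T @ part)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(5)
centers = rng.normal(size=(3, 30))                        # 3 "expression" classes
X = np.vstack([c + 0.3 * rng.normal(size=(20, 30)) for c in centers])
y = np.repeat([0, 1, 2], 20)
x_new = centers[1] + 0.3 * rng.normal(size=30)
print("predicted class:", src_predict(X, y, x_new))       # expect 1
```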

  20. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676

  1. Voxel selection in FMRI data analysis based on sparse representation.

    PubMed

    Li, Yuanqing; Namburi, Praneeth; Yu, Zhuliang; Guan, Cuntai; Feng, Jianfeng; Gu, Zhenghui

    2009-10-01

    Multivariate pattern analysis approaches toward detection of brain regions from fMRI data have been gaining attention recently. In this study, we introduce an iterative sparse-representation-based algorithm for detection of voxels in functional MRI (fMRI) data with task-relevant information. In each iteration of the algorithm, a linear programming problem is solved and a sparse weight vector is subsequently obtained. The final weight vector is the mean of those obtained in all iterations. The characteristics of our algorithm are as follows: 1) the weight vector (output) is sparse; 2) the magnitude of each entry of the weight vector represents the significance of its corresponding variable or feature in a classification or regression problem; and 3) due to the convergence of this algorithm, a stable weight vector is obtained. To demonstrate the validity of our algorithm and illustrate its application, we apply the algorithm to the Pittsburgh Brain Activity Interpretation Competition 2007 fMRI dataset for selecting the voxels which are the most relevant to the tasks of the subjects. Based on this dataset, the aforementioned characteristics of our algorithm are analyzed, and a comparison is performed between our method and the univariate general-linear-model-based statistical parametric mapping. Using our method, a combination of voxels is selected based on the principle of effective/sparse representation of a task. Data analysis results in this paper show that this combination of voxels is suitable for decoding tasks and demonstrate the effectiveness of our method. PMID:19567340
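
    A minimal sketch of the iterate-and-average idea, assuming l1-penalized logistic regression on resampled data in place of the paper's linear-programming step; voxels with large mean weights are the "selected" features:

```python
# Iterate: fit a sparse model on resampled data; average the weight vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, n_voxels, informative = 200, 500, 10
X = rng.normal(size=(n, n_voxels))
beta = np.zeros(n_voxels)
beta[:informative] = 1.5                        # only 10 voxels carry signal
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

weights = np.zeros(n_voxels)
n_iter = 20
for _ in range(n_iter):
    idx = rng.choice(n, size=n, replace=True)   # resample, then sparse fit
    model = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
    model.fit(X[idx], y[idx])
    weights += model.coef_.ravel()
weights /= n_iter                               # stable averaged weight vector

selected = np.argsort(-np.abs(weights))[:informative]
print("top-weighted voxels:", np.sort(selected))   # mostly indices 0..9
```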

  2. Consistent linearization of the element-independent corotational formulation for the structural analysis of general shells

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.

    1988-01-01

    A consistent linearization is provided for the element-independent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.

  3. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  4. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  5. The Logic and Interpretation of Structure Coefficients in Multivariate General Linear Model Analyses.

    ERIC Educational Resources Information Center

    Henson, Robin K.

    In General Linear Model (GLM) analyses, it is important to interpret structure coefficients, along with standardized weights, when evaluating variable contribution to observed effects. Although often used in canonical correlation analysis, structure coefficients are less frequently used in multiple regression and several other multivariate…

  6. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  7. Learning Sparse Representations of Depth

    NASA Astrophysics Data System (ADS)

    Tosic, Ivana; Olshausen, Bruno A.; Culpepper, Benjamin J.

    2011-09-01

    This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of the stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, typically obtained by laser range scanners or structured light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique, where the first layer is solved using an existing MRF-based stereo matching algorithm, then held fixed as the second layer is solved using the proposed non-stationary sparse coding algorithm. This leads to a general method for improving solutions of state of the art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using learned representations leads to state of the art denoising of depth maps obtained from laser range scanners and a time of flight camera. Furthermore, we show that adding sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm by Boykov et al. and the more recent algorithm of Woodford et al.

  8. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.

  9. Preliminary results in implementing a model of the world economy on the CYBER 205: A case of large sparse nonsymmetric linear equations

    NASA Technical Reports Server (NTRS)

    Szyld, D. B.

    1984-01-01

    A brief description of the Model of the World Economy implemented at the Institute for Economic Analysis is presented, together with our experience in converting the software to vector code. For each time period, the model is reduced to a linear system of over 2000 variables. The matrix of coefficients has a bordered block diagonal structure, and we show how some of the matrix operations can be carried out on all diagonal blocks at once.

  10. Appearance characterization of linear Lambertian objects, generalized photometric stereo, and illumination-invariant face recognition.

    PubMed

    Zhou, Shaohua Kevin; Aggarwal, Gaurav; Chellappa, Rama; Jacobs, David W

    2007-02-01

    Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object. In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented. PMID:17170477

  11. H∞ filtering of Markov jump linear systems with general transition probabilities and output quantization.

    PubMed

    Shen, Mouquan; Park, Ju H

    2016-07-01

    This paper addresses the H∞ filtering of continuous Markov jump linear systems with general transition probabilities and output quantization. S-procedure is employed to handle the adverse influence of the quantization and a new approach is developed to conquer the nonlinearity induced by uncertain and unknown transition probabilities. Then, sufficient conditions are presented to ensure the filtering error system to be stochastically stable with the prescribed performance requirement. Without specified structure imposed on introduced slack variables, a flexible filter design method is established in terms of linear matrix inequalities. The effectiveness of the proposed method is validated by a numerical example. PMID:27129765

  12. Comparison of real-time and linear-response time-dependent density functional theories for molecular chromophores ranging from sparse to high densities of states

    SciTech Connect

    Tussupbayev, Samat; Govind, Niranjan; Lopata, Kenneth A.; Cramer, Christopher J.

    2015-03-10

    We assess the performance of real-time time-dependent density functional theory (RT-TDDFT) for the calculation of absorption spectra of 12 organic dye molecules relevant to photovoltaics and dye sensitized solar cells with 8 exchange-correlation functionals (3 traditional, 3 global hybrids, and 2 range-separated hybrids). We compare the calculations with traditional linear-response (LR) TDDFT. In addition, we demonstrate the efficacy of the RT-TDDFT approach to calculate wide absorption spectra of two large chromophores relevant to photovoltaics and molecular switches.

  13. Non-linear regime of the Generalized Minimal Massive Gravity in critical points

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Adami, H.

    2016-03-01

    The Generalized Minimal Massive Gravity (GMMG) theory is realized by adding the CS deformation term, the higher derivative deformation term, and an extra term to pure Einstein gravity with a negative cosmological constant. In the present paper we obtain exact solutions to the GMMG field equations in the non-linear regime of the model. The GMMG model about AdS_3 space is conjectured to be dual to a 2-dimensional CFT. We study the theory in the critical points corresponding to the central charges c_-=0 or c_+=0, in the non-linear regime. We show that AdS_3 wave solutions are present, and have logarithmic form in the critical points. Then we study the AdS_3 non-linear deformation solution. Furthermore, we obtain the logarithmic deformation of the extremal BTZ black hole. After that, using the Abbott-Deser-Tekin method, we calculate the energy and angular momentum of these types of black hole solutions.

  14. Enhancing Scalability of Sparse Direct Methods

    SciTech Connect

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia,Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-07-23

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome scalability bottleneck of direct methods, in both time and memory. These include parallelizing symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers.
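
    SuperLU, one of the solvers developed under this effort, is what SciPy wraps for its sparse LU factorization; a minimal sketch of the factor-once, solve-many pattern that makes direct methods attractive:

```python
# Sparse direct solve via scipy's SuperLU wrapper: factor once, solve many.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
# 1-D Poisson matrix: tridiagonal, SPD, a classic direct-solver test case
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsc()

lu = splu(A)                       # symbolic + numeric factorization (SuperLU)
for k in range(3):                 # reuse the factors for several right-hand sides
    b = np.zeros(n); b[k] = 1.0
    x = lu.solve(b)
    print(k, np.linalg.norm(A @ x - b))
```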

  15. Linear and nonlinear associations between general intelligence and personality in Project TALENT.

    PubMed

    Major, Jason T; Johnson, Wendy; Deary, Ian J

    2014-04-01

    Research on the relations of personality traits to intelligence has primarily been concerned with linear associations. Yet, there are no a priori reasons why linear relations should be expected over nonlinear ones, which represent a much larger set of all possible associations. Using 2 techniques, quadratic and generalized additive models, we tested for linear and nonlinear associations of general intelligence (g) with 10 personality scales from Project TALENT (PT), a nationally representative sample of approximately 400,000 American high school students from 1960, divided into 4 grade samples (Flanagan et al., 1962). We departed from previous studies, including one with PT (Reeve, Meyer, & Bonaccio, 2006), by modeling latent quadratic effects directly, controlling the influence of the common factor in the personality scales, and assuming a direction of effect from g to personality. On the basis of the literature, we made 17 directional hypotheses for the linear and quadratic associations. Of these, 53% were supported in all 4 male grades and 58% in all 4 female grades. Quadratic associations explained substantive variance above and beyond linear effects (mean R² between 1.8% and 3.6%) for Sociability, Maturity, Vigor, and Leadership in males and Sociability, Maturity, and Tidiness in females; linear associations were predominant for other traits. We discuss how suited current theories of the personality-intelligence interface are to explain these associations, and how research on intellectually gifted samples may provide a unique way of understanding them. We conclude that nonlinear models can provide incremental detail regarding personality and intelligence associations. PMID:24660993
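
    A minimal sketch of the quadratic-association test on simulated data: regress a scale on g and g^2 and test the increment over the linear model (the models, not the Project TALENT data, are reproduced here):

```python
# Does a quadratic term add variance beyond the linear g -> trait relation?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
g = rng.normal(size=n)
sociability = 0.3 * g - 0.15 * g**2 + rng.normal(size=n)  # simulated curvature

lin = sm.OLS(sociability, sm.add_constant(g)).fit()
quad = sm.OLS(sociability, sm.add_constant(np.column_stack([g, g**2]))).fit()
print(f"R2 linear: {lin.rsquared:.3f}, R2 +quadratic: {quad.rsquared:.3f}")
print(quad.compare_f_test(lin))    # F-test for the quadratic increment
```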

  16. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645

  17. Block sparse Cholesky algorithms on advanced uniprocessor computers

    SciTech Connect

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
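
    A minimal dense sketch of a blocked left-looking Cholesky factorization, the kernel structure both methods share; sparse implementations add fill analysis and supernode detection on top of the same block operations:

```python
# Blocked left-looking Cholesky: update each block column from the "left",
# factor the diagonal block, triangular-solve the panel below it.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def blocked_cholesky(A, nb=32):
    A = A.copy()
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(0, n, nb):
        je = min(j + nb, n)
        # left-looking update from previously finished block columns
        A[j:, j:je] -= L[j:, :j] @ L[j:je, :j].T
        # factor the diagonal block
        L[j:je, j:je] = cholesky(A[j:je, j:je], lower=True)
        if je < n:
            # panel solve: L[je:, j:je] = A[je:, j:je] @ inv(L_jj)^T
            L[je:, j:je] = solve_triangular(
                L[j:je, j:je], A[je:, j:je].T, lower=True).T
    return L

rng = np.random.default_rng(8)
M = rng.normal(size=(200, 200))
A = M @ M.T + 200 * np.eye(200)       # SPD test matrix
L = blocked_cholesky(A)
print(np.allclose(L @ L.T, A))        # True
```

    Blocking turns most of the work into matrix-matrix products, which is exactly where these methods recover performance on cache-based uniprocessors.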

  18. Semiparametric Analysis of Heterogeneous Data Using Varying-Scale Generalized Linear Models

    PubMed Central

    Xie, Minge; Simpson, Douglas G.; Carroll, Raymond J.

    2009-01-01

    This article describes a class of heteroscedastic generalized linear regression models in which a subset of the regression parameters are rescaled nonparametrically, and develops efficient semiparametric inferences for the parametric components of the models. Such models provide a means to adapt for heterogeneity in the data due to varying exposures, varying levels of aggregation, and so on. The class of models considered includes generalized partially linear models and nonparametrically scaled link function models as special cases. We present an algorithm to estimate the scale function nonparametrically, and obtain asymptotic distribution theory for regression parameter estimates. In particular, we establish that the asymptotic covariance of the semiparametric estimator for the parametric part of the model achieves the semiparametric lower bound. We also describe a bootstrap-based goodness-of-scale test. We illustrate the methodology with simulations, published data, and data from collaborative research on ultrasound safety. PMID:19444331

  19. Linear relations in microbial reaction systems: a general overview of their origin, form, and use.

    PubMed

    Noorman, H J; Heijnen, J J; Ch A M Luyben, K

    1991-09-01

    In microbial reaction systems, there are a number of linear relations among net conversion rates. These can be very useful in the analysis of experimental data. This article provides a general approach for the formation and application of these linear relations. Two types of system description, one considering the biomass as a black box and the other based on metabolic pathways, are encountered. These are defined in a linear vector and matrix algebra framework. A correct a priori description can be obtained by three useful tests: the independency, consistency, and observability tests. The two descriptions yield different sets of independent relations. The black box approach provides only conservation relations. They are derived from element, electrical charge, energy, and Gibbs energy balances. The metabolic approach provides, in addition to the conservation relations, metabolic and reaction relations. These result from component, energy, and Gibbs energy balances. Thus it is more attractive to use the metabolic description than the black box approach. A number of different types of linear relations given in the literature are reviewed. They are classified according to the different categories that result from the black box or the metabolic system description. Validation of hypotheses related to metabolic pathways can be supported by experimental validation of the linear metabolic relations. However, definite proof from biochemical evidence remains indispensable. PMID:18604879
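
    Conservation relations of the kind described are left null-space vectors of the stoichiometric matrix S (any m with m^T S = 0). A minimal sketch on a toy two-reaction network:

```python
# Conservation relations from the left null space of a stoichiometric matrix.
import numpy as np
from scipy.linalg import null_space

# Toy network A -> B -> C; rows = species A, B, C; columns = reactions r1, r2.
S = np.array([[-1.0,  0.0],    # A consumed by r1
              [ 1.0, -1.0],    # B produced by r1, consumed by r2
              [ 0.0,  1.0]])   # C produced by r2

m = null_space(S.T)            # left null space of S
print(m.ravel())               # proportional to [1, 1, 1]: A+B+C is conserved
```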

  20. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. PMID:25385093

  1. Unified Einstein-Virasoro Master Equation in the General Non-Linear Sigma Model

    SciTech Connect

    Boer, J. de; Halpern, M.B.

    1996-06-05

    The Virasoro master equation (VME) describes the general affine-Virasoro construction $T = L^{ab}J_aJ_b + iD^a\partial J_a$ in the operator algebra of the WZW model, where $L^{ab}$ is the inverse inertia tensor and $D^a$ is the improvement vector. In this paper, we generalize this construction to find the general (one-loop) Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field $L^{ab}$ to the background fields of the sigma model. For a particular solution $L_G^{ab}$, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model with its canonical stress tensors. We also discuss a number of algebraic and geometrical properties of the system, including its relation to an unsolved problem in the theory of $G$-structures on manifolds with torsion.

  2. Numerical study of fourth-order linearized compact schemes for generalized NLS equations

    NASA Astrophysics Data System (ADS)

    Liao, Hong-lin; Shi, Han-sheng; Zhao, Ying

    2014-08-01

    The fourth-order compact approximation for the spatial second-derivative and several linearized approaches, including the time-lagging method of Zhang et al. (1995), the local-extrapolation technique of Chang et al. (1999) and the recent scheme of Dahlby et al. (2009), are considered in constructing fourth-order linearized compact difference (FLCD) schemes for generalized NLS equations. By applying a new time-lagging linearized approach, we propose a symmetric fourth-order linearized compact difference (SFLCD) scheme, which is shown to be more robust in long-time simulations of plane wave, breather, periodic traveling-wave and solitary wave solutions. Numerical experiments suggest that the SFLCD scheme is a little more accurate than some other FLCD schemes and the split-step compact difference scheme of Dehghan and Taleei (2010). Compared with the time-splitting pseudospectral method of Bao et al. (2003), our SFLCD method is more suitable for oscillating solutions or the problems with a rapidly varying potential.
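
    A minimal sketch of the spatial ingredient these schemes share: the standard fourth-order compact approximation of u_xx, which couples the unknown derivative values f_i through a tridiagonal system, (1/12)(f_{i-1} + 10 f_i + f_{i+1}) = (u_{i-1} - 2 u_i + u_{i+1}) / h^2. Boundary derivative values are taken as known for simplicity:

```python
# Fourth-order compact second derivative via a tridiagonal solve.
import numpy as np
from scipy.linalg import solve_banded

n = 201
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]
u = np.sin(x)
exact = -np.sin(x)

rhs = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2       # interior points 1..n-2
# move the known boundary derivative values to the right-hand side
rhs[0] -= exact[0] / 12.0
rhs[-1] -= exact[-1] / 12.0

m = n - 2
ab = np.zeros((3, m))                               # banded (tridiagonal) storage
ab[0, 1:] = 1.0 / 12.0                              # superdiagonal
ab[1, :] = 10.0 / 12.0                              # main diagonal
ab[2, :-1] = 1.0 / 12.0                             # subdiagonal
f_interior = solve_banded((1, 1), ab, rhs)

print("max error:", np.max(np.abs(f_interior - exact[1:-1])))  # ~1e-8: 4th order
```

    The linearized treatments of the nonlinearity compared in the paper differ in the time discretization; they all rest on an implicit spatial stencil of this compact form.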

  3. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, M.; Hatfield, J.S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
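
    A minimal sketch of the article's point on simulated data: a skewed covariate makes the raw response non-normal, while the residuals, the quantity the t and F tests actually assume normal, pass a normality test comfortably:

```python
# Non-normal response, normal residuals: test the right quantity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 1000
x = rng.exponential(scale=2.0, size=n)       # strongly skewed covariate
y = 1.0 + 3.0 * x + rng.normal(size=n)       # normal errors around the line

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

print("raw response p =", stats.shapiro(y[:500]).pvalue)          # tiny: y skewed
print("residuals    p =", stats.shapiro(residuals[:500]).pvalue)  # large: fine
```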

  4. A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories

    NASA Astrophysics Data System (ADS)

    Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.; Noller, Johannes

    2016-08-01

    We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and "Beyond Horndeski" theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.

  5. Sparse matrix techniques applied to modal analysis of multi-section duct liners

    NASA Technical Reports Server (NTRS)

    Arnold, W. R.

    1975-01-01

    A simplified procedure is presented for analysis of ducts with discretely nonuniform properties. The analysis uses basis functions as the generalized coordinates. The duct eigenfunctions are approximated by finite series of these functions. The emphasis is on solution of the resulting large sparse set of linear equations. Characteristics of sparse matrix algorithms are outlined and some criteria for application are established. Analogies with structural methods are used to illustrate variations which can increase efficiency in generating values for design optimization routines. The effects of basis function selection, number of eigenfunctions and identification and ordering of equations on the sparsity and solution stability are included.

  6. Generalized stochastic resonance in a linear fractional system with a random delay

    NASA Astrophysics Data System (ADS)

    Gao, Shi-Long

    2012-12-01

    The generalized stochastic resonance (GSR) phenomena in a linear fractional random-delayed system driven by a weak periodic signal and an additive noise are considered in this paper. A random delay is considered for a linear fractional Langevin equation to describe the intercellular signal transmission and material exchange processes in ion channels. By virtue of the small delay approximation and Laplace transformation, the analytical expression for the amplitude of the first-order steady state moment is obtained. The simulation results show that the amplitude curves as functions of different system parameters behave non-monotonically and exhibit typical characteristics of GSR phenomena. Furthermore, a physical explanation for all the GSR phenomena is given and the cooperative effects of random delay and the fractional memory are also discussed.

  7. The generalized Dirichlet-Neumann map for linear elliptic PDEs and its numerical implementation

    NASA Astrophysics Data System (ADS)

    Sifalakis, A. G.; Fokas, A. S.; Fulton, S. R.; Saridakis, Y. G.

    2008-09-01

    A new approach for analyzing boundary value problems for linear and for integrable nonlinear PDEs was introduced in Fokas [A unified transform method for solving linear and certain nonlinear PDEs, Proc. Roy. Soc. London Ser. A 453 (1997) 1411-1443]. For linear elliptic PDEs, an important aspect of this approach is the characterization of a generalized Dirichlet to Neumann map: given the derivative of the solution along a direction of an arbitrary angle to the boundary, the derivative of the solution perpendicularly to this direction is computed without solving on the interior of the domain. This is based on the analysis of the so-called global relation, an equation which couples known and unknown components of the derivative on the boundary and which is valid for all values of a complex parameter k. A collocation-type numerical method for solving the global relation for the Laplace equation in an arbitrary bounded convex polygon was introduced in Fulton et al. [An analytical method for linear elliptic PDEs and its numerical implementation, J. Comput. Appl. Math. 167 (2004) 465-483]. Here, by choosing a different set of the "collocation points" (values for k), we present a significant improvement of the results in Fulton et al. [An analytical method for linear elliptic PDEs and its numerical implementation, J. Comput. Appl. Math. 167 (2004) 465-483]. The new collocation points lead to well-conditioned collocation methods. Their combination with sine basis functions leads to a collocation matrix whose diagonal blocks are point diagonal matrices yielding efficient implementation of iterative methods; numerical experimentation suggests quadratic convergence. The choice of Chebyshev basis functions leads to higher order convergence, which for regular polygons appears to be exponential.

  8. Prediction of siRNA potency using sparse logistic regression.

    PubMed

    Hu, Wei; Hu, John

    2014-06-01

    RNA interference (RNAi) can modulate gene expression at post-transcriptional as well as transcriptional levels. Short interfering RNA (siRNA) serves as a trigger for the RNAi gene inhibition mechanism, and therefore is a crucial intermediate step in RNAi. There have been extensive studies to identify the sequence characteristics of potent siRNAs. One such study built a linear model using LASSO (Least Absolute Shrinkage and Selection Operator) to measure the contribution of each siRNA sequence feature. This model is simple and interpretable, but it requires a large number of nonzero weights. We have introduced a novel technique, sparse logistic regression, to build a linear model using single-position specific nucleotide compositions which has the same prediction accuracy as the linear model based on LASSO. The weights in our new model share the same general trend as those in the previous model, but have only 25 nonzero weights out of a total of 84 weights, a 54% reduction compared to the previous model. Contrary to the linear model based on LASSO, our model suggests that only a few positions are influential on the efficacy of the siRNA, namely the 5' and 3' ends and the seed region of siRNA sequences. We also employed sparse logistic regression to build a linear model using dual-position specific nucleotide compositions, a task LASSO is not able to accomplish well due to the high-dimensional nature of the problem. Our results demonstrate the superiority of sparse logistic regression as a technique for both feature selection and regression over LASSO in the context of siRNA design. PMID:21091052
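
    As an illustration of the technique (not the authors' code), the sketch below fits an L1-penalized logistic regression to one-hot position-specific nucleotide features; the 21 x 4 = 84 feature count matches the paper, but the data, potency labels, and regularization strength C are invented placeholders.

    ```python
    # Minimal sketch of sparse (L1-penalized) logistic regression for siRNA
    # potency. Data and hyperparameters are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy data: 200 siRNAs of length 21, one-hot encoded per position
    # (21 positions x 4 nucleotides = 84 features, matching the paper's count).
    n, length = 200, 21
    seqs = rng.integers(0, 4, size=(n, length))          # 0=A, 1=C, 2=G, 3=U
    X = np.zeros((n, length * 4))
    X[np.arange(n)[:, None], np.arange(length) * 4 + seqs] = 1.0
    y = rng.integers(0, 2, size=n)                        # 1 = potent, 0 = not

    # The L1 penalty drives most position-specific weights exactly to zero.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X, y)
    print("nonzero weights:", np.count_nonzero(clf.coef_), "of", clf.coef_.size)
    ```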

  9. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal, several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
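
    A minimal sketch of the bagging idea behind RGLM, assuming binary outcomes and scikit-learn; the reference implementation is the R package randomGLM, and the per-bag forward variable selection and optional interaction terms of the real method are omitted here for brevity.

    ```python
    # Bagged GLMs on bootstrap samples and random feature subspaces, with
    # predictions averaged across bags. Illustrative sketch, not randomGLM.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rglm_predict(X, y, X_new, n_bags=100, n_features=5, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        probs = np.zeros(len(X_new))
        for _ in range(n_bags):
            rows = rng.integers(0, n, size=n)            # bootstrap sample
            cols = rng.choice(p, size=min(n_features, p), replace=False)
            glm = LogisticRegression().fit(X[rows][:, cols], y[rows])
            probs += glm.predict_proba(X_new[:, cols])[:, 1]
        return probs / n_bags                            # bagged probability

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))
    y = (X[:, 0] + X[:, 1] + rng.normal(size=300) > 0).astype(int)
    print(rglm_predict(X, y, X[:5]))
    ```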

  10. Robust root clustering for linear uncertain systems using generalized Lyapunov theory

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1993-01-01

    Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Lyapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.

  11. Standard errors for EM estimates in generalized linear models with random effects.

    PubMed

    Friedl, H; Kauermann, G

    2000-09-01

    A procedure is derived for computing standard errors of EM estimates in generalized linear models with random effects. Quadrature formulas are used to approximate the integrals in the EM algorithm, where two different approaches are pursued, i.e., Gauss-Hermite quadrature in the case of Gaussian random effects and nonparametric maximum likelihood estimation for an unspecified random effect distribution. An approximation of the expected Fisher information matrix is derived from an expansion of the EM estimating equations. This allows for inferential arguments based on EM estimates, as demonstrated by an example and simulations. PMID:10985213
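
    To make the quadrature step concrete, here is a minimal sketch of approximating one cluster's marginal likelihood in a random-intercept logistic model by Gauss-Hermite quadrature; the model, data, and sigma are illustrative assumptions, not the paper's setup.

    ```python
    # Gauss-Hermite approximation of the integral over a Gaussian random
    # effect: with b = sqrt(2)*sigma*x, the N(0, sigma^2) density matches the
    # e^{-x^2} weight of hermgauss up to a 1/sqrt(pi) factor.
    import numpy as np

    def cluster_marginal_likelihood(y, eta, sigma, n_nodes=20):
        """Integral of prod_j p(y_j | b) over b ~ N(0, sigma^2)."""
        x, w = np.polynomial.hermite.hermgauss(n_nodes)  # nodes for e^{-x^2}
        b = np.sqrt(2.0) * sigma * x                     # change of variables
        # conditional likelihood at each node: logistic with offset eta
        p = 1.0 / (1.0 + np.exp(-(eta[:, None] + b[None, :])))
        lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
        return (w * lik).sum() / np.sqrt(np.pi)

    y = np.array([1, 0, 1, 1])              # binary responses in one cluster
    eta = np.array([0.2, -0.1, 0.4, 0.0])   # fixed-effect linear predictor
    print(cluster_marginal_likelihood(y, eta, sigma=1.0))
    ```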

  12. Flexible analysis of digital PCR experiments using generalized linear mixed models.

    PubMed

    Vynck, Matthijs; Vandesompele, Jo; Nijs, Nele; Menten, Björn; De Ganck, Ariane; Thas, Olivier

    2016-09-01

    The use of digital PCR for quantification of nucleic acids is rapidly growing. A major drawback remains the lack of flexible data analysis tools. Published analysis approaches are either tailored to specific problem settings or fail to take into account sources of variability. We propose the generalized linear mixed models framework as a flexible tool for analyzing a wide range of experiments. We also introduce a method for estimating reference gene stability to improve accuracy and precision of copy number and relative expression estimates. We demonstrate the usefulness of the methodology on a complex experimental setup. PMID:27551671

  13. Location-scale cumulative odds models for ordinal data: a generalized non-linear model approach.

    PubMed

    Cox, C

    1995-06-15

    Proportional odds regression models for multinomial probabilities based on ordered categories have been generalized in two somewhat different directions. Models having scale as well as location parameters for adjustment of boundaries (on an unobservable, underlying continuum) between categories have been employed in the context of ROC analysis. Partial proportional odds models, having different regression adjustments for different multinomial categories, have also been proposed. This paper considers a synthesis and further generalization of these two families. With use of a number of examples, I discuss and illustrate properties of this extended family of models. Emphasis is on the computation of maximum likelihood estimates of parameters, asymptotic standard deviations, and goodness-of-fit statistics with use of non-linear regression programs in standard statistical software such as SAS. PMID:7667560

  14. Spatial temporal disaggregation of daily rainfall from a generalized linear model

    NASA Astrophysics Data System (ADS)

    Segond, M.-L.; Onof, C.; Wheater, H. S.

    2006-12-01

    This paper describes a methodology for continuous simulation of spatially-distributed hourly rainfall, based on observed data from a daily raingauge network. Generalized linear models (GLMs), which can represent the spatial and temporal non-stationarities of multi-site daily rainfall (Chandler, R.E., Wheater, H.S., 2002. Analysis of rainfall variability using generalised linear models: a case study from the west of Ireland. Water Resources Research, 38 (10), 1192. doi:10.1029/2001WR000906), are combined with a single-site disaggregation model based on Poisson cluster processes (Koutsoyiannis, D., Onof, C., 2001. Rainfall disaggregation using adjusting procedures on a Poisson cluster model. Journal of Hydrology 246, 109-122). The resulting sub-daily temporal profile is then applied linearly to all sites over the catchment to reproduce the spatially-varying daily totals. The method is tested for the River Lee catchment, UK, a tributary of the Thames covering an area of 1400 km². Twenty simulations of 12 years of hourly rainfall are generated at 20 sites and compared with the historical series. The proposed model preserves most standard statistics but has some limitations in the representation of extreme rainfall and the correlation structure. The method can be extended to sites within the modelled region not used in the model calibration.

  15. Digit Span is (mostly) related linearly to general intelligence: Every extra bit of span counts.

    PubMed

    Gignac, Gilles E; Weiss, Lawrence G

    2015-12-01

    Historically, Digit Span has been regarded as a relatively poor indicator of general intellectual functioning (g). In fact, Wechsler (1958) contended that beyond an average level of Digit Span performance, there was little benefit to possessing a greater memory span. Although Wechsler's position does not appear to have ever been tested empirically, it does appear to have become clinical lore. Consequently, the purpose of this investigation was to test Wechsler's contention on the Wechsler Adult Intelligence Scale-Fourth Edition normative sample (N = 1,800; ages: 16 - 69). Based on linear and nonlinear contrast analyses of means, as well as linear and nonlinear bifactor model analyses, all 3 Digit Span indicators (LDSF, LDSB, and LDSS) were found to exhibit primarily linear associations with FSIQ/g. Thus, the commonly held position that Digit Span performance beyond an average level is not indicative of greater intellectual functioning was not supported. The results are discussed in light of the increasing evidence across multiple domains that memory span plays an important role in intellectual functioning. PMID:25774642

  16. Thermodynamic bounds and general properties of optimal efficiency and power in linear responses.

    PubMed

    Jiang, Jian-Hua

    2014-10-01

    We study the optimal exergy efficiency and power for thermodynamic systems with an Onsager-type "current-force" relationship describing the linear response to external influences. We derive, in analytic forms, the maximum efficiency and optimal efficiency for maximum power for a thermodynamic machine described by an N×N symmetric Onsager matrix with arbitrary integer N. The figure of merit is expressed in terms of the largest eigenvalue of the "coupling matrix" which is solely determined by the Onsager matrix. Some simple but general relationships between the power and efficiency at the conditions for (i) maximum efficiency and (ii) optimal efficiency for maximum power are obtained. We show how the second law of thermodynamics bounds the optimal efficiency and the Onsager matrix and relate those bounds together. The maximum power theorem (Jacobi's Law) is generalized to all thermodynamic machines with a symmetric Onsager matrix in the linear-response regime. We also discuss systems with an asymmetric Onsager matrix (such as systems under magnetic field) for a particular situation, and we show that the reversible limit of efficiency can be reached at finite output power. Cooperative effects are found to improve the figure of merit significantly in systems with multiply cross-correlated responses. Application to example systems demonstrates that the theory is helpful in guiding the search for high performance materials and structures in energy research. PMID:25375457

  17. Sparse Regulatory Networks

    PubMed Central

    James, Gareth M.; Sabatti, Chiara; Zhou, Nengfeng; Zhu, Ji

    2011-01-01

    In many organisms the expression levels of each gene are controlled by the activation levels of known “Transcription Factors” (TF). A problem of considerable interest is that of estimating the “Transcription Regulation Networks” (TRN) relating the TFs and genes. While the expression levels of genes can be observed, the activation levels of the corresponding TFs are usually unknown, greatly increasing the difficulty of the problem. Based on previous experimental work, it is often the case that partial information about the TRN is available. For example, certain TFs may be known to regulate a given gene or in other cases a connection may be predicted with a certain probability. In general, the biology of the problem indicates there will be very few connections between TFs and genes. Several methods have been proposed for estimating TRNs. However, they all suffer from problems such as unrealistic assumptions about prior knowledge of the network structure or computational limitations. We propose a new approach that can directly utilize prior information about the network structure in conjunction with observed gene expression data to estimate the TRN. Our approach uses L1 penalties on the network to ensure a sparse structure. This has the advantage of being computationally efficient as well as making many fewer assumptions about the network structure. We use our methodology to construct the TRN for E. coli and show that the estimate is biologically sensible and compares favorably with previous estimates. PMID:21625366
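
    A hedged sketch of the core computation: estimating one gene's TF connections by L1-penalized regression, with prior connection probabilities folded in by rescaling columns so that probable edges are penalized less. The rescaling device and all numbers are ours, not necessarily the authors' exact formulation.

    ```python
    # Sparse (L1) estimation of one gene's regulators with prior-weighted
    # penalties. Synthetic TF activities stand in for the unobserved ones.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_samples, n_tfs = 100, 30
    A = rng.normal(size=(n_samples, n_tfs))     # TF activities (toy data)
    gene = 2.0 * A[:, 3] - 1.5 * A[:, 7] + 0.1 * rng.normal(size=n_samples)

    prior = np.full(n_tfs, 0.2)                 # prior connection probabilities
    prior[[3, 7]] = 0.9                         # TFs believed to regulate the gene

    # Scaling column j by prior_j is equivalent to an L1 penalty of 1/prior_j
    # on the original coefficient, so probable edges are penalized less.
    lasso = Lasso(alpha=0.05).fit(A * prior, gene)
    coef = lasso.coef_ * prior                  # map back to the original scale
    print("estimated regulators:", np.nonzero(coef)[0])
    ```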

  18. Regionalization of Parameters of the Continuous Rainfall-Runoff model Based on Bayesian Generalized Linear Model

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Jeong; Kim, Ki-Young; Shin, Dong-Hoon; Kwon, Hyun-Han

    2015-04-01

    It has been widely acknowledged that the appropriate simulation of natural streamflow at ungauged sites is one of the fundamental challenges for the hydrology community. In particular, the key to reliable runoff simulation in ungauged basins is a reliable rainfall-runoff model and parameter estimation. In general, parameter estimation in rainfall-runoff models is a complex issue due to insufficient hydrologic data. This study aims to regionalize the parameters of a continuous rainfall-runoff model in conjunction with Bayesian statistical techniques to facilitate uncertainty analysis. First, this study uses a Bayesian Markov chain Monte Carlo scheme for the Sacramento rainfall-runoff model, which has been widely used around the world. The Sacramento model is calibrated against daily runoff observations, and thirteen parameters of the model are optimized while posterior distributions for each parameter are derived. Second, we apply a Bayesian generalized linear regression model to the set of parameters together with basin characteristics (e.g., area and slope) to obtain a functional relationship between pairs of variables. The proposed model was validated in two gauged watersheds using efficiency criteria such as the Nash-Sutcliffe efficiency, coefficient of efficiency, index of agreement and coefficient of correlation. Future work will focus on uncertainty analysis to fully incorporate propagation of the uncertainty into the regionalization framework. Keywords: ungauged, parameter, Sacramento, generalized linear model, regionalization. Acknowledgement: This research was supported by a Grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government and the Korea Agency for Infrastructure Technology Advancement (KAIA).
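
    For concreteness, a generic random-walk Metropolis sketch of the calibration step; the one-parameter toy model and Gaussian error assumption stand in for the 13-parameter Sacramento model actually calibrated in the study.

    ```python
    # Random-walk Metropolis sampling of a posterior for a toy "rainfall-
    # runoff" parameter given observed runoff. Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    obs = 3.0 + 0.5 * rng.normal(size=50)     # toy daily runoff observations

    def log_post(theta):
        if not 0.0 < theta < 10.0:            # uniform prior on (0, 10)
            return -np.inf
        resid = obs - theta                   # toy model: constant runoff theta
        return -0.5 * np.sum(resid**2) / 0.5**2

    samples, theta = [], 1.0
    for _ in range(5000):
        prop = theta + 0.1 * rng.normal()     # symmetric proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                      # accept
        samples.append(theta)
    print("posterior mean:", np.mean(samples[1000:]))   # discard burn-in
    ```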

  19. Lattice structure for generalized-support multidimensional linear phase perfect reconstruction filter bank.

    PubMed

    Gao, Xieping; Li, Bodong; Xiao, Fen

    2013-12-01

    Multidimensional linear phase perfect reconstruction filter banks (MDLPPRFBs) can be designed and implemented via lattice structure. The lattice structure for the MDLPPRFB with filter support N(MΞ) has been published by Muramatsu, where M is the decimation matrix, Ξ is a positive integer diagonal matrix, and N(N) denotes the set of integer vectors in the fundamental parallelepiped of the matrix N. Obviously, if Ξ is chosen to be any positive diagonal matrix instead of only a positive integer one, the corresponding lattice structure provides more choices of filter banks, offering a better trade-off between filter support and filter performance. We call the resulting filter bank a generalized-support MDLPPRFB (GSMDLPPRFB). The lattice structure for the GSMDLPPRFB, however, cannot be designed by simply generalizing the process that Muramatsu employed. Furthermore, the related theories needed to assist the design also differ from those used by Muramatsu. Such issues are addressed in this paper. To guide the design of the GSMDLPPRFB, necessary and sufficient conditions are established for a generalized-support multidimensional filter bank to be linear-phase. To determine the cases in which a GSMDLPPRFB exists, necessary conditions for its existence are proposed, relating it to the filter support and symmetry polarity (i.e., the number of symmetric filters ns and antisymmetric filters na). Based on a process (different from the one Muramatsu used) that combines several polyphase matrices to construct the starting block, one of the core building blocks of the lattice structure, the lattice structure for the GSMDLPPRFB is developed and shown to be minimal. Additionally, the result in this paper includes Muramatsu's as a special case. PMID:23974625

  20. Dose-shaping using targeted sparse optimization

    SciTech Connect

    Sayre, George A.; Ruan, Dan

    2013-07-15

    Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of the method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving of large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot^sparse
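
    A hedged numpy sketch of the three robust penalties named above; the breakpoints and weights are illustrative placeholders, not clinical values.

    ```python
    # The three robust cost functions of the sparse dose-shaping objective,
    # written for a 1-D toy dose array. Parameters are illustrative only.
    import numpy as np

    def asymmetric_linear_well(d, lo, hi, w_under=2.0, w_over=1.0):
        """PTV term: linear penalties for underdose below lo and overdose
        above hi; doses inside the relaxation band [lo, hi] are free."""
        return w_under * np.maximum(lo - d, 0) + w_over * np.maximum(d - hi, 0)

    def two_piece_linear(d, knee, w_low=0.1, w_high=1.0):
        """OAR term: mild slope below the knee, heavy slope above it."""
        return w_low * np.minimum(d, knee) + w_high * np.maximum(d - knee, 0)

    def total_variation(d):
        """L1 norm of the first-order dose gradient (1-D stand-in)."""
        return np.abs(np.diff(d)).sum()

    dose = np.array([58.0, 60.0, 63.0, 61.0, 59.5])
    print(asymmetric_linear_well(dose, lo=59, hi=62).sum()
          + two_piece_linear(dose, knee=61).sum() + total_variation(dose))
    ```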

  1. The heritability of general cognitive ability increases linearly from childhood to young adulthood.

    PubMed

    Haworth, C M A; Wright, M J; Luciano, M; Martin, N G; de Geus, E J C; van Beijsterveldt, C E M; Bartels, M; Posthuma, D; Boomsma, D I; Davis, O S P; Kovas, Y; Corley, R P; Defries, J C; Hewitt, J K; Olson, R K; Rhea, S-A; Wadsworth, S J; Iacono, W G; McGue, M; Thompson, L A; Hart, S A; Petrill, S A; Lubinski, D; Plomin, R

    2010-11-01

    Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined. In addition to its far-reaching implications for neuroscience and molecular genetics, this finding suggests new ways of thinking about the interface between nature and nurture during the school years. Why, despite life's 'slings and arrows of outrageous fortune', do genetically driven differences increasingly account for differences in general cognitive ability? We suggest that the answer lies with genotype-environment correlation: as children grow up, they increasingly select, modify and even create their own experiences in part based on their genetic propensities. PMID:19488046

  2. A General Linear Relaxometry Model of R1 Using Imaging Data

    PubMed Central

    Callaghan, Martina F; Helms, Gunther; Lutti, Antoine; Mohammadi, Siawoosh; Weiskopf, Nikolaus

    2015-01-01

    Purpose The longitudinal relaxation rate (R1) measured in vivo depends on the local microstructural properties of the tissue, such as macromolecular, iron, and water content. Here, we use whole brain multiparametric in vivo data and a general linear relaxometry model to describe the dependence of R1 on these components. We explore a) the validity of having a single fixed set of model coefficients for the whole brain and b) the stability of the model coefficients in a large cohort. Methods Maps of magnetization transfer (MT) and effective transverse relaxation rate (R2*) were used as surrogates for macromolecular and iron content, respectively. Spatial variations in these parameters reflected variations in underlying tissue microstructure. A linear model was applied to the whole brain, including gray/white matter and deep brain structures, to determine the global model coefficients. Synthetic R1 values were then calculated using these coefficients and compared with the measured R1 maps. Results The model's validity was demonstrated by correspondence between the synthetic and measured R1 values and by high stability of the model coefficients across a large cohort. Conclusion A single set of global coefficients can be used to relate R1, MT, and R2* across the whole brain. Our population study demonstrates the robustness and stability of the model. Magn Reson Med 73:1309-1314, 2015. PMID:24700606
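
    As a sketch of the model-fitting step, the snippet below estimates a single global set of coefficients for R1 = β0 + β_MT·MT + β_R2*·R2* by ordinary least squares over synthetic voxels; the coefficient values and noise level are invented for illustration.

    ```python
    # Ordinary least squares fit of the general linear relaxometry model over
    # pooled voxels; synthetic data stand in for the measured maps.
    import numpy as np

    rng = np.random.default_rng(0)
    n_vox = 10000
    MT = rng.uniform(0.5, 2.0, n_vox)           # magnetization transfer (a.u.)
    R2s = rng.uniform(10.0, 40.0, n_vox)        # effective R2* (1/s)
    R1 = 0.3 + 0.35 * MT + 0.006 * R2s + 0.01 * rng.normal(size=n_vox)

    X = np.column_stack([np.ones(n_vox), MT, R2s])
    beta, *_ = np.linalg.lstsq(X, R1, rcond=None)
    synthetic_R1 = X @ beta                     # compare with measured R1 maps
    print("global coefficients:", beta)
    ```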

  3. Linear stability of a generalized multi-anticipative car following model with time delays

    NASA Astrophysics Data System (ADS)

    Ngoduy, D.

    2015-05-01

    In traffic flow, multi-anticipative driving behavior describes the reaction of a vehicle to the driving behavior of many vehicles in front, whereas the time delay is a physiological parameter reflecting the period of time between perceiving a stimulus from leading vehicles and performing a relevant action such as acceleration or deceleration. A lot of effort has been undertaken to understand the effects of either multi-anticipative driving behavior or time delays on traffic flow dynamics. This paper is a first attempt to analytically investigate the dynamics of a generalized class of car-following models with multi-anticipative driving behavior and different time delays associated with such multi-anticipations. To this end, the paper derives the (long-wavelength) linear stability condition of such a car-following model and studies how the combination of different choices of multi-anticipations and time delays affects the instabilities of traffic flow with respect to a small perturbation. It is found that the effects of delays and multi-anticipations are model-dependent; that is, the destabilization effect of delays is suppressed by the stabilization effect of multi-anticipations. Moreover, the linear stability condition of traffic flow is less sensitive to the weight factor reflecting the distribution of the driver's sensing of the relative gaps of leading vehicles than to the weight factor for the relative speed of those vehicles.

  4. Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models

    PubMed Central

    Elliott, Michael R.

    2012-01-01

    In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create “data driven” weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical. PMID:23275683
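
    A minimal sketch of plain weight trimming for a weighted mean, assuming known inclusion probabilities; the fixed 95th-percentile cap is an ad hoc illustrative choice, precisely the kind of choice the article replaces with data-driven Bayesian model averaging.

    ```python
    # Inverse-inclusion-probability weights capped at a maximum value,
    # trading variance for bias. Toy data, illustrative cap.
    import numpy as np

    rng = np.random.default_rng(0)
    p_incl = rng.uniform(0.02, 0.9, size=500)    # unequal inclusion probabilities
    y = rng.normal(loc=10.0, size=500)           # sampled measurements

    w = 1.0 / p_incl                             # fully weighted (unbiased, noisy)
    w_trim = np.minimum(w, np.quantile(w, 0.95)) # trimmed (biased, stabler)

    for name, wt in [("full", w), ("trimmed", w_trim)]:
        print(name, np.sum(wt * y) / np.sum(wt)) # weighted (Hajek) mean
    ```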

  5. Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem

    NASA Technical Reports Server (NTRS)

    Lu, Huei-Iin; Robertson, Franklin R.

    1999-01-01

    A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can be alternatively solved by the iterative method. Differences in the solutions of an UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.

  6. Generalized Linear Models for Identifying Predictors of the Evolutionary Diffusion of Viruses

    PubMed Central

    Beard, Rachel; Magee, Daniel; Suchard, Marc A.; Lemey, Philippe; Scotch, Matthew

    2014-01-01

    Bioinformatics and phylogeography models use viral sequence data to analyze spread of epidemics and pandemics. However, few of these models have included analytical methods for testing whether certain predictors such as population density, rates of disease migration, and climate are drivers of spatial spread. Understanding the specific factors that drive spatial diffusion of viruses is critical for targeting public health interventions and curbing spread. In this paper we describe the application and evaluation of a model that integrates demographic and environmental predictors with molecular sequence data. The approach parameterizes evolutionary spread of RNA viruses as a generalized linear model (GLM) within a Bayesian inference framework using Markov chain Monte Carlo (MCMC). We evaluate this approach by reconstructing the spread of H5N1 in Egypt while assessing the impact of individual predictors on evolutionary diffusion of the virus. PMID:25717395

  7. Generalized linear joint PP-PS inversion based on two constraints

    NASA Astrophysics Data System (ADS)

    Fang, Yuan; Zhang, Feng-Qi; Wang, Yan-Chun

    2016-03-01

    Conventional joint PP-PS inversion is based on approximations of the Zoeppritz equations and assumes constant VP/VS; therefore, the inversion precision and stability cannot satisfy current exploration requirements. We propose a joint PP-PS inversion method based on the exact Zoeppritz equations that combines Bayesian statistics and generalized linear inversion. A forward model based on the exact Zoeppritz equations is built to minimize the error of the approximations in the large-angle data, the prior distribution of the model parameters is added as a regularization item to decrease the ill-posed nature of the inversion, low-frequency constraints are introduced to stabilize the low-frequency data and improve robustness, and a fast algorithm is used to solve the objective function while minimizing the computational load. The proposed method has superior antinoising properties and well reproduces real data.

  8. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    PubMed Central

    Taylor, Douglas J.; Muller, Keith E.

    2013-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting uncertainty associated with such point estimates. Previous authors studied an asymptotically unbiased method of obtaining confidence intervals for noncentrality and power of the general linear univariate model in this setting. We provide exact confidence intervals for noncentrality, power, and sample size. Such confidence intervals, particularly one-sided intervals, help in planning a future study and in evaluating existing studies. PMID:24039272

  9. Sparse representation for vehicle recognition

    NASA Astrophysics Data System (ADS)

    Monnig, Nathan D.; Sakla, Wesam

    2014-06-01

    The Sparse Representation for Classification (SRC) algorithm has been demonstrated to be a state-of-the-art algorithm for facial recognition applications. Wright et al. demonstrate that under certain conditions, the SRC algorithm classification performance is agnostic to choice of linear feature space and highly resilient to image corruption. In this work, we examined the SRC algorithm performance on the vehicle recognition application, using images from the semi-synthetic vehicle database generated by the Air Force Research Laboratory. To represent modern operating conditions, vehicle images were corrupted with noise, blurring, and occlusion, with representation of varying pose and lighting conditions. Experiments suggest that linear feature space selection is important, particularly in the cases involving corrupted images. Overall, the SRC algorithm consistently outperforms a standard k nearest neighbor classifier on the vehicle recognition task.
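
    A hedged sketch of SRC on synthetic data: code the test sample over the training dictionary with an L1 solver, then classify by class-wise reconstruction residual. We use scikit-learn's Lasso for convenience; SRC is usually stated as L1 minimization under an equality constraint.

    ```python
    # Sparse representation classification: sparse-code y against all training
    # atoms, keep each class's coefficients, pick the smallest residual.
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(D, labels, y, alpha=0.01):
        D = D / np.linalg.norm(D, axis=0)            # unit-norm training atoms
        x = Lasso(alpha=alpha, fit_intercept=False,
                  max_iter=10000).fit(D, y).coef_    # sparse code for y
        residuals = {}
        for c in np.unique(labels):
            xc = np.where(labels == c, x, 0.0)       # keep class-c coefficients
            residuals[c] = np.linalg.norm(y - D @ xc)
        return min(residuals, key=residuals.get)

    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 40))                    # 40 training images, 64-d features
    labels = np.repeat([0, 1], 20)
    D[:, labels == 1] += 1.0                         # crude class separation
    y = rng.normal(size=64) + 1.0                    # test sample near class 1
    print("predicted class:", src_classify(D, labels, y))
    ```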

  10. Sparse matrix-vector multiplication on a reconfigurable supercomputer

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M; Poole, Steve

    2008-01-01

    Double precision floating point Sparse Matrix-Vector Multiplication (SMVM) is a critical computational kernel used in iterative solvers for systems of sparse linear equations. The poor data locality exhibited by sparse matrices along with the high memory bandwidth requirements of SMVM result in poor performance on general purpose processors. Field Programmable Gate Arrays (FPGAs) offer a possible alternative with their customizable and application-targeted memory sub-system and processing elements. In this work we investigate two separate implementations of the SMVM on an SRC-6 MAPStation workstation. The first implementation investigates the peak performance capability, while the second implementation balances the amount of instantiated logic with the available sustained bandwidth of the FPGA subsystem. Both implementations yield the same sustained performance with the second producing a much more efficient solution. The metrics of processor and application balance are introduced to help provide some insight into the efficiencies of the FPGA and CPU based solutions explicitly showing the tight coupling of the available bandwidth to peak floating point performance. Due to the FPGA's ability to balance the amount of implemented logic to the available memory bandwidth, it can provide a much more efficient solution. Finally, making use of the lessons learned implementing the SMVM, we present a fully implemented nonpreconditioned Conjugate Gradient algorithm utilizing the second SMVM design.
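
    For reference, a minimal CSR sparse matrix-vector multiply in plain Python, showing the indirect x[col_idx[j]] gathers that make SMVM memory-bandwidth-bound on general-purpose processors.

    ```python
    # Compressed sparse row (CSR) y = A x: data holds nonzeros, col_idx their
    # columns, row_ptr the start of each row's slice in data.
    import numpy as np

    def csr_matvec(data, col_idx, row_ptr, x):
        y = np.zeros(len(row_ptr) - 1)
        for i in range(len(y)):                      # one dot product per row
            for j in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += data[j] * x[col_idx[j]]      # gather: irregular access
        return y

    # 3x3 example: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
    data    = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
    col_idx = np.array([0, 2, 1, 0, 2])
    row_ptr = np.array([0, 2, 3, 5])
    x = np.array([1.0, 2.0, 3.0])
    print(csr_matvec(data, col_idx, row_ptr, x))     # expect [7. 4. 18.]
    ```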

  11. Extracting H I cosmological signal with generalized needlet internal linear combination

    NASA Astrophysics Data System (ADS)

    Olivari, L. C.; Remazeilles, M.; Dickinson, C.

    2016-03-01

    H I intensity mapping is a new observational technique to map fluctuations in the large-scale structure of matter using the 21 cm emission line of atomic hydrogen (H I). Sensitive H I intensity mapping experiments have the potential to detect Baryon Acoustic Oscillations at low redshifts (z ≲ 1) in order to constrain the properties of dark energy. Observations of the H I signal will be contaminated by instrumental noise and, more significantly, by astrophysical foregrounds, such as Galactic synchrotron emission, which is at least four orders of magnitude brighter than the H I signal. Foreground cleaning is recognized as one of the key challenges for future radio astronomy surveys. We study the ability of the Generalized Needlet Internal Linear Combination (GNILC) method to subtract radio foregrounds and to recover the cosmological H I signal for a general H I intensity mapping experiment. The GNILC method is a new technique that uses both frequency and spatial information to separate the components of the observed data. Our results show that the method is robust to the complexity of the foregrounds. For simulated radio observations including H I emission, Galactic synchrotron, Galactic free-free, radio sources, and 0.05 mK thermal noise, we find that the GNILC method can reconstruct the H I power spectrum for multipoles 30 < ℓ < 150 with 6 per cent accuracy on 50 per cent of the sky for a redshift z ˜ 0.25.

  12. Unification of the general non-linear sigma model and the Virasoro master equation

    SciTech Connect

    Boer, J. de; Halpern, M.B. |

    1997-06-01

    The Virasoro master equation describes a large set of conformal field theories known as the affine-Virasoro constructions, in the operator algebra (affine Lie algebra) of the WZW model, while the Einstein equations of the general non-linear sigma model describe another large set of conformal field theories. This talk summarizes recent work which unifies these two sets of conformal field theories, together with a presumably large class of new conformal field theories. The basic idea is to consider spin-two operators of the form L_ij ∂x^i ∂x^j in the background of a general sigma model. The requirement that these operators satisfy the Virasoro algebra leads to a set of equations called the unified Einstein-Virasoro master equation, in which the spin-two spacetime field L_ij couples to the usual spacetime fields of the sigma model. The one-loop form of this unified system is presented, and some of its algebraic and geometric properties are discussed.

  13. Generalized linear transport theory in dilute neutral gases and dispersion relation of sound waves.

    PubMed

    Bendib, A; Bendib-Kalache, K; Gombert, M M; Imadouchene, N

    2006-10-01

    The transport processes in dilute neutral gases are studied by using the kinetic equation with a collision relaxation model that meets all conservation requirements. The kinetic equation is solved keeping the whole anisotropic part of the distribution function with the use of continued fractions. The conservation laws of the collision operator are taken into account with projection operator techniques. The generalized heat flux and stress tensor are calculated in the linear approximation, as functions of the lower moments, i.e., the density, the flow velocity and the temperature. The results obtained are valid for arbitrary collision frequency ν with respect to kv_t and the characteristic frequency ω, where k^(-1) is the characteristic length scale of the system and v_t is the thermal velocity. The transport coefficients constitute accurate closure relations for the generalized hydrodynamic equations. An application to the dispersion and the attenuation of sound waves in the whole collisionality regime is presented. The results obtained are in very good agreement with the experimental data. PMID:17155048

  14. Grassmannian sparse representations

    NASA Astrophysics Data System (ADS)

    Azary, Sherif; Savakis, Andreas

    2015-05-01

    We present Grassmannian sparse representations (GSR), a sparse representation Grassmann learning framework for efficient classification. Sparse representation classification offers a powerful approach for recognition in a variety of contexts. However, a major drawback of sparse representation methods is their computational performance and memory utilization for high-dimensional data. A Grassmann manifold is a space that promotes smooth surfaces where points represent subspaces and the relationship between points is defined by the mapping of an orthogonal matrix. Grassmann manifolds are well suited for computer vision problems because they promote high between-class discrimination and within-class clustering, while offering computational advantages by mapping each subspace onto a single point. The GSR framework combines Grassmannian kernels and sparse representations, including regularized least squares and least angle regression, to improve high accuracy recognition while overcoming the drawbacks of performance and dependencies on high dimensional data distributions. The effectiveness of GSR is demonstrated on computationally intensive multiview action sequences, three-dimensional action sequences, and face recognition datasets.

  15. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.

  16. Native ultrametricity of sparse random ensembles

    NASA Astrophysics Data System (ADS)

    Avetisov, V.; Krapivsky, P. L.; Nechaev, S.

    2016-01-01

    We investigate the eigenvalue density in ensembles of large sparse Bernoulli random matrices. Analyzing in detail the spectral density of ensembles of linear subgraphs, we discuss its ultrametric nature and show that near the spectrum boundary, the tails of the spectral density exhibit a Lifshitz singularity typical for Anderson localization. We pay attention to an intriguing connection of the spectral density to the Dedekind η-function. We conjecture that ultrametricity emerges in rare-event statistics and is inherent to generic complex sparse systems.

  17. Dictionary learning method for joint sparse representation-based image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. JSR indicates that different signals from the various sensors of the same scene form an ensemble: these signals have a common sparse component, and each individual signal owns an innovation sparse component. JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), we propose a novel dictionary learning method for JSR (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure once with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm, which is often used in previous JSR-based fusion algorithms. To capture image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries. MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.
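
    A hedged sketch of the classical single-dictionary MOD iteration that MODJSR builds on: alternate OMP sparse coding with the closed-form update D = X A⁺ (A⁺ the pseudoinverse of the coefficient matrix); the JSR structure itself is not reproduced here.

    ```python
    # Method of optimal directions (MOD): alternate sparse coding and a
    # least-squares dictionary update. Synthetic patches, illustrative sizes.
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    rng = np.random.default_rng(0)
    X = rng.normal(size=(16, 200))                   # 200 training signals
    D = rng.normal(size=(16, 32))                    # 32 initial atoms
    D /= np.linalg.norm(D, axis=0)

    for _ in range(10):
        A = orthogonal_mp(D, X, n_nonzero_coefs=3)   # sparse coding step (OMP)
        D = X @ np.linalg.pinv(A)                    # MOD update: D = X A^+
        D /= np.linalg.norm(D, axis=0) + 1e-12       # renormalize atoms

    A = orthogonal_mp(D, X, n_nonzero_coefs=3)
    print("residual:", np.linalg.norm(X - D @ A))
    ```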

  18. Characterization of linear diattenuators and retarders using a two-modulator generalized ellipsometer (2-MGE)

    NASA Astrophysics Data System (ADS)

    Jellison, Gerald E., Jr.; Griffiths, C. Owen; Holcomb, David E.; Rouleau, Christopher M.

    2002-09-01

    The two-modulator generalized ellipsometer (2-MGE) is a spectroscopic polarization-sensitive optical instrument that is sensitive to both the standard ellipsometric parameters of isotropic samples and the cross-polarization terms arising from anisotropic samples. In reflection mode, the 2-MGE has been used to measure the complex dielectric functions of several uniaxial crystals, including TiO2, ZnO, and BiI3. The 2-MGE can also be used in transmission mode, in which the complete Mueller matrix of a sample can be determined (using 4-zone measurements). If the sample is a linear diattenuator and retarder, then only a single zone is required to determine the sample retardation, diattenuation, principal axis direction, and depolarization. These measurements have been performed in two different modes: 1) spectroscopic, where the current wavelength limits are 260 to 850 nm, and 2) spatially resolved (current resolution ~30-50 microns) at a single wavelength. The latter mode results in retardation, linear diattenuation, and principal axis direction "maps" of the sample. Two examples are examined in this paper. First, a simple Polaroid film polarizer is measured, where it is seen that the device behaves nearly ideally in its design wavelength range (visible) but acts more as a retarder in the infrared. Second, congruently grown LiNbO3 is examined under bias. These results show that there are significant variations in the electric field-Pockels coefficient product within the material. Spectroscopic measurements are used to determine the dispersion of the r22 Pockels coefficient.

  19. LOFAR sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.

    2015-03-01

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) with simulated and real LOFAR data. Results: We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives a correct photometry on high dynamic and wide-field images and improved realistic structures of extended sources (of simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.

  20. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033

  1. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.

  2. Fast Solution in Sparse LDA for Binary Classification

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback

    2010-01-01

    An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add/delete to/from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change versus no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes as compared to days or weeks as taken by the prior art. Sparse LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques. The present algorithm exploits this analytic
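
    A hedged sketch of the binary-class shortcut, assuming synthetic data: for two classes the leading generalized eigenvalue on a feature subset S reduces to the closed form λ(S) = d_Sᵀ Sw_S⁻¹ d_S (d the class-mean difference, Sw the within-class scatter), so a greedy forward search can score subsets without any iterative eigensolver.

    ```python
    # Greedy forward feature selection for 2-class sparse LDA using the
    # analytic Fisher score; no generalized eigensolver needed.
    import numpy as np

    rng = np.random.default_rng(0)
    X0 = rng.normal(size=(100, 12))                  # class 0 samples
    X1 = rng.normal(size=(100, 12)) + 0.5            # class 1 samples (shifted)
    d = X1.mean(0) - X0.mean(0)                      # class-mean difference
    Sw = np.cov(X0.T) + np.cov(X1.T)                 # within-class scatter

    def score(S):
        S = list(S)
        return d[S] @ np.linalg.solve(Sw[np.ix_(S, S)], d[S])

    selected, remaining = [], set(range(12))
    for _ in range(4):                               # pick 4 features greedily
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    print("selected features:", selected, "score:", round(score(selected), 3))
    ```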

  3. Efficient analysis of Q-level nested hierarchical general linear models given ignorable missing data.

    PubMed

    Shin, Yongyun; Raudenbush, Stephen W

    2013-01-01

    This article extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth-step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children. PMID:24077621

  4. Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation

    NASA Technical Reports Server (NTRS)

    Moore, T. E.; Khazanov, G. V.

    2011-01-01

    Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that supersonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].

  5. The overlooked potential of Generalized Linear Models in astronomy-II: Gamma regression and photometric redshifts

    NASA Astrophysics Data System (ADS)

    Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.

    2015-04-01

    Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
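
    For readers who want to try this, a gamma GLM with a log link is a few lines in standard statistical software. The sketch below uses Python's statsmodels on synthetic magnitudes; the column layout and numbers are assumptions for illustration, not the paper's catalogues or its published package.

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical setup: five band magnitudes as predictors of redshift z.
      rng = np.random.default_rng(1)
      mags = rng.normal(20.0, 1.0, size=(500, 5))
      z = np.exp(0.05 * mags.sum(axis=1) - 4.8) * rng.gamma(50, 1 / 50, 500)

      X = sm.add_constant(mags)  # intercept plus band magnitudes
      model = sm.GLM(z, X, family=sm.families.Gamma(link=sm.families.links.Log()))
      result = model.fit()
      z_hat = result.predict(X)  # strictly positive fitted redshifts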

  6. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  7. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on the generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese monitoring air quality station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance for modelled results against the measured data was achieved for the model with values of air temperature above 25°C compared with the model considering all ranges of air temperatures and with the model considering only temperature below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and results found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when these data are not available by measurements from air quality monitoring stations or other acquisition means. PMID:26839052
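
    A minimal sketch of such a model, assuming hypothetical column names and synthetic records in place of the Portuguese monitoring data: a Poisson family with its canonical log link, fitted separately below and above the 25°C threshold as in the paper.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      df = pd.DataFrame({
          "pm10": rng.poisson(30, 500),          # dependent variable
          "no2":  rng.gamma(9.0, 2.0, 500),      # gaseous pollutant
          "temp": rng.normal(20.0, 6.0, 500),    # air temperature (C)
          "rh":   rng.uniform(30.0, 90.0, 500),  # relative humidity (%)
          "wind": rng.gamma(2.0, 1.5, 500),      # wind speed
      })

      # Separate fits for the two temperature regimes discussed above.
      hot = smf.glm("pm10 ~ no2 + temp + rh + wind", data=df[df.temp > 25],
                    family=sm.families.Poisson()).fit()
      cool = smf.glm("pm10 ~ no2 + temp + rh + wind", data=df[df.temp <= 25],
                     family=sm.families.Poisson()).fit()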

  8. Maximal freedom at minimum cost: linear large-scale structure in general modifications of gravity

    SciTech Connect

    Bellini, Emilio; Sawicki, Ignacy

    2014-07-01

    We present a turnkey solution, ready for implementation in numerical codes, for the study of linear structure formation in general scalar-tensor models involving a single universally coupled scalar field. We show that the totality of cosmological information on the gravitational sector can be compressed — without any redundancy — into five independent and arbitrary functions of time only and one constant. These describe physical properties of the universe: the observable background expansion history, fractional matter density today, and four functions of time describing the properties of the dark energy. We show that two of those dark-energy property functions control the existence of anisotropic stress, the other two — dark-energy clustering, both of which can be scale-dependent. All these properties can in principle be measured, but no information on the underlying theory of acceleration beyond this can be obtained. We present a translation between popular models of late-time acceleration (e.g. perfect fluids, f(R), kinetic gravity braiding, galileons), as well as the effective field theory framework, and our formulation. In this way, implementing this formulation numerically would give a single tool which could consistently test the majority of models of late-time acceleration heretofore proposed.

  9. Master equation solutions in the linear regime of characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Cedeño M., C. E.; de Araujo, J. C. N.

    2015-12-01

    From the field equations in the linear regime of the characteristic formulation of general relativity, Bishop, for a Schwarzschild background, and Mädler, for a Minkowski background, were able to show that it is possible to derive a fourth order ordinary differential equation, called the master equation, for the J metric variable of the Bondi-Sachs metric. Once β, another Bondi-Sachs potential, is obtained from the field equations, and J is obtained from the master equation, the other metric variables are solved by integrating directly the rest of the field equations. In the past, the master equation was solved for the first multipolar terms, for both the Minkowski and Schwarzschild backgrounds. Also, Mädler recently reported a generalisation of the exact solutions to the linearised field equations when a Minkowski background is considered, expressing the master equation family of solutions for the vacuum in terms of Bessel functions of the first and the second kind. Here, we report new solutions to the master equation for any multipolar moment l, with and without matter sources, in terms only of Bessel functions of the first kind for the Minkowski background, and in terms of confluent Heun functions (generalised hypergeometric) for the radiative (nonradiative) case in the Schwarzschild background. We particularize our families of solutions for the known cases with l = 2 reported previously in the literature and find complete agreement, showing the robustness of our results.

  10. The overlooked potential of Generalized Linear Models in astronomy, I: Binomial regression

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Cameron, E.; Killedar, M.; Hilbe, J.; Vilalta, R.; Maio, U.; Biffi, V.; Ciardi, B.; Riggs, J. D.

    2015-09-01

    Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile the complexity of scientific enquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression, the so-called Generalized Linear Models (GLMs). In this paper, the first in a series aimed at illustrating the power of these methods in astronomical applications, we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark mini-halo with metallicity ≈ 1.3 × 10^-4 Z⊙, an increase of 1.2 × 10^-2 in the gas molecular fraction increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks.

  11. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering. Process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements in the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationship among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: the first training data set comprised 25 conforming coils and the second data set 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical. In most applications, the dependent variable is binary. The results show that the logistic generalized linear models do provide good estimates of quality coils and can be useful for quality control in the manufacturing process.

  12. Sensitivity Analysis of Linear Elastic Cracked Structures Using Generalized Finite Element Method

    NASA Astrophysics Data System (ADS)

    Pal, Mahendra Kumar; Rajagopal, Amirtham

    2014-09-01

    In this work, a sensitivity analysis of linear elastic cracked structures using a two-scale Generalized Finite Element Method (GFEM) is presented. The method is based on the computation of material derivatives, mutual potential energies, and direct differentiation. In a computational setting, the discrete form of the mutual potential energy release rate is simple and easy to calculate, as it only requires the multiplication of the displacement vectors and stiffness sensitivity matrices. By judiciously choosing the velocity field, the method only requires the displacement response in a sub-domain close to the crack tip, thus making the method computationally efficient. The method therefore requires an exact computation of the displacement response in this sub-domain. To this end, in this study we have used a two-scale GFEM for the sensitivity analysis. GFEM is based on the enrichment of the classical finite element approximation. These enrichment functions incorporate the discontinuity response in the domain. Three numerical examples comprising mode-I and mixed-mode deformations are presented to evaluate the accuracy of the fracture parameters calculated by the proposed method.

  13. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
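
    The approximation is easy to state for a canonical Poisson GLM: the data term sum_i y_i x_i'theta is kept exactly, while the covariate-dependent sum_i exp(x_i'theta) is replaced by N times E[exp(x'theta)], which is closed form when the covariates are Gaussian. A hedged numpy sketch with toy data in place of spike trains:

      import numpy as np
      from scipy.optimize import minimize

      def neg_expected_loglik(theta, X, y, mu, Sigma):
          # For x ~ N(mu, Sigma), E[exp(x'theta)] is the Gaussian moment
          # generating function exp(mu'theta + theta'Sigma theta / 2).
          data_term = y @ (X @ theta)
          mgf = np.exp(mu @ theta + 0.5 * theta @ Sigma @ theta)
          return -(data_term - len(y) * mgf)

      rng = np.random.default_rng(3)
      mu, Sigma = np.zeros(3), np.eye(3)
      X = rng.multivariate_normal(mu, Sigma, size=2000)
      y = rng.poisson(np.exp(X @ np.array([0.3, -0.2, 0.1])))

      fit = minimize(neg_expected_loglik, np.zeros(3), args=(X, y, mu, Sigma))
      theta_el = fit.x  # maximum expected-log-likelihood estimate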

  14. Finding nonoverlapping substructures of a sparse matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2004-08-09

    Many applications of scientific computing rely on computations on sparse matrices, thus the design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the low ratio of computation to memory access and the irregular memory access patterns, the performance of sparse matrix kernels is often far from the peak performance of a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d}, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping rectangular dense blocks in a sparse matrix, which has not been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm for 2×2 blocks that runs in time linear in the number of nonzeros in the matrix. We discuss alternatives to rectangular blocks such as diagonal blocks and cross blocks and present complexity analysis and approximation algorithms.
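
    As a concrete, deliberately naive illustration of the task, the sketch below claims nonoverlapping 2×2 fully dense blocks in a greedy first-fit pass; the paper's 2/3-approximation algorithm is more careful than this, so treat the code only as a statement of the problem.

      import numpy as np
      from scipy.sparse import random as sparse_random

      def greedy_2x2_blocks(A):
          """First-fit collection of nonoverlapping 2x2 fully dense blocks."""
          mask = A.toarray() != 0        # a dense mask is fine for a demo
          used = np.zeros_like(mask)
          blocks = []
          for i in range(mask.shape[0] - 1):
              for j in range(mask.shape[1] - 1):
                  if mask[i:i + 2, j:j + 2].all() and not used[i:i + 2, j:j + 2].any():
                      used[i:i + 2, j:j + 2] = True
                      blocks.append((i, j))
          return blocks

      A = sparse_random(200, 200, density=0.05, format="csr", random_state=4)
      print(len(greedy_2x2_blocks(A)), "nonoverlapping 2x2 dense blocks found")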

  15. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    SciTech Connect

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-05-15

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography

  16. Sparse and redundant representations for inverse problems and recognition

    NASA Astrophysics Data System (ADS)

    Patel, Vishal M.

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in working with either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed

  17. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  18. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, G.G.; Edwards, T.C., Jr.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.

  19. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.

  20. Resistant multiple sparse canonical correlation.

    PubMed

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2016-04-01

    Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca. PMID:26963062
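
    A minimal sketch of the non-resistant core of SCCA, in the style of penalized matrix decomposition: alternate soft-thresholded, renormalized updates of the two weight vectors against the cross-covariance matrix. The resistant estimation that is the paper's contribution is omitted here; their documented R code implements the full method.

      import numpy as np

      def soft(a, lam):
          return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

      def sparse_cca(X, Y, lam=0.1, iters=100):
          """One sparse canonical pair by alternating power iterations."""
          C = (X - X.mean(0)).T @ (Y - Y.mean(0)) / (len(X) - 1)
          v = np.ones(C.shape[1]) / np.sqrt(C.shape[1])
          for _ in range(iters):
              u = soft(C @ v, lam)
              u /= np.linalg.norm(u) + 1e-12
              v = soft(C.T @ u, lam)
              v /= np.linalg.norm(v) + 1e-12
          return u, v  # sparse weights for the X and Y sides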

  1. Sparse subspace clustering: algorithm, theory, and applications.

    PubMed

    Elhamifar, Ehsan; Vidal, René

    2013-11-01

    Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering. PMID:24051734
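
    A bare-bones sketch of this pipeline, with the l1 self-expression program relaxed to a per-point Lasso and none of the paper's handling of outliers or missing entries; the regularization weight is an illustrative choice.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.cluster import SpectralClustering

      def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
          """Each row of X is expressed sparsely by the other rows; the
          coefficient magnitudes then drive spectral clustering."""
          n = len(X)
          C = np.zeros((n, n))
          for i in range(n):
              others = np.delete(np.arange(n), i)
              lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
              lasso.fit(X[others].T, X[i])  # x_i ~ sum_j c_j x_j, j != i
              C[i, others] = lasso.coef_
          W = np.abs(C) + np.abs(C).T       # symmetric affinity matrix
          return SpectralClustering(n_clusters=n_clusters,
                                    affinity="precomputed").fit_predict(W)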

  2. Sparse inpainting and isotropy

    SciTech Connect

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D.

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  3. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  4. Sparse distributed memory

    SciTech Connect

    Kanerva, P.

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system. 63 refs.

  5. Optical sparse aperture imaging.

    PubMed

    Miller, Nicholas J; Dierking, Matthew P; Duncan, Bradley D

    2007-08-10

    The resolution of a conventional diffraction-limited imaging system is proportional to its pupil diameter. A primary goal of sparse aperture imaging is to enhance resolution while minimizing the total light collection area; the latter being desirable, in part, because of the cost of large, monolithic apertures. Performance metrics are defined and used to evaluate several sparse aperture arrays constructed from multiple, identical, circular subapertures. Subaperture piston and/or tilt effects on image quality are also considered. We selected arrays with compact nonredundant autocorrelations first described by Golay. We vary both the number of subapertures and their relative spacings to arrive at an optimized array. We report the results of an experiment in which we synthesized an image from multiple subaperture pupil fields by masking a large lens with a Golay array. For this experiment we imaged a slant edge feature of an ISO12233 resolution target in order to measure the modulation transfer function. We note the contrast reduction inherent in images formed through sparse aperture arrays and demonstrate the use of a Wiener-Helstrom filter to restore contrast in our experimental images. Finally, we describe a method to synthesize images from multiple subaperture focal plane intensity images using a phase retrieval algorithm to obtain estimates of subaperture pupil fields. Experimental results from synthesizing an image of a point object from multiple subaperture images are presented, and weaknesses of the phase retrieval method for this application are discussed. PMID:17694146
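
    The contrast-restoration step can be written down directly: the Wiener-Helstrom filter divides out the aperture's transfer function while regularizing the frequencies it passes weakly. A minimal numpy version, assuming the OTF is known on the image's frequency grid and the noise-to-signal power ratio is a scalar guess:

      import numpy as np

      def wiener_helstrom(blurred, otf, nsr=0.01):
          """Restore an image degraded by a known OTF (illustrative sketch).

          The filter conj(H) / (|H|^2 + NSR) boosts the mid frequencies
          that a dilute sparse-aperture array attenuates.
          """
          G = np.fft.fft2(blurred)
          F_hat = np.conj(otf) * G / (np.abs(otf) ** 2 + nsr)
          return np.real(np.fft.ifft2(F_hat))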

  6. A unified approach to sparse signal processing

    NASA Astrophysics Data System (ADS)

    Marvasti, Farokh; Amini, Arash; Haddadi, Farzan; Soltanolkotabi, Mahdi; Khalaj, Babak Hossein; Aldroubi, Akram; Sanei, Saeid; Chambers, Jonathon

    2012-12-01

    A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally

  7. General methods for determining the linear stability of coronal magnetic fields

    NASA Technical Reports Server (NTRS)

    Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.

    1988-01-01

    A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.

  8. The elastostatic plane strain mode I crack tip stress and displacement fields in a generalized linear neo-Hookean elastomer

    NASA Astrophysics Data System (ADS)

    Begley, Matthew R.; Creton, Costantino; McMeeking, Robert M.

    2015-11-01

    A general asymptotic plane strain crack tip stress field is constructed for linear versions of neo-Hookean materials, which spans a wide variety of special cases including incompressible Mooney elastomers, the compressible Blatz-Ko elastomer, several cases of the Ogden constitutive law and a new result for a compressible linear neo-Hookean material. The nominal stress field has dominant terms that have a square root singularity with respect to the distance of material points from the crack tip in the undeformed reference configuration. At second order, there is a uniform tension parallel to the crack. The associated displacement field in plane strain at leading order has dependence proportional to the square root of the same coordinate. The relationship between the amplitude of the crack tip singularity (a stress intensity factor) and the plane strain energy release rate is outlined for the general linear material, with simplified relationships presented for notable special cases.

  9. On the dynamics of canopy resistance: Generalized linear estimation and relationships with primary micrometeorological variables

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis

    2010-08-01

    The 1-D and single layer combination-based energy balance Penman-Monteith (PM) model has limitations in practical application due to the lack of canopy resistance (rc) data for different vegetation surfaces. rc could be estimated by inversion of the PM model if the actual evapotranspiration (ETa) rate is known, but this approach has its own set of issues. Instead, an empirical method of estimating rc is suggested in this study. We investigated the relationships between primary micrometeorological parameters and rc and developed seven models to estimate rc for a nonstressed maize canopy on an hourly time step using a generalized-linear modeling approach. The most complex rc model uses net radiation (Rn), air temperature (Ta), vapor pressure deficit (VPD), relative humidity (RH), wind speed at 3 m (u3), aerodynamic resistance (ra), leaf area index (LAI), and solar zenith angle (Θ). The simplest model requires Rn, Ta, and RH. We present the practical implementation of all models via experimental validation using scaled up rc data obtained from the dynamic diffusion porometer-measured leaf stomatal resistance through an extensive field campaign in 2006. For further validation, we estimated ETa by solving the PM model using the modeled rc from all seven models and compared the PM ETa estimates with the Bowen ratio energy balance system (BREBS)-measured ETa for an independent data set in 2005. The relationships between hourly rc versus Ta, RH, VPD, Rn, incoming shortwave radiation (Rs), u3, wind direction, LAI, Θ, and ra were presented and discussed. We demonstrated the negative impact of exclusion of LAI when modeling rc, whereas exclusion of ra and Θ did not impact the performance of the rc models. Compared to the calibration results, the validation root mean square difference between observed and modeled rc increased by 5 s m-1 for all rc models developed, ranging from 9.9 s m-1 for the most complex model to 22.8 s m-1 for the simplest model, as compared with the

  10. Meta-analysis of Complex Diseases at Gene Level with Generalized Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Chiu, Chi-Yang; Chen, Wei; Ren, Haobo; Li, Yun; Boehnke, Michael; Amos, Christopher I; Moore, Jason H; Xiong, Momiao

    2016-02-01

    We developed generalized functional linear models (GFLMs) to perform a meta-analysis of multiple case-control studies to evaluate the relationship of genetic data to dichotomous traits adjusting for covariates. Unlike the previously developed meta-analysis for sequence kernel association tests (MetaSKATs), which are based on mixed-effect models to make the contributions of major gene loci random, GFLMs are fixed models; i.e., genetic effects of multiple genetic variants are fixed. Based on GFLMs, we developed chi-squared-distributed Rao's efficient score test and likelihood-ratio test (LRT) statistics to test for an association between a complex dichotomous trait and multiple genetic variants. We then performed extensive simulations to evaluate the empirical type I error rates and power performance of the proposed tests. The Rao's efficient score test statistics of GFLMs are very conservative and have higher power than MetaSKATs when some causal variants are rare and some are common. When the causal variants are all rare [i.e., minor allele frequencies (MAF) < 0.03], the Rao's efficient score test statistics have similar or slightly lower power than MetaSKATs. The LRT statistics generate accurate type I error rates for homogeneous genetic-effect models and may inflate type I error rates for heterogeneous genetic-effect models owing to the large numbers of degrees of freedom and have similar or slightly higher power than the Rao's efficient score test statistics. GFLMs were applied to analyze genetic data of 22 gene regions of type 2 diabetes data from a meta-analysis of eight European studies and detected significant association for 18 genes (P < 3.10 × 10(-6)), tentative association for 2 genes (HHEX and HMGA2; P ≈ 10(-5)), and no association for 2 genes, while MetaSKATs detected none. In addition, the traditional additive-effect model detects association at gene HHEX. GFLMs and related tests can analyze rare or common variants or a combination of the two and

  11. Applications of multivariate modeling to neuroimaging group analysis: a comprehensive alternative to univariate general linear model.

    PubMed

    Chen, Gang; Adleman, Nancy E; Saad, Ziad S; Leibenluft, Ellen; Cox, Robert W

    2014-10-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance-covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse-Geisser and Huynh-Feldt) with MVT-WS. To validate the MVM methodology, we performed simulations to assess the controllability for false positives and power achievement. A real FMRI dataset was analyzed to demonstrate the capability of the MVM approach. The methodology has been implemented into an open source program 3dMVM in AFNI, and all the statistical tests can be performed through symbolic coding with variable names instead of the tedious process of dummy coding. Our data indicates that the severity of sphericity violation varies substantially across brain regions. The differences among various modeling methodologies were addressed through direct comparisons between the MVM approach and some of the GLM implementations in

  12. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  13. Reversibility of a quantum channel: General conditions and their applications to Bosonic linear channels

    SciTech Connect

    Shirokov, M. E.

    2013-11-15

    The method of the complementary channel for analysis of reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular, to Bosonic Gaussian channels. The obtained reversibility conditions for Bosonic linear channels have a clear physical interpretation, and their sufficiency is also shown by explicit construction of reversing channels. The method of the complementary channel makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrodinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.

  14. Hybrid approximate message passing for generalized group sparsity

    NASA Astrophysics Data System (ADS)

    Fletcher, Alyson K.; Rangan, Sundeep

    2013-09-01

    We consider the problem of estimating a group sparse vector x ∈ R^n under a generalized linear measurement model. Group sparsity of x means the activity of different components of the vector occurs in groups - a feature common in estimation problems in image processing, simultaneous sparse approximation and feature selection with grouped variables. Unfortunately, many current group sparse estimation methods require that the groups are non-overlapping. This work considers problems with what we call generalized group sparsity where the activity of the different components of x are modeled as functions of a small number of boolean latent variables. We show that this model can incorporate a large class of overlapping group sparse problems including problems in sparse multivariable polynomial regression and gene expression analysis. To estimate vectors with such group sparse structures, the paper proposes to use a recently-developed hybrid generalized approximate message passing (HyGAMP) method. Approximate message passing (AMP) refers to a class of algorithms based on Gaussian and quadratic approximations of loopy belief propagation for estimation of random vectors under linear measurements. The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models of which generalized group sparsity is a special case. We show that the HyGAMP algorithm is computationally efficient, general and offers superior performance in certain synthetic data test cases.
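
    HyGAMP itself involves message-passing machinery beyond a short listing, but the group-sparse estimation problem it targets can be made concrete with a standard proximal-gradient (group lasso) baseline for the non-overlapping case; unlike HyGAMP, this sketch does not handle overlapping groups or latent-variable priors.

      import numpy as np

      def group_ista(A, y, groups, lam=0.1, iters=300):
          """Solve min_x 0.5||y - Ax||^2 + lam * sum_g ||x_g||_2 by ISTA
          with a blockwise soft-threshold; groups are index arrays."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - step * (A.T @ (A @ x - y))
              for g in groups:
                  scale = 1 - step * lam / (np.linalg.norm(z[g]) + 1e-12)
                  x[g] = max(scale, 0.0) * z[g]
          return x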

  15. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  16. Generalization of the relaxation method for the inverse solution of nonlinear and linear transfer equations

    NASA Technical Reports Server (NTRS)

    Chahine, M. T.

    1977-01-01

    A mapping transformation is derived for the inverse solution of nonlinear and linear integral equations of the types encountered in remote sounding studies. The method is applied to the solution of specific problems for the determination of the thermal and composition structure of planetary atmospheres from a knowledge of their upwelling radiance.

  17. SPARSKIT: A basic tool kit for sparse matrix computations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1990-01-01

    Presented here are the main features of a tool package for manipulating and working with sparse matrices. One of the goals of the package is to provide basic tools to facilitate the exchange of software and data between researchers in sparse matrix computations. The starting point is the Harwell/Boeing collection of matrices for which the authors provide a number of tools. Among other things, the package provides programs for converting data structures, printing simple statistics on a matrix, plotting a matrix profile, and performing linear algebra operations with sparse matrices.

  18. Sparse Image Format

    Energy Science and Technology Software Center (ESTSC)

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.

  19. Sparse Image Format

    SciTech Connect

    Eads, Damian Ryan

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.

  20. TASMANIAN Sparse Grids Module

    SciTech Connect

    Stoyanov, Miroslav; Munster, Drayton

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in a low to moderate number of dimensions. The method is to selectively extend a one-dimensional set of abscissas, weights and basis functions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.

  1. TASMANIAN Sparse Grids Module

    Energy Science and Technology Software Center (ESTSC)

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in a low to moderate number of dimensions. The method is to selectively extend a one-dimensional set of abscissas, weights and basis functions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.
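
    The subset-of-tensor-products selection is easy to illustrate. The fragment below generates the multi-index set of a Smolyak-type sparse grid; it is a toy illustration of the construction, not the TASMANIAN API.

      from itertools import product

      def sparse_grid_indices(dim, level):
          """Multi-indices with total 1-D refinement level at most `level`,
          i.e. the subset of tensor products a sparse grid keeps."""
          return [idx for idx in product(range(level + 1), repeat=dim)
                  if sum(idx) <= level]

      # In 3 dimensions at level 3: 20 index tuples versus 64 for the
      # full tensor grid.
      print(len(sparse_grid_indices(3, 3)))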

  2. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method, based on minimizing the l2-norm of the response residual, employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin-plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
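
    SpaRSA itself uses adaptive (Barzilai-Borwein) step sizes; the sketch below solves the same l1-regularized problem with plain iterative soft thresholding (ISTA), on a hypothetical random system standing in for the transfer-function/dictionary product:

      import numpy as np

      def soft(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def ista(A, y, lam, n_iter=500):
          """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft
          thresholding; SpaRSA refines this scheme with adaptive steps."""
          L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = soft(x + A.T @ (y - A @ x) / L, lam / L)
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((100, 400))  # hypothetical system matrix
      x_true = np.zeros(400)
      x_true[[30, 170]] = [2.0, -1.5]      # two "impacts" in the coefficient vector
      y = A @ x_true + 0.01 * rng.standard_normal(100)
      x_hat = ista(A, y, lam=0.5)
      print(np.flatnonzero(np.abs(x_hat) > 0.1))  # should recover indices 30 and 170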

  3. Sparse matrix methods based on orthogonality and conjugacy

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1973-01-01

    A matrix having a high percentage of zero elements is called sparse. In the solution of systems of linear equations or linear least-squares problems involving large sparse matrices, significant savings in computer cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
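
    A textbook sketch of the conjugate gradient iteration for a symmetric positive definite sparse system (written here in Python/SciPy purely as an illustration):

      import numpy as np
      from scipy import sparse

      def conjugate_gradient(A, b, tol=1e-10, max_iter=5000):
          """Textbook CG for symmetric positive definite A; A is only ever
          applied to vectors, so sparsity is exploited automatically."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      # 1-D Laplacian: a classic large, sparse, SPD test matrix.
      n = 1000
      A = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)
      x = conjugate_gradient(A, b)
      print(np.linalg.norm(A @ x - b))  # residual norm near the tolerance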

  4. Linear and Nonlinear Optical Properties in Spherical Quantum Dots: Generalized Hulthén Potential

    NASA Astrophysics Data System (ADS)

    Onyeaju, M. C.; Idiodi, J. O. A.; Ikot, A. N.; Solaimani, M.; Hassanabadi, H.

    2016-05-01

    In this work, we studied the optical properties of spherical quantum dots confined in a Hulthén potential with the appropriate centrifugal term included. The approximate bound-state solutions and wave functions were obtained from the Schrödinger wave equation by applying the factorization method. Also, we used the density matrix formalism to investigate the linear and third-order nonlinear absorption coefficients and refractive index changes.

  5. Sparse Bayesian learning for DOA estimation with mutual coupling.

    PubMed

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-01-01

    Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise. PMID:26501284

  6. Sparse Bayesian Learning for DOA Estimation with Mutual Coupling

    PubMed Central

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-01-01

    Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise. PMID:26501284

  7. Generalized linear Boltzmann equation, describing non-classical particle transport, and related asymptotic solutions for small mean free paths

    NASA Astrophysics Data System (ADS)

    Rukolaine, Sergey A.

    2016-05-01

    In classical kinetic models a particle free path distribution is exponential, but this is more likely to be an exception than a rule. In this paper we derive a generalized linear Boltzmann equation (GLBE) for a general free path distribution in the framework of Alt's model. In the case that the free path distribution has at least first and second finite moments we construct an asymptotic solution to the initial value problem for the GLBE for small mean free paths. In the special case of the one-speed transport problem the asymptotic solution results in a diffusion approximation to the GLBE.

  8. Use of a generalized linear model to evaluate range forage production estimates

    NASA Astrophysics Data System (ADS)

    Mitchell, John E.; Joyce, Linda A.

    1986-05-01

    Interdisciplinary teams have been used in federal land planning and in the private sector to reach consensus on the environmental impact of management. When a large data base is constructed, verifiability of the accuracy of the coded estimates and the underlying assumptions becomes a problem. A mechanism is provided by the use of a linear statistical model to evaluate production coefficients in terms of errors in coding and underlying assumptions. The technique can be used to evaluate other intuitive models depicting natural resource production in relation to prescribed variables, such as site factors or secondary succession.

  9. Sparse decomposition learning based dynamic MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Zhu, Peifei; Zhang, Qieshi; Kamata, Sei-ichiro

    2015-02-01

    Dynamic MRI is widely used for many clinical exams, but slow data acquisition remains a serious problem. The application of Compressed Sensing (CS) has demonstrated great potential to increase imaging speed. However, the performance of CS depends largely on the sparsity of the image sequence in the transform domain, where there is still much room for improvement. In this work, the sparsity is exploited by the proposed Sparse Decomposition Learning (SDL) algorithm, which combines a low-rank-plus-sparsity decomposition with Blind Compressed Sensing (BCS). With this decomposition, only the sparsity component is modeled as a sparse linear combination of temporal basis functions. This enables the coefficients to be sparser and to retain more details of the dynamic components than learning the whole images. Reconstruction is performed on the undersampled data, where joint multicoil data consistency is enforced by combining Parallel Imaging (PI). The experimental results show that the proposed method decreases the Mean Square Error (MSE) by about 15-20% compared with other existing methods.

  10. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method which does not require the solution of linear equations requires more time for additional function and gradient evaluations. The classic trade off of time for space separates the two methods.
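
    A hedged sketch of the far-from-root behavior described above: steepest descent on F(x) = (1/2)||f(x)||^2 with a simple adaptive (backtracking) step size. The paper's quadratic-form acceleration near the root is not reproduced here; the test system is hypothetical:

      import numpy as np

      def descent_solve(f, jac, x0, n_iter=500):
          """Steepest descent on F(x) = 0.5*||f(x)||^2 with backtracking."""
          x = np.asarray(x0, dtype=float)
          for _ in range(n_iter):
              r = f(x)
              g = jac(x).T @ r  # gradient of F
              F0 = 0.5 * (r @ r)
              step = 1.0
              while 0.5 * np.sum(f(x - step * g) ** 2) > F0 and step > 1e-12:
                  step *= 0.5   # adaptive step size
              x = x - step * g
          return x

      f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
      jac = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
      print(descent_solve(f, jac, [3.0, 0.5]))  # converges toward (sqrt(2), sqrt(2))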

  11. Compressed Sampling of Spectrally Sparse Signals Using Sparse Circulant Matrices

    NASA Astrophysics Data System (ADS)

    Xu, Guangjie; Wang, Huali; Sun, Lei; Zeng, Weijun; Wang, Qingguo

    2014-11-01

    Circulant measurement matrices, constructed by partial cyclic shifts of one generating sequence, are easier to implement in hardware than the widely used random measurement matrices; however, the diminished randomness makes them more sensitive to signal noise. Selecting a deterministic sequence with optimal periodic autocorrelation property (PACP) as the generating sequence would enhance the noise robustness of the circulant measurement matrix, but this kind of deterministic circulant matrix only exists at fixed periodic lengths. Actually, the selection of the generating sequence does not affect the compressive performance of the circulant measurement matrix, but rather the subspace energy in spectrally sparse signals. Sparse circulant matrices, whose generating sequence is a sparse sequence, can keep the energy balance of subspaces and have noise robustness similar to deterministic circulant matrices. In addition, sparse circulant matrices have no restriction on length and are more suitable for the compressed sampling of spectrally sparse signals at arbitrary dimensionality.
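
    A small sketch of the construction described, with hypothetical sizes and names: a sparse generating sequence, its circulant matrix, and a partial-row measurement operator applied to a spectrally sparse signal:

      import numpy as np
      from scipy.linalg import circulant

      rng = np.random.default_rng(1)
      n, m = 64, 256                       # number of measurements, signal length

      # Sparse generating sequence: a few +/-1 entries.
      g = np.zeros(m)
      support = rng.choice(m, size=16, replace=False)
      g[support] = rng.choice([-1.0, 1.0], size=16)

      C = circulant(g)                     # each column is a cyclic shift of g
      Phi = C[rng.choice(m, size=n, replace=False), :]  # partial row selection

      x = np.cos(2 * np.pi * 17 * np.arange(m) / m)     # spectrally sparse signal
      y = Phi @ x                          # compressed measurements
      print(Phi.shape, y.shape)            # (64, 256) (64,)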

  12. A substructure coupling procedure applicable to general linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Howsman, T. G.; Craig, R. R., Jr.

    1984-01-01

    A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the non-self-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.

  13. The Exact Solution for Linear Thermoelastic Axisymmetric Deformations of Generally Laminated Circular Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.; Schultz, Marc R.

    2012-01-01

    A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.

  14. A substructure coupling procedure applicable to general linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Howsman, T. G.; Craig, R. R., Jr.

    1984-01-01

    A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the non-self-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.

  15. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.

  16. Percolation on Sparse Networks

    NASA Astrophysics Data System (ADS)

    Karrer, Brian; Newman, M. E. J.; Zdeborová, Lenka

    2014-11-01

    We study percolation on networks, which is used as a model of the resilience of networked systems such as the Internet to attack or failure and as a simple model of the spread of disease over human contact networks. We reformulate percolation as a message passing process and demonstrate how the resulting equations can be used to calculate, among other things, the size of the percolating cluster and the average cluster size. The calculations are exact for sparse networks when the number of short loops in the network is small, but even on networks with many short loops we find them to be highly accurate when compared with direct numerical simulations. By considering the fixed points of the message passing process, we also show that the percolation threshold on a network with few loops is given by the inverse of the leading eigenvalue of the so-called nonbacktracking matrix.
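
    The closing claim can be checked numerically on small graphs. The sketch below builds the nonbacktracking (Hashimoto) matrix from an edge list and returns the inverse of its leading eigenvalue as the threshold estimate:

      import numpy as np

      def nb_threshold(edges):
          """Percolation threshold estimate 1/lambda_max of the Hashimoto
          (nonbacktracking) matrix B, indexed by directed edges, where
          B[(u,v),(v,w)] = 1 whenever w != u."""
          directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
          index = {e: k for k, e in enumerate(directed)}
          B = np.zeros((len(directed), len(directed)))
          for (u, v), k in index.items():
              for (p, w), m in index.items():
                  if p == v and w != u:
                      B[k, m] = 1.0
          lam = max(abs(np.linalg.eigvals(B)))
          return 1.0 / lam

      # Sanity check on a 10-node ring (2-regular, so lambda_max = 1):
      ring = [(i, (i + 1) % 10) for i in range(10)]
      print(nb_threshold(ring))  # -> 1.0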

  17. Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob

    2007-01-01

    For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.

  18. FIDDLE: A Computer Code for Finite Difference Development of Linear Elasticity in Generalized Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2005-01-01

    A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve lumped parameter or distributed parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations along with the appropriate boundary conditions have been cast in generalized curvilinear coordinates in the present study.

  19. Generalized linear stability of non-inertial rimming flow in a rotating horizontal cylinder.

    PubMed

    Aggarwal, Himanshu; Tiwari, Naveen

    2015-10-01

    The stability of a thin film of viscous liquid inside a horizontally rotating cylinder is studied using modal and non-modal analysis. The equation governing the film thickness is derived within the lubrication approximation and up to first order in the aspect ratio (average film thickness to radius of the cylinder). The effects of gravity, viscous stress and capillary pressure are considered in the model. Steady base profiles that are uniform in the axial direction are computed in the parameter space of interest. A linear stability analysis is performed on these base profiles to study their stability to axial perturbations. The destabilizing behavior of aspect ratio and surface tension is demonstrated, which is attributed to capillary instability. The transient growth that gives maximum amplification of any initial disturbance and the pseudospectra of the stability operator are computed. These computations reveal a weak effect of non-normality of the operator, and the results of eigenvalue analysis are recovered after a brief transient period. Results from nonlinear simulations are also presented, which confirm the validity of the modal analysis for the flow considered in this study. PMID:26496740

  20. Automatic anatomy recognition of sparse objects

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Udupa, Jayaram K.; Odhner, Dewey; Wang, Huiqian; Tong, Yubing; Torigian, Drew A.

    2015-03-01

    A general body-wide automatic anatomy recognition (AAR) methodology was proposed in our previous work based on hierarchical fuzzy models of multitudes of objects which was not tied to any specific organ system, body region, or image modality. That work revealed the challenges encountered in modeling, recognizing, and delineating sparse objects throughout the body (compared to their non-sparse counterparts) if the models are based on the object's exact geometric representations. The challenges stem mainly from the variation in sparse objects in their shape, topology, geographic layout, and relationship to other objects. That led to the idea of modeling sparse objects not from the precise geometric representations of their samples but by using a properly designed optimal super form. This paper presents the underlying improved methodology which includes 5 steps: (a) Collecting image data from a specific population group G and body region Β and delineating in these images the objects in Β to be modeled; (b) Building a super form, S-form, for each object O in Β; (c) Refining the S-form of O to construct an optimal (minimal) super form, S*-form, which constitutes the (fuzzy) model of O; (d) Recognizing objects in Β using the S*-form; (e) Defining confounding and background objects in each S*-form for each object and performing optimal delineation. Our evaluations based on 50 3D computed tomography (CT) image sets in the thorax on four sparse objects indicate that substantially improved performance (FPVF~2%, FNVF~10%, and success where the previous approach failed) can be achieved using the new approach.

  1. Stability and bifurcation analysis of oscillators with piecewise-linear characteristics - A general approach

    NASA Technical Reports Server (NTRS)

    Noah, S. T.; Kim, Y. B.

    1991-01-01

    A general approach is developed for determining the periodic solutions and their stability of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, therefore enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur. This led to the identification of parameter ranges in which chaotic response occurred.

  2. Linear response to perturbation of nonexponential renewal process: A generalized master equation approach

    NASA Astrophysics Data System (ADS)

    Sokolov, I. M.

    2006-06-01

    The work by Barbi, Bologna, and Grigolini [Phys. Rev. Lett. 95, 220601 (2005)] discusses a response to alternating external field of a non-Markovian two-state system, where the waiting time between the two attempted changes of state follows a power law. It introduced a new instrument for description of such situations based on a stochastic master equation with reset. In the present Brief Report we provide an alternative description of the situation within the framework of a generalized master equation. The results of our analytical approach are corroborated by direct numerical simulations of the system.

  3. A generalized method of converting CT image to PET linear attenuation coefficient distribution in PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Wu, Li-Wei; Wei, Le; Gao, Juan; Sun, Cui-Li; Chai, Pei; Li, Dao-Wu

    2014-02-01

    The accuracy of attenuation correction in positron emission tomography scanners depends mainly on deriving the reliable 511-keV linear attenuation coefficient distribution in the scanned objects. In the PET/CT system, the linear attenuation distribution is usually obtained from the intensities of the CT image. However, the intensities of the CT image relate to the attenuation of photons in an energy range of 40 keV-140 keV. Before implementing PET attenuation correction, the intensities of CT images must be transformed into the PET 511-keV linear attenuation coefficients. However, the CT scan parameters can affect the effective energy of CT X-ray photons and thus affect the intensities of the CT image. Therefore, for PET/CT attenuation correction, it is crucial to determine the conversion curve with a given set of CT scan parameters and convert the CT image into a PET linear attenuation coefficient distribution. A generalized method is proposed for converting a CT image into a PET linear attenuation coefficient distribution. Instead of some parameter-dependent phantom calibration experiments, the conversion curve is calculated directly by employing the consistency conditions to yield the most consistent attenuation map with the measured PET data. The method is evaluated with phantom experiments and small animal experiments. In phantom studies, the estimated conversion curve fits the true attenuation coefficients accurately, and accurate PET attenuation maps are obtained by the estimated conversion curves and provide nearly the same correction results as the true attenuation map. In small animal studies, a more complicated attenuation distribution of the mouse is obtained successfully to remove the attenuation artifact and improve the PET image contrast efficiently.

  4. Linear models of coregionalization for multivariate lattice data: a general framework for coregionalized multivariate CAR models.

    PubMed

    MacNab, Ying C

    2016-09-20

    We present a general coregionalization framework for developing coregionalized multivariate Gaussian conditional autoregressive (cMCAR) models for Bayesian analysis of multivariate lattice data in general and multivariate disease mapping data in particular. This framework is inclusive of cMCARs that facilitate flexible modelling of spatially structured symmetric or asymmetric cross-variable local interactions, allowing a wide range of separable or non-separable covariance structures, and symmetric or asymmetric cross-covariances, to be modelled. We present a brief overview of established univariate Gaussian conditional autoregressive (CAR) models for univariate lattice data and develop coregionalized multivariate extensions. Classes of cMCARs are presented by formulating precision structures. The resulting conditional properties of the multivariate spatial models are established, which cast new light on cMCARs with richly structured covariances and cross-covariances of different spatial ranges. The related methods are illustrated via an in-depth Bayesian analysis of a Minnesota county-level cancer data set. We also bring a new dimension to the traditional enterprise of Bayesian disease mapping: estimating and mapping covariances and cross-covariances of the underlying disease risks. Maps of covariances and cross-covariances bring to light spatial characterizations of the cMCARs and inform on spatial risk associations between areas and diseases. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27091685

  5. Automatic target recognition via sparse representations

    NASA Astrophysics Data System (ADS)

    Estabridis, Katia

    2010-04-01

    Automatic target recognition (ATR) based on the emerging technology of Compressed Sensing (CS) can considerably improve the accuracy, speed and cost associated with these types of systems. An image-based ATR algorithm has been built upon this new theory, which can perform target detection and recognition in a low-dimensional space. Compressed dictionaries (A) are formed to include rotational information for a scale of interest. The algorithm seeks to identify y (the test sample) as a linear combination of the dictionary elements: y = Ax, where A ∈ R^(n×m) (n < m) and x is a sparse vector whose non-zero entries identify the input y. The signal x will be sparse with respect to the dictionary A as long as y is a valid target. The algorithm can reject clutter and background, which are part of the input image. The detection and recognition problems are solved by finding the sparse solution to the underdetermined system y = Ax via Orthogonal Matching Pursuit (OMP) and l1 minimization techniques. Visible and MWIR imagery collected by the Army Night Vision and Electronic Sensors Directorate (NVESD) was utilized to test the algorithm. Results show average detection and recognition rates above 95% for targets at ranges up to 3 km for both image modalities.
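
    A minimal sketch of the OMP route to the sparse solution of y = Ax; the dimensions and random dictionary below are hypothetical stand-ins for the compressed target dictionaries:

      import numpy as np

      def omp(A, y, k):
          """Orthogonal Matching Pursuit: greedily pick the atom most correlated
          with the residual, then re-fit by least squares on the support."""
          residual, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(2)
      A = rng.standard_normal((64, 256))
      A /= np.linalg.norm(A, axis=0)        # unit-norm atoms
      x_true = np.zeros(256)
      x_true[[5, 99, 180]] = [1.0, -0.7, 0.4]
      y = A @ x_true
      print(np.flatnonzero(omp(A, y, k=3)))  # expected support: [5, 99, 180]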

  6. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods, yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
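
    A minimal sketch of the encoding step described (projection onto learned basis vectors followed by a soft-threshold activation); the basis D below is random, standing in for one learned from unlabeled low-level features, and the threshold value is hypothetical:

      import numpy as np

      def soft_threshold_features(X, D, alpha=0.25):
          """Project features X (n x d) onto basis vectors D (k x d), then
          apply the soft threshold max(0, |.| - alpha) as a cheap stand-in
          for full sparse-coding optimization."""
          Z = X @ D.T
          return np.maximum(0.0, np.abs(Z) - alpha)

      rng = np.random.default_rng(3)
      D = rng.standard_normal((128, 32))
      D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm basis vectors
      X = rng.standard_normal((1000, 32))            # low-level image features
      F = soft_threshold_features(X, D)
      print(F.shape, float((F == 0).mean()))         # sparse feature matrix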

  7. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]

  8. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  9. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-05-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
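
    The estimator above is specific to the paper; as a generic illustration of how exploiting sparsity improves on the sample precision matrix, here is a sketch using scikit-learn's graphical lasso (a different, l1-penalized estimator, chosen here only because it is readily available):

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(4)
      n = 25
      # A sparse (tridiagonal) true precision matrix and samples from N(0, P^-1).
      P = (np.eye(n) + np.diag(np.full(n - 1, 0.45), 1)
                     + np.diag(np.full(n - 1, 0.45), -1))
      X = rng.multivariate_normal(np.zeros(n), np.linalg.inv(P), size=200)

      sample_prec = np.linalg.inv(np.cov(X, rowvar=False))  # dense and noisy
      gl = GraphicalLasso(alpha=0.1).fit(X)                 # l1-penalized estimate

      rel_err = lambda Q: np.linalg.norm(Q - P) / np.linalg.norm(P)
      print("sample:", rel_err(sample_prec), "graphical lasso:", rel_err(gl.precision_))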

  11. Parallel preconditioning techniques for sparse CG solvers

    SciTech Connect

    Basermann, A.; Reichel, B.; Schelthoff, C.

    1996-12-31

    Conjugate gradient (CG) methods for solving sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and poor conditioning of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques for the CG method. In particular, for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration counts of simple diagonally scaled CG and are shown to be well suited for massively parallel machines.
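
    A serial sketch of the effect of preconditioning on CG iteration counts, using SciPy's incomplete LU as a stand-in for the incomplete Cholesky factorizations investigated in the paper (for an SPD matrix the two are closely related); this is an illustration, not the paper's parallel implementation:

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import cg, spilu, LinearOperator

      # 2-D Laplacian: SPD and sparse.
      n = 50
      I = sparse.identity(n)
      T = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = (sparse.kron(I, T) + sparse.kron(T, I)).tocsc()
      b = np.ones(n * n)

      ilu = spilu(A, drop_tol=1e-3)           # ILU stand-in for incomplete Cholesky
      M = LinearOperator(A.shape, ilu.solve)

      iters = {"plain": 0, "ilu": 0}
      x0, _ = cg(A, b, callback=lambda xk: iters.__setitem__("plain", iters["plain"] + 1))
      x1, _ = cg(A, b, M=M, callback=lambda xk: iters.__setitem__("ilu", iters["ilu"] + 1))
      print(iters)                            # preconditioning cuts the iteration count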

  12. Completeness for sparse potential scattering

    SciTech Connect

    Shen, Zhongwei

    2014-01-15

    The present paper is devoted to the scattering theory of a class of continuum Schrödinger operators with deterministic sparse potentials. We first establish the limiting absorption principle for both modified free resolvents and modified perturbed resolvents. This actually is a weak form of the classical limiting absorption principle. We then prove the existence and completeness of local wave operators, which, in particular, imply the existence of wave operators. Under additional assumptions on the sparse potential, we prove the completeness of wave operators. In the context of continuum Schrödinger operators with sparse potentials, this paper gives the first proof of the completeness of wave operators.

  13. Point particle binary system with components of different masses in the linear regime of the characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Cedeño M, C. E.; de Araujo, J. C. N.

    2016-05-01

    A study of binary systems composed of two point particles with different masses in the linear regime of the characteristic formulation of general relativity with a Minkowski background is provided. The present paper generalizes a previous study by Bishop et al. The boundary conditions at the world tubes generated by the particles' orbits are explored, where the metric variables are decomposed in spin-weighted spherical harmonics. The power lost by the emission of gravitational waves is computed using the Bondi News function. The power found is the well-known result obtained by Peters and Mathews using a different approach. This agreement validates the approach considered here. Several multipole term contributions to the gravitational radiation field are also shown.

  14. Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum

    PubMed Central

    Wilson, Emma D.; Assaf, Tareq; Pearson, Martin J.; Rossiter, Jonathan M.; Dean, Paul; Anderson, Sean R.; Porrill, John

    2015-01-01

    The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks. PMID:26257638

  15. A generalized linear mixed model for longitudinal binary data with a marginal logit link function

    PubMed Central

    Parzen, Michael; Ghosh, Souparno; Lipsitz, Stuart; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Mallick, Bani K.; Ibrahim, Joseph G.

    2010-01-01

    Longitudinal studies of a binary outcome are common in the health, social, and behavioral sciences. In general, a feature of random effects logistic regression models for longitudinal binary data is that the marginal functional form, when integrated over the distribution of the random effects, is no longer of logistic form. Recently, Wang and Louis (2003) proposed a random intercept model in the clustered binary data setting where the marginal model has a logistic form. An acknowledged limitation of their model is that it allows only a single random effect that varies from cluster to cluster. In this paper, we propose a modification of their model to handle longitudinal data, allowing separate, but correlated, random intercepts at each measurement occasion. The proposed model allows for a flexible correlation structure among the random intercepts, where the correlations can be interpreted in terms of Kendall’s τ. For example, the marginal correlations among the repeated binary outcomes can decline with increasing time separation, while the model retains the property of having matching conditional and marginal logit link functions. Finally, the proposed method is used to analyze data from a longitudinal study designed to monitor cardiac abnormalities in children born to HIV-infected women. PMID:21532998

  16. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    SciTech Connect

    Fowler, Michael James

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  17. Generalized Linear Mixed Models for Binary Data: Are Matching Results from Penalized Quasi-Likelihood and Numerical Integration Less Biased?

    PubMed Central

    Benedetti, Andrea; Platt, Robert; Atherton, Juli

    2014-01-01

    Background: Over time, adaptive Gaussian Hermite quadrature (QUAD) has become the preferred method for estimating generalized linear mixed models with binary outcomes. However, penalized quasi-likelihood (PQL) is still used frequently. In this work, we systematically evaluated whether matching results from PQL and QUAD indicate less bias in estimated regression coefficients and variance parameters via simulation. Methods: We performed a simulation study in which we varied the size of the data set, probability of the outcome, variance of the random effect, number of clusters and number of subjects per cluster, etc. We estimated bias in the regression coefficients, odds ratios and variance parameters as estimated via PQL and QUAD. We ascertained if similarity of estimated regression coefficients, odds ratios and variance parameters predicted less bias. Results: Overall, we found that the absolute percent bias of the odds ratio estimated via PQL or QUAD increased as the PQL- and QUAD-estimated odds ratios became more discrepant, though results varied markedly depending on the characteristics of the dataset. Conclusions: Given how markedly results varied depending on data set characteristics, specifying a rule above which indicated biased results proved impossible. This work suggests that comparing results from generalized linear mixed models estimated via PQL and QUAD is a worthwhile exercise for regression coefficients and variance components obtained via QUAD, in situations where PQL is known to give reasonable results. PMID:24416249

  18. Application of a generalized linear mixed model to analyze mixture toxicity: survival of brown trout affected by copper and zinc.

    PubMed

    Iwasaki, Yuichi; Brinkman, Stephen F

    2015-04-01

    Increased concerns about the toxicity of chemical mixtures have led to greater emphasis on analyzing the interactions among the mixture components based on observed effects. The authors applied a generalized linear mixed model (GLMM) to analyze survival of brown trout (Salmo trutta) acutely exposed to metal mixtures that contained copper and zinc. Compared with dominant conventional approaches based on an assumption of concentration addition and the concentration of a chemical that causes x% effect (ECx), the GLMM approach has 2 major advantages. First, binary response variables such as survival can be modeled without any transformations, and thus sample size can be taken into consideration. Second, the importance of the chemical interaction can be tested in a simple statistical manner. Through this application, the authors investigated whether the estimated concentration of the 2 metals binding to humic acid, which is assumed to be a proxy of nonspecific biotic ligand sites, provided a better prediction of survival effects than dissolved and free-ion concentrations of metals. The results suggest that the estimated concentration of metals binding to humic acid is a better predictor of survival effects, and thus the metal competition at the ligands could be an important mechanism responsible for effects of metal mixtures. Application of the GLMM (and the generalized linear model) presents an alternative or complementary approach to analyzing mixture toxicity. PMID:25524054

  19. Analyzing Sparse Dictionaries for Online Learning With Kernels

    NASA Astrophysics Data System (ADS)

    Honeine, Paul

    2015-12-01

    Many signal processing and machine learning methods share essentially the same linear-in-the-parameters model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and construct relevant ones, the most prominent being the distance, approximation, coherence and Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including the linear independence condition and inducing a well-posed optimization problem. Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.
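
    For instance, the coherence of a dictionary (the largest absolute inner product between distinct unit-norm atoms), one of the measures named above, can be computed directly; a minimal sketch:

      import numpy as np

      def coherence(D):
          """Largest absolute inner product between distinct unit-norm atoms
          (columns of D)."""
          D = D / np.linalg.norm(D, axis=0)
          G = np.abs(D.T @ D)
          np.fill_diagonal(G, 0.0)
          return G.max()

      rng = np.random.default_rng(5)
      D = rng.standard_normal((64, 128))
      print(coherence(D))  # values near 0 indicate nearly orthogonal atoms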

  20. A sparse embedding and least variance encoding approach to hashing.

    PubMed

    Zhu, Xiaofeng; Zhang, Lei; Huang, Zi

    2014-09-01

    Hashing is becoming increasingly important in large-scale image retrieval for fast approximate similarity search and efficient data storage. Many popular hashing methods aim to preserve the kNN graph of high dimensional data points in the low dimensional manifold space, which is, however, difficult to achieve when the number of samples is big. In this paper, we propose an effective and efficient hashing approach by sparsely embedding a sample in the training sample space and encoding the sparse embedding vector over a learned dictionary. To this end, we partition the sample space into clusters via a linear spectral clustering method, and then represent each sample as a sparse vector of normalized probabilities that it falls into its several closest clusters. This actually embeds each sample sparsely in the sample space. The sparse embedding vector is employed as the feature of each sample for hashing. We then propose a least variance encoding model, which learns a dictionary to encode the sparse embedding feature, and consequently binarize the coding coefficients as the hash codes. The dictionary and the binarization threshold are jointly optimized in our model. Experimental results on benchmark data sets demonstrated the effectiveness of the proposed approach in comparison with state-of-the-art methods. PMID:24968174
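
    A sketch of the sparse-embedding stage described above, with ordinary k-means standing in for the paper's linear spectral clustering and a hypothetical Gaussian-style similarity weighting; the least variance encoding and binarization stages are omitted:

      import numpy as np
      from sklearn.cluster import KMeans

      def sparse_embedding(X, centers, s=3):
          """Embed each sample as normalized weights over its s closest
          clusters (all other entries zero)."""
          d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
          Z = np.zeros_like(d)
          nearest = np.argsort(d, axis=1)[:, :s]
          for i, idx in enumerate(nearest):
              w = np.exp(-(d[i, idx] - d[i, idx].min()))  # hypothetical weighting
              Z[i, idx] = w / w.sum()
          return Z

      rng = np.random.default_rng(6)
      X = rng.standard_normal((500, 16))
      centers = KMeans(n_clusters=32, n_init=10, random_state=0).fit(X).cluster_centers_
      Z = sparse_embedding(X, centers)
      print(Z.shape, int((Z > 0).sum(axis=1).max()))  # at most s nonzeros per row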

  1. Simplified Linear Equation Solvers users manual

    SciTech Connect

    Gropp, W. ); Smith, B. )

    1993-02-01

    The solution of large sparse systems of linear equations is at the heart of many algorithms in scientific computing. The SLES package is a set of easy-to-use yet powerful and extensible routines for solving large sparse linear systems. The design of the package allows new techniques to be used in existing applications without any source code changes in the applications.

  2. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, which represents color channels separately or concatenates color channels as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in color image analysis and processing domain. PMID:25643407

  3. A Bayesian semiparametric model for bivariate sparse longitudinal data.

    PubMed

    Das, Kiranmoy; Li, Runze; Sengupta, Subhajit; Wu, Rongling

    2013-09-30

    Mixed-effects models have recently become popular for analyzing sparse longitudinal data that arise naturally in biological, agricultural and biomedical studies. Traditional approaches assume independent residuals over time and explain the longitudinal dependence by random effects. However, when bivariate or multivariate traits are measured longitudinally, this fundamental assumption is likely to be violated because of intertrait dependence over time. We provide a more general framework where the dependence of the observations from the same subject over time is not assumed to be explained completely by the random effects of the model. We propose a novel, mixed model-based approach and estimate the error-covariance structure nonparametrically under a generalized linear model framework. We use penalized splines to model the general effect of time, and we consider a Dirichlet process mixture of normal prior for the random-effects distribution. We analyze blood pressure data from the Framingham Heart Study where body mass index, gender and time are treated as covariates. We compare our method with traditional methods including parametric modeling of the random effects and independent residual errors over time. We conduct extensive simulation studies to investigate the practical usefulness of the proposed method. The current approach is very helpful in analyzing bivariate irregular longitudinal traits. PMID:23553747

  4. Dictionary Learning Algorithms for Sparse Representation

    PubMed Central

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811
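
    The paper's algorithms are FOCUSS-based; as a loosely analogous, readily available illustration of the same alternation between sparse coding and dictionary updates, here is a sketch using scikit-learn's dictionary learning on synthetic data (all sizes hypothetical):

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(7)
      # Synthetic data: sparse combinations of a hidden 2x-overcomplete dictionary.
      D_true = rng.standard_normal((64, 32))
      codes = rng.standard_normal((500, 64)) * (rng.random((500, 64)) < 0.05)
      X = codes @ D_true

      learner = MiniBatchDictionaryLearning(n_components=64, alpha=0.5, random_state=0)
      code = learner.fit_transform(X)       # sparse representations of the data
      print(learner.components_.shape)      # learned overcomplete dictionary (64, 32)
      print(float((code != 0).mean()))      # fraction of nonzero coefficients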

  5. A re-formulation of generalized linear mixed models to fit family data in genetic association studies

    PubMed Central

    Wang, Tao; He, Peng; Ahn, Kwang Woo; Wang, Xujing; Ghosh, Soumitra; Laud, Purushottam

    2015-01-01

    The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to be specified using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently via “proc nlmixed” and “proc glimmix” in SAS, or OpenBUGS via R package BRugs. Performances of these procedures in fitting the re-formulated GLMM are examined through simulation studies. We also apply this re-formulated GLMM to analyze a real data set from Type 1 Diabetes Genetics Consortium (T1DGC). PMID:25873936
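
    The key identity behind such re-formulations can be sketched directly: if the family relatedness matrix K has Cholesky factor L (K = L Lᵀ), then correlated random effects b = L u with iid u reproduce cov(b) = K, so software that only supports iid random effects can fit the model. A toy numerical check (the kinship values below are a hypothetical nuclear-family example, not the T1DGC data):

      import numpy as np

      rng = np.random.default_rng(8)
      # Relatedness for two unrelated parents and two full siblings.
      K = np.array([[1.0, 0.0, 0.5, 0.5],
                    [0.0, 1.0, 0.5, 0.5],
                    [0.5, 0.5, 1.0, 0.5],
                    [0.5, 0.5, 0.5, 1.0]])
      L = np.linalg.cholesky(K)      # K = L @ L.T

      # b = L u with iid u ~ N(0, 1) has cov(b) = K, which is the
      # re-formulation that standard mixed-model software can fit.
      U = rng.standard_normal((4, 200000))
      B = L @ U
      print(np.round(np.cov(B), 2))  # empirical covariance approximates K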

  6. A classification-and-reconstruction approach for a single image super-resolution by a sparse representation

    NASA Astrophysics Data System (ADS)

    Fan, YingYing; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    Sparse representation is known as a very powerful tool for solving image reconstruction problems such as denoising and single-image super-resolution. In sparse representation, it is assumed that an image patch or data vector can be approximated by a linear combination of a few bases selected from a given dictionary. A single overcomplete dictionary is usually learned from training patches. Most dictionary learning methods are concerned with building one general overcomplete dictionary, on the assumption that its bases can represent everything. However, a more appropriate dictionary yields a better sparse representation of a given patch. In this paper, we propose a classification-and-reconstruction approach with multiple dictionaries. Before learning the reconstruction dictionaries, representative bases are used to classify all training patches from the database, and a reconstruction dictionary is then learned from each class of patches. In the reconstruction phase, each patch of the input image is classified and the matching dictionary is selected for its reconstruction. We demonstrate that the proposed classification-and-reconstruction approach outperforms existing sparse-representation methods that use a single dictionary.
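
    A minimal sketch of this two-stage idea, under simplifying assumptions: k-means stands in for the classification over representative bases, and a per-class SVD (PCA) basis with least-squares projection stands in for the learned dictionaries and sparse coding; the names train and reconstruct are hypothetical.

      import numpy as np
      from sklearn.cluster import KMeans

      def train(patches, n_classes=4, n_atoms=16):
          # patches: (n_samples, patch_dim) training matrix
          km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(patches)
          models = []
          for c in range(n_classes):
              P = patches[km.labels_ == c]
              mu = P.mean(0)
              # SVD basis of each class as a stand-in for a learned dictionary
              _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
              models.append((mu, Vt[:n_atoms].T))          # (patch_dim, n_atoms)
          return km, models

      def reconstruct(patch, km, models):
          c = km.predict(patch[None, :])[0]                # classify the patch first
          mu, D = models[c]                                # then pick its class dictionary
          return mu + D @ (D.T @ (patch - mu))             # projection onto the class basis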

  7. An Efficient Sparse Approach for Core Flow Problems

    NASA Astrophysics Data System (ADS)

    Marti, P.; Calkins, M. A.; Aurnou, J. M.; Julien, K. A.

    2013-12-01

    Traditionally, fully spectral simulations for core flows based on Chebyshev series, Fourier series and spherical harmonics do not require the solution of very large linear systems of equations to advance in time. The explicit treatment of the Coriolis term generally leads to a large number of decoupled equations of moderate size. It is possible in this context to work with dense matrices and dense solvers. On the other hand, an implicit treatment of the Coriolis term or certain sets of asymptotically reduced equations cannot be treated in this way. The time marching of these equations requires solving a few very large linear systems. Dense matrices and dense solvers become prohibitively expensive due to a very high memory footprint as well as a very slow execution time. We present a numerical approach converting these dense systems into equivalent sparse systems that can be solved efficiently. We demonstrate our approach on a set of rapidly rotating flow problems in Cartesian, cylindrical and spherical geometries and compare it to a standard approach.

  8. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collection. Nonetheless, a drawback is that rapid decay of the inverse entries is required for a sparse approximate inverse to be possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  9. Content addressable systolic array for sparse matrix computation

    SciTech Connect

    Wing, O.

    1983-01-01

    A systolic array is proposed which is specifically designed to solve a system of sparse linear equations. The array consists of a number of processing elements connected in a ring. Each processing element has its own content addressable memory where the nonzero elements of the sparse matrix are stored. Matrix elements to which elementary operations are applied are extracted from the memory by content addressing. The system of equations is solved in a systolic fashion and the solution is obtained in nz+5n-2 steps where nz is the number of nonzero elements along and below the diagonal and n is the number of equations. 13 references.

  10. Accounting for Uncertainty in Confounder and Effect Modifier Selection when Estimating Average Causal Effects in Generalized Linear Models

    PubMed Central

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-01-01

    Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012) and Lefebvre et al. (2014), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to non-collapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100 to 150 observations and 50 covariates. The method is applied to data on 15060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within thirty days of diagnosis. PMID:25899155
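
    As a rough illustration of the non-collapsibility point and the Bayesian bootstrap step, the sketch below fits a logistic outcome model, predicts each subject's outcome under treatment and under control, and averages the difference under Dirichlet(1,...,1) weights. It is a simplification: the model coefficients are held fixed rather than redrawn, and there is no confounder or interaction selection; bayes_boot_ate is a hypothetical name.

      import numpy as np
      import statsmodels.api as sm

      def bayes_boot_ate(y, X, ndraws=500, rng=np.random.default_rng(0)):
          # X: binary treatment in column 0 plus confounders/interactions.
          n = len(y)
          Xd = sm.add_constant(X)                  # intercept + treatment + confounders
          fit = sm.GLM(y, Xd, family=sm.families.Binomial()).fit()
          X1, X0 = Xd.copy(), Xd.copy()
          X1[:, 1], X0[:, 1] = 1.0, 0.0            # treatment sits in column 1
          p1 = fit.predict(X1)                     # predicted risk under treatment
          p0 = fit.predict(X0)                     # predicted risk under control
          # Dirichlet weights integrate over the empirical confounder
          # distribution; the averaged risk difference is not the treatment
          # coefficient itself because the logistic model is non-collapsible.
          w = rng.dirichlet(np.ones(n), size=ndraws)
          return w @ (p1 - p0)                     # draws of the average causal effect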

  11. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Mean (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were then processed by standard and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as the volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
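
    For reference, the standard FCM updates that underlie this pre-segmentation are a few lines of numpy: alternately recompute fuzzy-weighted centroids and inverse-distance memberships. Here each row of X would be a voxel time-activity curve; fcm is a hypothetical name, and the modified, neighborhood-aware variant would add a spatial term to the distances.

      import numpy as np

      def fcm(X, c=3, m=2.0, iters=100, rng=np.random.default_rng(0)):
          n = len(X)
          U = rng.dirichlet(np.ones(c), size=n)        # (n, c) fuzzy memberships
          for _ in range(iters):
              Um = U ** m
              V = (Um.T @ X) / Um.sum(0)[:, None]      # fuzzy-weighted centroids
              d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
              U = 1.0 / d ** (2 / (m - 1))             # u_ij proportional to d_ij^(-2/(m-1))
              U /= U.sum(1, keepdims=True)             # normalize memberships per voxel
          return V, U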

  12. Jamming and percolation in generalized models of random sequential adsorption of linear k -mers on a square lattice

    NASA Astrophysics Data System (ADS)

    Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Dubinin, Dmitri O.; Laptev, Valeri V.; Vygornitskii, Nikolai V.

    2015-12-01

    The jamming and percolation for two generalized models of random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent sites) on a square lattice are studied by means of Monte Carlo simulation. The classical RSA model assumes the absence of overlapping of the new incoming particle with the previously deposited ones. The first model is a generalized variant of the RSA model for both k-mers and a lattice with defects. Some of the occupying k adjacent sites are considered as insulating and some of the lattice sites are occupied by defects (impurities). For this model even a small concentration of defects can inhibit percolation for relatively long k-mers. The second model is the cooperative sequential adsorption one where, for each new k-mer, only a restricted number of lateral contacts z with previously deposited k-mers is allowed. Deposition occurs when z ≤ (1 - d)z_m, where z_m = 2(k + 1) is the maximum number of contacts of a k-mer and d is the fraction of forbidden contacts. Percolation is observed only in some interval k_min ≤ k ≤ k_max, where the values k_min and k_max depend upon the fraction of forbidden contacts d. The value k_max decreases as d increases. A logarithmic dependence of the type log10(k_max) = a + bd, where a = 4.04 ± 0.22 and b = -4.93 ± 0.57, is obtained.
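
    The classical RSA reference model in this study is straightforward to simulate. A minimal sketch follows, with open boundaries rather than the periodic ones typically used, and without the defect or restricted-contact rules, which would add extra acceptance checks at the deposition step:

      import numpy as np

      def rsa_kmers(L=64, k=8, attempts=200_000, rng=np.random.default_rng(0)):
          # Deposit non-overlapping horizontal/vertical k-mers at random
          # until the attempt budget is exhausted; return the occupied fraction.
          lattice = np.zeros((L, L), dtype=bool)
          for _ in range(attempts):
              x, y = rng.integers(L), rng.integers(L)
              if rng.random() < 0.5:                               # horizontal
                  if x + k <= L and not lattice[y, x:x + k].any():
                      lattice[y, x:x + k] = True
              else:                                                # vertical
                  if y + k <= L and not lattice[y:y + k, x].any():
                      lattice[y:y + k, x] = True
          return lattice.mean()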

  13. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods. PMID:24231870

  14. SparseMaps--A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory.

    PubMed

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison

  15. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison

  16. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-01-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in spring time. It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present

  17. Jamming and percolation in generalized models of random sequential adsorption of linear k-mers on a square lattice.

    PubMed

    Lebovka, Nikolai I; Tarasevich, Yuri Yu; Dubinin, Dmitri O; Laptev, Valeri V; Vygornitskii, Nikolai V

    2015-12-01

    The jamming and percolation for two generalized models of random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent sites) on a square lattice are studied by means of Monte Carlo simulation. The classical RSA model assumes the absence of overlapping of the new incoming particle with the previously deposited ones. The first model is a generalized variant of the RSA model for both k-mers and a lattice with defects. Some of the occupying k adjacent sites are considered as insulating and some of the lattice sites are occupied by defects (impurities). For this model even a small concentration of defects can inhibit percolation for relatively long k-mers. The second model is the cooperative sequential adsorption one where, for each new k-mer, only a restricted number of lateral contacts z with previously deposited k-mers is allowed. Deposition occurs when z≤(1-d)z(m), where z(m)=2(k+1) is the maximum number of contacts of a k-mer and d is the fraction of forbidden contacts. Percolation is observed only in some interval k(min)≤k≤k(max), where the values k(min) and k(max) depend upon the fraction of forbidden contacts d. The value k(max) decreases as d increases. A logarithmic dependence of the type log(10)(k(max))=a+bd, where a=4.04±0.22, b=-4.93±0.57, is obtained. PMID:26764641

  18. The overlooked potential of generalized linear models in astronomy - III. Bayesian negative binomial regression and globular cluster populations

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.

    2015-10-01

    In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (NGC) has long been a puzzle in the astronomical literature. It falls in the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between NGC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter, errors in measurements in both axes (either discrete or continuous) and allows modelling the population of GCs on their natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for expected NGC comfortably envelope the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information about each galaxy's morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different productions of GCs, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types with similar brightness.
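
    A toy version of the core modelling choice (counts with overdispersion, log link) can be written with statsmodels; this is a frequentist GLM stand-in, not the paper's Bayesian errors-in-variables model, and the simulated data and coefficient values below are invented for illustration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      log_mass = rng.normal(10.5, 0.5, 200)                 # hypothetical bulge masses
      mu = np.exp(-20.0 + 2.0 * log_mass)                   # expected GC counts
      n_gc = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # overdispersed counts

      X = sm.add_constant(log_mass)
      fit = sm.GLM(n_gc, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
      print(fit.params)            # intercept and slope on the log-count scale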

  19. Depth-compensated diffuse optical tomography enhanced by general linear model analysis and an anatomical atlas of human head.

    PubMed

    Tian, Fenghua; Liu, Hanli

    2014-01-15

    One of the main challenges in functional diffuse optical tomography (DOT) is to accurately recover the depth of brain activation, which is even more essential when differentiating true brain signals from task-evoked artifacts in the scalp. Recently, we developed a depth-compensated algorithm (DCA) to minimize the depth localization error in DOT. However, the semi-infinite model that was used in DCA deviated significantly from the realistic human head anatomy. In the present work, we incorporated depth-compensated DOT (DC-DOT) with a standard anatomical atlas of human head. Computer simulations and human measurements of sensorimotor activation were conducted to examine and prove the depth specificity and quantification accuracy of brain atlas-based DC-DOT. In addition, node-wise statistical analysis based on the general linear model (GLM) was also implemented and performed in this study, showing the robustness of DC-DOT that can accurately identify brain activation at the correct depth for functional brain imaging, even when co-existing with superficial artifacts. PMID:23859922

  20. Misconceptions in the use of the General Linear Model applied to functional MRI: a tutorial for junior neuro-imagers

    PubMed Central

    Pernet, Cyril R.

    2014-01-01

    This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (1) model parameterization (modeling baseline or null events) and scaling of the design matrix; (2) hemodynamic modeling using basis functions; and (3) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why “baseline” should not be modeled (model over-parameterization), and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact upon statistical results. Next, using a simple periodic vs. random event-related design, I show how the hemodynamic model (hemodynamic function only or with derivatives) can affect parameter estimates, and detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analyses and give some recommendations. PMID:24478622
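
    The over-parameterization point can be demonstrated in a few lines (in numpy rather than the tutorial's Matlab): after convolution with the HRF, a task regressor plus a "baseline" regressor sum to a near-constant and duplicate the intercept. The two-gamma HRF below is a common approximation, not the tutorial's exact code, and the design parameters are arbitrary.

      import numpy as np
      from scipy.stats import gamma

      TR, nvol = 2.0, 120
      t = np.arange(0, 30, TR)
      hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6          # two-gamma canonical HRF
      task = (np.arange(nvol) % 20 < 10).astype(float)      # alternating on/off blocks
      X_task = np.convolve(task, hrf)[:nvol]
      X_base = np.convolve(1.0 - task, hrf)[:nvol]

      # Modeling "baseline" as its own regressor over-parameterizes the design:
      X_bad = np.column_stack([X_task, X_base, np.ones(nvol)])
      X_ok = np.column_stack([X_task, np.ones(nvol)])
      print(np.linalg.cond(X_bad), np.linalg.cond(X_ok))    # near-singular vs. modest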

  1. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  2. Towards obtaining spatiotemporally precise responses to continuous sensory stimuli in humans: a general linear modeling approach to EEG.

    PubMed

    Gonçalves, Nuno R; Whelan, Robert; Foxe, John J; Lalor, Edmund C

    2014-08-15

    Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach is and has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans. PMID:24736185
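
    The regression at the heart of this approach is ordinary (here ridge-regularized) least squares of the recorded signal on lagged copies of the continuous stimulus modulation; applying it per channel yields the spatiotemporal response. A minimal sketch with hypothetical names:

      import numpy as np

      def estimate_trf(stim, eeg, n_lags=50, lam=1e2):
          # stim: (T,) contrast/coherence modulation; eeg: (T,) one channel.
          T = len(stim)
          X = np.zeros((T, n_lags))
          for lag in range(n_lags):                # lagged copies of the stimulus
              X[lag:, lag] = stim[:T - lag]
          w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
          return w                                 # impulse response at lags 0..n_lags-1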

  3. Multisite multivariate modeling of daily precipitation and temperature in the Canadian Prairie Provinces using generalized linear models

    NASA Astrophysics Data System (ADS)

    Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.

    2016-02-01

    Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modeled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modeled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large scale atmospheric covariates from the National Center for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.
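
    The separate occurrence/intensity modelling can be sketched as a two-part GLM: logistic regression for wet/dry days, then a Gamma regression for amounts on wet days. This is a schematic of the general framework, not the paper's full multisite model; fit_precip_glms and the wet-day threshold are illustrative assumptions.

      import numpy as np
      import statsmodels.api as sm

      def fit_precip_glms(precip, covariates, wet_threshold=0.1):
          # precip: (T,) daily amounts; covariates: (T, p) large-scale predictors.
          X = sm.add_constant(covariates)
          wet = (precip > wet_threshold).astype(float)
          occ = sm.GLM(wet, X, family=sm.families.Binomial()).fit()
          Xw, yw = X[wet == 1], precip[wet == 1]
          amt = sm.GLM(yw, Xw, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
          return occ, amt                          # occurrence and intensity models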

  4. Projected changes in precipitation and temperature over the Canadian Prairie Provinces using the Generalized Linear Model statistical downscaling approach

    NASA Astrophysics Data System (ADS)

    Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.

    2016-08-01

    In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the 1962-2005 historical (conditioned on NCEP predictors), and future period (2006-2100) using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models corresponding to Representative Concentration Pathway (RCP): RCP2.6, RCP4.5, and RCP8.5 scenarios. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter while minimum temperature is expected to warm faster than the maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.

  5. General characterization of Tityus fasciolatus scorpion venom. Molecular identification of toxins and localization of linear B-cell epitopes.

    PubMed

    Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C

    2015-06-01

    This communication describes the general characteristics of the venom from the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) that is responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneously). SDS-PAGE of the soluble venom resulted in 10 fractions ranging in size from 6 to 10-80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membrane (spot-synthesis technique). The epitopes were located on the 3D structures and some residues important for structure/function were identified. PMID:25817000

  6. Towards robust topology of sparsely sampled data.

    PubMed

    Correa, Carlos D; Lindstrom, Peter

    2011-12-01

    Sparse, irregular sampling is becoming a necessity for reconstructing large and high-dimensional signals. However, the analysis of this type of data remains a challenge. One issue is the robust selection of neighborhoods--a crucial part of analytic tools such as topological decomposition, clustering and gradient estimation. When extracting the topology of sparsely sampled data, common neighborhood strategies such as k-nearest neighbors may lead to inaccurate results, either due to missing neighborhood connections, which introduce false extrema, or due to spurious connections, which conceal true extrema. Other neighborhoods, such as the Delaunay triangulation, are costly to compute and store even in relatively low dimensions. In this paper, we address these issues. We present two new types of neighborhood graphs: a variation on and a generalization of empty region graphs, which considerably improve the robustness of neighborhood-based analysis tools, such as topological decomposition. Our findings suggest that these neighborhood graphs lead to more accurate topological representations of low- and high-dimensional data sets at relatively low cost, both in terms of storage and computation time. We describe the implications of our work in the analysis and visualization of scalar functions, and provide general strategies for computing and applying our neighborhood graphs towards robust data analysis. PMID:22034302
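
    For concreteness, the classic member of the empty-region-graph family is the Gabriel graph: two points are neighbors iff the ball having their segment as diameter contains no other point. The paper's contribution is variations on and generalizations of such graphs; the sketch below is only the textbook construction (brute force, O(n^3)).

      import numpy as np

      def gabriel_graph(points):
          # points: (n, d) array; returns neighbor pairs (i, j).
          n = len(points)
          edges = []
          for i in range(n):
              for j in range(i + 1, n):
                  mid = (points[i] + points[j]) / 2
                  r2 = np.sum((points[i] - points[j]) ** 2) / 4
                  d2 = np.sum((points - mid) ** 2, axis=1)
                  d2[[i, j]] = np.inf              # the endpoints do not count
                  if np.all(d2 > r2):              # diametral ball is empty
                      edges.append((i, j))
          return edges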

  7. Amesos2 Templated Direct Sparse Solver Package

    Energy Science and Technology Software Center (ESTSC)

    2011-05-24

    Amesos2 is a templated direct sparse solver package. Amesos2 provides interfaces to direct sparse solvers, rather than providing native solver capabilities. Amesos2 is a derivative work of the Trilinos package Amesos.

  8. SparsePZ: Sparse Representation of Photometric Redshift PDFs

    NASA Astrophysics Data System (ADS)

    Carrasco Kind, Matias; Brunner, R. J.

    2015-11-01

    SparsePZ uses sparse basis representation to fully represent individual photometric redshift probability density functions (PDFs). This approach requires approximately half the parameters for the same multi-Gaussian fitting accuracy, and has the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function. Only 10-20 points per galaxy are needed to reconstruct both the individual PDFs and the ensemble redshift distribution, N(z), to an accuracy of 99.9 per cent when compared to the one built using the original PDFs computed with a resolution of δz = 0.01, reducing the required storage of 200 original values by a factor of 10-20. This basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution or accuracy.

  9. Sparse, Decorrelated Odor Coding in the Mushroom Body Enhances Learned Odor Discrimination

    PubMed Central

    Lin, Andrew C.; Bygrave, Alexei; de Calignon, Alix; Lee, Tzumin; Miesenböck, Gero

    2014-01-01

    Sparse coding may be a general strategy of neural systems to augment memory capacity. In Drosophila, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. However, it remains untested how sparse coding relates to behavioral performance. Here we demonstrate that sparseness is controlled by a negative feedback circuit between Kenyon cells and the GABAergic anterior paired lateral (APL) neuron. Systematic activation and blockade of each leg of this feedback circuit show that Kenyon cells activate APL and APL inhibits Kenyon cells. Disrupting the Kenyon cell-APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories. PMID:24561998

  10. Galaxy redshift surveys with sparse sampling

    SciTech Connect

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-12-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc^3) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.

  11. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. The paper has three main parts: the parameter estimation procedure, a simulation study, and an application of the model to real data. In the part on parameter estimation, the concepts of thresholds, nested random effects, and the computational algorithm are described. Simulated data are generated under three conditions to assess the effect of different parameter values of the random-effects distributions. The last part is the application of the model to data on poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan), nested in district, and districts (kabupaten) are nested in province. For the simulation results, ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) scales are used. They show that the province parameters have the highest bias, but the most stable RRMSE across all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the model application to the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  12. A revised linear ozone photochemistry parameterization for use in transport and general circulation models: multi-annual simulations

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Teyssèdre, H.

    2007-05-01

    This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of

  13. Sparse Biclustering of Transposable Data

    PubMed Central

    Tan, Kean Ming

    2013-01-01

    We consider the task of simultaneously clustering the rows and columns of a large transposable data matrix. We assume that the matrix elements are normally distributed with a bicluster-specific mean term and a common variance, and perform biclustering by maximizing the corresponding log likelihood. We apply an ℓ1 penalty to the means of the biclusters in order to obtain sparse and interpretable biclusters. Our proposal amounts to a sparse, symmetrized version of k-means clustering. We show that k-means clustering of the rows and of the columns of a data matrix can be seen as special cases of our proposal, and that a relaxation of our proposal yields the singular value decomposition. In addition, we propose a framework for bi-clustering based on the matrix-variate normal distribution. The performances of our proposals are demonstrated in a simulation study and on a gene expression data set. This article has supplementary material online. PMID:25364221
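
    The proposal's "sparse, symmetrized k-means" flavour can be sketched directly: alternate row and column assignments against a mean matrix whose entries are soft-thresholded by the l1 penalty. The following is a simplified illustration of that idea (hypothetical names, fixed iteration count, no convergence checks), not the authors' exact algorithm.

      import numpy as np

      def soft(x, lam):
          return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

      def sparse_bicluster(X, K=3, R=3, lam=5.0, iters=25, rng=np.random.default_rng(0)):
          n, p = X.shape
          rows, cols = rng.integers(K, size=n), rng.integers(R, size=p)
          M = np.zeros((K, R))
          for _ in range(iters):
              for k in range(K):                   # l1-shrunken bicluster means
                  for r in range(R):
                      block = X[rows == k][:, cols == r]
                      if block.size:
                          M[k, r] = soft(block.mean(), lam / block.size)
              # reassign each row, then each column, to its best mean profile
              rows = np.argmin(((X[:, None, :] - M[:, cols][None]) ** 2).sum(2), axis=1)
              cols = np.argmin(((X[:, :, None] - M[rows][:, None, :]) ** 2).sum(0), axis=1)
          return rows, cols, M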

  14. Sparse Biclustering of Transposable Data.

    PubMed

    Tan, Kean Ming; Witten, Daniela M

    2014-01-01

    We consider the task of simultaneously clustering the rows and columns of a large transposable data matrix. We assume that the matrix elements are normally distributed with a bicluster-specific mean term and a common variance, and perform biclustering by maximizing the corresponding log likelihood. We apply an ℓ1 penalty to the means of the biclusters in order to obtain sparse and interpretable biclusters. Our proposal amounts to a sparse, symmetrized version of k-means clustering. We show that k-means clustering of the rows and of the columns of a data matrix can be seen as special cases of our proposal, and that a relaxation of our proposal yields the singular value decomposition. In addition, we propose a framework for bi-clustering based on the matrix-variate normal distribution. The performances of our proposals are demonstrated in a simulation study and on a gene expression data set. This article has supplementary material online. PMID:25364221

  15. Guided wavefield reconstruction from sparse measurements

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Ruzzene, Massimo

    2016-02-01

    Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided wave obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition from LDVs is generally a slow operation due to the fact that the wave propagation to record must be repeated for each point measurement and the initial conditions must be reached between each measurement. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as being the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed using a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave
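
    The l1-minimization at the core of this reconstruction can be illustrated with iterative soft-thresholding (ISTA), a simple stand-in for whatever solver the SWR pipeline uses. Here each column of Phi would hold the wave signature a candidate source predicts at the measured points, and y the sparse LDV measurements; all names are hypothetical.

      import numpy as np

      def ista(Phi, y, lam=0.05, iters=500):
          # Solves min_a ||y - Phi a||^2 / 2 + lam * ||a||_1.
          a = np.zeros(Phi.shape[1])
          step = 1.0 / np.linalg.norm(Phi, 2) ** 2         # 1 / Lipschitz constant
          for _ in range(iters):
              a = a - step * (Phi.T @ (Phi @ a - y))       # gradient step on the fit term
              a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # shrinkage
          return a    # sparse source amplitudes; the wavefield is re-synthesized from them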

  16. Finding communities in sparse networks

    PubMed Central

    Singh, Abhinav; Humphries, Mark D.

    2015-01-01

    Spectral algorithms based on matrix representations of networks are often used to detect communities, but classic spectral methods based on the adjacency matrix and its variants fail in sparse networks. New spectral methods based on non-backtracking random walks have recently been introduced that successfully detect communities in many sparse networks. However, the spectrum of non-backtracking random walks ignores hanging trees in networks that can contain information about their community structure. We introduce the reluctant backtracking operators that explicitly account for hanging trees as they admit a small probability of returning to the immediately previous node, unlike the non-backtracking operators that forbid an immediate return. We show that the reluctant backtracking operators can detect communities in certain sparse networks where the non-backtracking operators cannot, while performing comparably on benchmark stochastic block model networks and real world networks. We also show that the spectrum of the reluctant backtracking operator approximately optimises the standard modularity function. Interestingly, for this family of non- and reluctant-backtracking operators the main determinant of performance on real-world networks is whether or not they are normalised to conserve probability at each node. PMID:25742951
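
    The non-backtracking (Hashimoto) operator lives on directed edges and forbids an immediate return; the reluctant variant described above would instead give that return move a small weight rather than zero. A dense toy construction for small graphs follows, with two-community labels read off the second eigenvector (nodes assumed to be integers 0..n-1; names hypothetical):

      import numpy as np

      def non_backtracking(edges):
          darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
          idx = {d: i for i, d in enumerate(darts)}
          out = {}
          for u, v in darts:
              out.setdefault(u, []).append((u, v))
          B = np.zeros((len(darts), len(darts)))
          for u, v in darts:
              for _, w in out.get(v, []):
                  if w != u:                       # forbid immediate backtracking
                      B[idx[(u, v)], idx[(v, w)]] = 1.0
          return B, darts

      def two_communities(edges, n):
          B, darts = non_backtracking(edges)
          vals, vecs = np.linalg.eig(B)
          v = vecs[:, np.argsort(-vals.real)[1]].real      # second-largest eigenvector
          score = np.zeros(n)
          for (u, w), x in zip(darts, v):
              score[w] += x                        # aggregate dart values onto nodes
          return (score > 0).astype(int)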

  17. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  18. Neonatal Atlas Construction Using Sparse Representation

    PubMed Central

    Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, the anatomical feature constraints on group structure of representations and also the overlapping of neighboring patches are imposed to ensure the anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast, for constructing a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance when applied to spatially normalizing three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases. PMID:24638883

  19. Sparse approximation problem: how rapid simulated annealing succeeds and fails

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-03-01

    Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the G-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
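
    A bare-bones version of simulated annealing for this problem searches over size-K supports, with the least-squares residual on the current support as the energy and single-index swaps as moves; the linear cooling schedule and parameter values below are arbitrary illustrations, not the paper's settings.

      import numpy as np

      def sa_sparse_approx(A, y, K, steps=20_000, T0=1.0, rng=np.random.default_rng(0)):
          n = A.shape[1]

          def energy(S):
              x, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
              return np.sum((A[:, S] @ x - y) ** 2)

          S = list(rng.choice(n, K, replace=False))
          E = energy(S)
          for t in range(steps):
              T = T0 * (1 - t / steps) + 1e-6              # linear cooling
              S_new = S.copy()
              S_new[rng.integers(K)] = rng.integers(n)     # swap one support index
              if len(set(S_new)) < K:
                  continue                                 # reject duplicate indices
              E_new = energy(S_new)
              if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
                  S, E = S_new, E_new                      # Metropolis acceptance
          return sorted(S), E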

  20. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images captured via various sensors of the same scene to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering a great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), and abbreviated to NL_SK_SVD. The performance of the NL_SK_SVD dictionary is applied for image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images, and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation. PMID:26974648

  1. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.

  2. Technical note: A significance test for data-sparse zones in scatter plots

    NASA Astrophysics Data System (ADS)

    Vetrova, V. V.; Bardsley, W. E.

    2012-04-01

    Data-sparse zones in scatter plots of hydrological variables can be of interest in various contexts. For example, a well-defined data-sparse zone may indicate inhibition of one variable by another. It is of interest therefore to determine whether data-sparse regions in scatter plots are of sufficient extent to be beyond random chance. We consider the specific situation of data-sparse regions defined by a linear internal boundary within a scatter plot defined over a rectangular region. An Excel VBA macro is provided for carrying out a randomisation-based significance test of the data-sparse region, taking into account both the within-region number of data points and the extent of the region. Example applications are given with respect to a rainfall time series from Israel and also to validation scatter plots from a seasonal forecasting model for lake inflows in New Zealand.
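
    The logic of such a test is easy to reproduce outside Excel. The sketch below is a Monte Carlo analogue (not the authors' VBA macro): scatter the same number of points uniformly over the bounding rectangle and ask how often the candidate zone is at least as empty as observed. The uniform null and the example boundary line are illustrative assumptions.

      import numpy as np

      def sparse_zone_pvalue(x, y, in_zone, n_sim=10_000, rng=np.random.default_rng(0)):
          # in_zone(xs, ys) -> boolean mask of points inside the candidate zone.
          x, y = np.asarray(x), np.asarray(y)
          observed = in_zone(x, y).sum()
          hits = 0
          for _ in range(n_sim):
              xs = rng.uniform(x.min(), x.max(), len(x))
              ys = rng.uniform(y.min(), y.max(), len(y))
              if in_zone(xs, ys).sum() <= observed:        # as empty or emptier
                  hits += 1
          return (hits + 1) / (n_sim + 1)

      # Example: zone below the (hypothetical) internal boundary y = 2x - 1:
      # p = sparse_zone_pvalue(x, y, lambda xs, ys: ys < 2 * xs - 1)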

  3. Technical Note: A significance test for data-sparse zones in scatter plots

    NASA Astrophysics Data System (ADS)

    Vetrova, V. V.; Bardsley, W. E.

    2012-01-01

    Data-sparse zones in scatter plots of hydrological variables can be of interest in various contexts. For example, a well-defined data-sparse zone may indicate inhibition of one variable by another. It is of interest therefore to determine whether data-sparse regions in scatter plots are of sufficient extent to be beyond random chance. We consider the specific situation of data-sparse regions defined by a linear internal boundary within a scatter plot defined over a rectangular region. An Excel VBA macro is provided for carrying out a randomisation-based significance test of the data-sparse region, taking into account both the within-region number of data points and the extent of the region. Example applications are given with respect to a rainfall time series from Israel and to validation scatter plots from a seasonal forecasting model for lake inflows in New Zealand.

  4. Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing

    PubMed Central

    Yoshida, Ryo; West, Mike

    2010-01-01

    We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices, that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate “artificial” posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data. PMID:20890391

  5. Applicability/evaluation of flux based representations for linear/higher order elements for heat transfer in structures - Generalized gamma(T)-family

    NASA Technical Reports Server (NTRS)

    Namburu, R. R.; Tamma, K. K.

    1991-01-01

    The applicability and evaluation of a generalized gamma(T) family of flux-based representations are examined for two different thermal analysis formulations for structures and materials which exhibit no phase change effects. The so-called H-theta and theta forms are demonstrated for numerous test models and linear and higher-order elements. The results show that the theta form with flux-based representations is generally superior to traditional approaches.

  6. A General Family of Limited Information Goodness-of-Fit Statistics for Multinomial Data

    ERIC Educational Resources Information Center

    Joe, Harry; Maydeu-Olivares, Alberto

    2010-01-01

    Maydeu-Olivares and Joe (J. Am. Stat. Assoc. 100:1009-1020, "2005"; Psychometrika 71:713-732, "2006") introduced classes of chi-square tests for (sparse) multidimensional multinomial data based on low-order marginal proportions. Our extension provides general conditions under which quadratic forms in linear functions of cell residuals are…

  7. Symposium on General Linear Model Approach to the Analysis of Experimental Data in Educational Research (Athens, Georgia, June 29-July 1, 1967). Final Report.

    ERIC Educational Resources Information Center

    Bashaw, W. L., Ed.; Findley, Warren G., Ed.

    This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…

  8. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  9. Optimized sparse-particle aerosol representations for modeling cloud-aerosol interactions

    NASA Astrophysics Data System (ADS)

    Fierce, Laura; McGraw, Robert

    2016-04-01

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the method of moments. Given a set of moment constraints, we show how linear programming can be used to identify collections of sparse particles that approximately maximize distributional entropy. The collections of sparse particles derived from this approach reproduce CCN activity of the exact model aerosol distributions with high accuracy. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy moment-based approach is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a new aerosol simulation scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
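
    A hedged sketch of the linear-programming ingredient: given a few moment constraints on a size grid, an LP bounds an aerosol property over all admissible distributions, and a basic optimal solution is automatically sparse. The grid, moment values, and the thresholded "activated fraction" property are illustrative, not taken from the study.

      import numpy as np
      from scipy.optimize import linprog

      d = np.linspace(0.01, 1.0, 200)           # particle diameter grid (um)
      moments = np.vstack([d**0, d**1, d**2])   # number, mean-diameter, surface-like moments
      b = np.array([1.0, 0.2, 0.06])            # prescribed moment values (illustrative)

      prop = (d > 0.1).astype(float)            # property to bound: fraction of particles > 0.1 um
      # Maximize prop @ w subject to moments @ w = b and w >= 0 (linprog minimizes, so negate).
      res = linprog(-prop, A_eq=moments, b_eq=b, bounds=(0, None), method="highs")
      print(-res.fun)                           # upper bound on the property
      # A basic optimal solution has at most as many nonzeros as constraints (three here):
      print(np.count_nonzero(res.x > 1e-10))    # the sparse 'particle' collection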

  10. Tensor methods for large sparse systems of nonlinear equations

    SciTech Connect

    Bouaricha, A.; Schnabel, R.B.

    1996-12-31

    This paper introduces tensor methods for solving large sparse systems of nonlinear equations. Tensor methods for nonlinear equations were developed in the context of solving small to medium-sized dense problems. They base each iteration on a quadratic model of the nonlinear equations, where the second-order term is selected so that the model requires no more derivative or function information per iteration than standard linear model-based methods, and hardly more storage or arithmetic operations per iteration. Computational experiments on small to medium-sized problems have shown tensor methods to be considerably more efficient than standard Newton-based methods, with a particularly large advantage on singular problems. This paper considers the extension of this approach to solve large sparse problems. The key issue that must be considered is how to make efficient use of sparsity in forming and solving the tensor model problem at each iteration. Accomplishing this turns out to require an entirely new way of solving the tensor model that successfully exploits the sparsity of the Jacobian, whether the Jacobian is nonsingular or singular. We develop such an approach and, based upon it, an efficient tensor method for solving large sparse systems of nonlinear equations. Test results indicate that this tensor method is significantly more efficient and robust than an efficient sparse Newton-based method, in terms of iterations, function evaluations, and execution time.

  11. Spatiotemporal System Identification With Continuous Spatial Maps and Sparse Estimation.

    PubMed

    Aram, Parham; Kadirkamanathan, Visakan; Anderson, Sean R

    2015-11-01

    We present a framework for the identification of spatiotemporal linear dynamical systems. We use a state-space model representation that has the following attributes: 1) the number of spatial observation locations are decoupled from the model order; 2) the model allows for spatial heterogeneity; 3) the model representation is continuous over space; and 4) the model parameters can be identified in a simple and sparse estimation procedure. The model identification procedure we propose has four steps: 1) decomposition of the continuous spatial field using a finite set of basis functions where spatial frequency analysis is used to determine basis function width and spacing, such that the main spatial frequency contents of the underlying field can be captured; 2) initialization of states in closed form; 3) initialization of state-transition and input matrix model parameters using sparse regression-the least absolute shrinkage and selection operator method; and 4) joint state and parameter estimation using an iterative Kalman-filter/sparse-regression algorithm. To investigate the performance of the proposed algorithm we use data generated by the Kuramoto model of spatiotemporal cortical dynamics. The identification algorithm performs successfully, predicting the spatiotemporal field with high accuracy, whilst the sparse regression leads to a compact model. PMID:25647667
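
    Step 3 above (sparse initialization of the state-transition parameters) can be sketched with an off-the-shelf LASSO solver; the synthetic linear dynamics, the scikit-learn usage, and the penalty weight below are illustrative assumptions rather than the authors' implementation.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_states, T = 20, 500
      A_true = 0.5 * np.eye(n_states)            # sparse ground-truth transition matrix
      cols = (np.arange(n_states) + rng.integers(1, n_states, n_states)) % n_states
      A_true[np.arange(n_states), cols] = 0.3    # one off-diagonal coupling per state

      X = np.zeros((T, n_states))
      for t in range(T - 1):                     # simulate x_{t+1} = A x_t + noise
          X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(n_states)

      # One LASSO regression per state recovers a sparse row of the transition matrix.
      A_hat = np.vstack([
          Lasso(alpha=2e-3, fit_intercept=False, max_iter=5000).fit(X[:-1], X[1:, i]).coef_
          for i in range(n_states)
      ])
      print(np.count_nonzero(np.abs(A_hat) > 1e-3), "recovered vs",
            np.count_nonzero(A_true), "true couplings")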

  12. Robust visual tracking of infrared object via sparse representation model

    NASA Astrophysics Data System (ADS)

    Ma, Junkai; Liu, Haibo; Chang, Zheng; Hui, Bin

    2014-11-01

    In this paper, we propose a robust tracking method for infrared objects. We introduce the appearance model and sparse representation into the framework of a particle filter to achieve this goal. The mechanism behind this method is to represent every candidate image patch as a linear combination of bases in the subspace spanned by the target templates. The natural property that, if a candidate image patch is the target, its coefficient vector must be sparse ensures the success of our algorithm. Firstly, the target is indicated manually in the first frame of the video, and the dictionary is constructed from the appearance model of the target templates. Secondly, candidate image patches are selected in the following frames and their sparse coefficient vectors are calculated via an l1-norm minimization algorithm. According to the sparse coefficient vectors, the right candidate is determined as the target. Finally, the target templates are updated dynamically to cope with appearance change during the tracking process. This paper also addresses the problems of scale change and rotation of the target during tracking. Theoretical analysis and experimental results show that the proposed algorithm is effective and robust.

  13. Beam hardening correction for sparse-view CT reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2015-03-01

    Beam hardening, which is caused by spectrum polychromatism of the X-ray beam, may result in various artifacts in the reconstructed image and degrade image quality. The artifacts would be further aggravated for the sparse-view reconstruction due to insufficient sampling data. Considering the advantages of the total-variation (TV) minimization in CT reconstruction with sparse-view data, in this paper, we propose a beam hardening correction method for sparse-view CT reconstruction based on Brabant's modeling. In this correction model for beam hardening, the attenuation coefficient of each voxel at the effective energy is modeled and estimated linearly, and can be applied in an iterative framework, such as simultaneous algebraic reconstruction technique (SART). By integrating the correction model into the forward projector of the algebraic reconstruction technique (ART), the TV minimization can recover images when only a limited number of projections are available. The proposed method does not need prior information about the beam spectrum. Preliminary validation using Monte Carlo simulations indicates that the proposed method can provide better reconstructed images from sparse-view projection data, with effective suppression of artifacts caused by beam hardening. With appropriate modeling of other degrading effects such as photon scattering, the proposed framework may provide a new way for low-dose CT imaging.

  14. Evolutionary induction of sparse neural trees

    PubMed

    Zhang; Ohm; Muhlenbein

    1997-01-01

    This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest. PMID:10021759

  15. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.

  16. Flexible Multilayer Sparse Approximations of Matrices and Applications

    NASA Astrophysics Data System (ADS)

    Le Magoarou, Luc; Gribonval, Remi

    2016-06-01

    The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors. This paper introduces an algorithm aimed at reducing the complexity of applying linear operators in high dimension by approximately factorizing the corresponding matrix into few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail and then demonstrated experimentally on various problems including dictionary learning for image denoising, and the approximation of large matrices arising in inverse problems.
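
    A small illustration of why multilayer sparse factorizations pay off, on a case where the factorization is known in closed form: the Walsh-Hadamard matrix of size 2^n equals a product of n butterfly factors with two nonzeros per row, so applying the factors costs far less than applying the dense matrix. This classical example is in the spirit of the paper; the code below is not the authors' algorithm.

      import numpy as np

      n = 4                                      # matrix size 2**n = 16
      B = np.array([[1.0, 1.0], [1.0, -1.0]])    # 2x2 butterfly block

      factors = [np.kron(np.kron(np.eye(2**i), B), np.eye(2**(n - i - 1)))
                 for i in range(n)]
      H = np.linalg.multi_dot(factors)

      Href = B
      for _ in range(n - 1):                     # Hadamard matrix = n-fold Kronecker power of B
          Href = np.kron(Href, B)
      print(np.allclose(H, Href))                # True: the sparse product is exact

      dense_cost = H.size                        # multiplications to apply H directly
      sparse_cost = sum(np.count_nonzero(F) for F in factors)  # via the sparse factors
      print(dense_cost, sparse_cost)             # 256 vs 128; the gap grows with n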

  17. Latent subspace sparse representation-based unsupervised domain adaptation

    NASA Astrophysics Data System (ADS)

    Shuai, Liu; Sun, Hao; Zhao, Fumin; Zhou, Shilin

    2015-12-01

    In this paper, we introduce and study a novel unsupervised domain adaptation (DA) algorithm, called latent subspace sparse representation based domain adaptation, based on the fact that source and target data lie in different but related low-dimensional subspaces. The key idea is that each point in a union of subspaces can be constructed from a combination of other points in the dataset. In this method, we propose to project the source and target data onto a common latent generalized subspace, which is a union of subspaces of the source and target domains, and to learn the sparse representation in this latent generalized subspace. By employing minimum reconstruction error and maximum mean discrepancy (MMD) constraints, the structures of the source and target domains are preserved and the discrepancy between the two domains is reduced, and both properties are thus reflected in the sparse representation. We then utilize the sparse representation to build a weighted graph which reflects the relationships of points from the different domains (source-source, source-target, and target-target) to predict the labels of the target domain. We also propose an efficient optimization method for the algorithm. Our method does not need to be combined with any classifier and therefore requires no separate training and testing procedures. Various experiments show that the proposed method performs better than competitive state-of-the-art subspace-based domain adaptation methods.

  18. The sparseness of neuronal responses in ferret primary visual cortex.

    PubMed

    Tolhurst, David J; Smyth, Darragh; Thompson, Ian D

    2009-02-25

    Various arguments suggest that neuronal coding of natural sensory stimuli should be sparse (i.e., individual neurons should respond rarely but should respond reliably). We examined sparseness of visual cortical neurons in anesthetized ferret to flashed natural scenes. Response behavior differed widely between neurons. The median firing rate of 4.1 impulses per second was slightly higher than predicted from consideration of metabolic load. Thirteen percent of neurons (12 of 89) responded to <5% of the images, but one-half responded to >25% of images. Multivariate analysis of the range of sparseness values showed that 67% of the variance was accounted for by differing response patterns to moving gratings. Repeat presentation of images showed that response variance for natural images exaggerated sparseness measures; variance was scaled with mean response, but with a lower Fano factor than for the responses to moving gratings. This response variability and the "soft" sparse responses (Rehn and Sommer, 2007) raise the question of what constitutes a reliable neuronal response and imply parallel signaling by multiple neurons. We investigated whether the temporal structure of responses might be reliable enough to give additional information about natural scenes. Poststimulus time histogram shape was similar for "strong" and "weak" stimuli, with no systematic change in first-spike latency with stimulus strength. The variance of first-spike latency for repeat presentations of the same image was greater than the latency variance between images. In general, responses to flashed natural scenes do not seem compatible with a sparse encoding in which neurons fire rarely but reliably. PMID:19244512

  19. Cadmium-hazard mapping using a general linear regression model (Irr-Cad) for rapid risk assessment.

    PubMed

    Simmons, Robert W; Noble, Andrew D; Pongsakul, P; Sukreeyapongse, O; Chinabut, N

    2009-02-01

    Research undertaken over the last 40 years has identified the irrefutable relationship between the long-term consumption of cadmium (Cd)-contaminated rice and human Cd disease. In order to protect public health and livelihood security, the ability to accurately and rapidly determine spatial Cd contamination is of high priority. During 2001-2004, a General Linear Regression Model Irr-Cad was developed to predict the spatial distribution of soil Cd in a Cd/Zn co-contaminated cascading irrigated rice-based system in Mae Sot District, Tak Province, Thailand (Longitude E 98 degrees 59'-E 98 degrees 63' and Latitude N 16 degrees 67'-16 degrees 66'). The results indicate that Irr-Cad accounted for 98% of the variance in mean Field Order total soil Cd. Preliminary validation indicated that Irr-Cad 'predicted' mean Field Order total soil Cd was significantly (p < 0.001) correlated (R (2) = 0.92) with 'observed' mean Field Order total soil Cd values. Field Order is determined by a given field's proximity to primary outlets from in-field irrigation channels and subsequent inter-field irrigation flows. This in turn determines Field Order in Irrigation Sequence (Field Order(IS)). Mean Field Order total soil Cd represents the mean total soil Cd (aqua regia-digested) for a given Field Order(IS). In 2004-2005, Irr-Cad was utilized to evaluate the spatial distribution of total soil Cd in a 'high-risk' area of Mae Sot District. Secondary validation on six randomly selected field groups verified that Irr-Cad predicted mean Field Order total soil Cd and was significantly (p < 0.001) correlated with the observed mean Field Order total soil Cd, with R (2) values ranging from 0.89 to 0.97. The practical applicability of Irr-Cad is in its minimal input requirements, namely the classification of fields in terms of Field Order(IS), strategic sampling of all primary fields and laboratory based determination of total soil Cd (T-Cd(P)) and the use of a weighted coefficient for Cd (Coeff

  20. A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.

    PubMed

    Yang, Jiachen; Ding, Zhiyong; Guo, Fei; Wang, Huogen; Hughes, Nick

    2015-11-01

    In this paper, we investigate the problem of optimizing multivariate performance measures, and propose a novel algorithm for it. Different from traditional machine learning methods, which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to represent the tuple of data points as a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, sparse codes, and parameter of the linear function, we propose a joint optimization problem. In this problem, both the reconstruction error and sparsity of the sparse codes, and the upper bound of the complex loss function, are minimized. Moreover, the upper bound of the loss function is approximated by the sparse codes and the linear function parameter. To optimize this problem, we develop an iterative algorithm based on gradient descent methods to learn the sparse codes and hyper-predictor parameter alternately. Experimental results on some benchmark data sets show the advantage of the proposed methods over other state-of-the-art algorithms. PMID:26291045

  1. Multi-source adaptation joint kernel sparse representation for visual classification.

    PubMed

    Tao, JianWen; Hu, Wenjun; Wen, Shiting

    2016-04-01

    Most of the existing domain adaptation learning (DAL) methods rely on a single source domain to learn a classifier with well-generalized performance for the target domain of interest, which may lead to the so-called negative transfer problem. To this end, many multi-source adaptation methods have been proposed. While the advantages of using multiple source domains of information for establishing an adaptation model have been widely recognized, how to boost the robustness of the computational model for multi-source adaptation learning has only recently received attention. To address this issue and achieve enhanced performance, we propose in this paper a novel algorithm called multi-source Adaptation Regularization Joint Kernel Sparse Representation (ARJKSR) for robust visual classification problems. Specifically, ARJKSR jointly represents the target dataset by a sparse linear combination of training data from each source domain in some optimal Reproducing Kernel Hilbert Space (RKHS), recovered by simultaneously minimizing the inter-domain distribution discrepancy and maximizing the local consistency, whilst constraining the observations from both target and source domains to share their sparse representations. The optimization problem of ARJKSR can be solved using an efficient alternating direction method. Under the ARJKSR framework, we further learn a robust label prediction matrix for the unlabeled instances of the target domain based on the classical graph-based semi-supervised learning (GSSL) diagram, into which multiple Laplacian graphs constructed with the ARJKSR are incorporated. The validity of our method is examined on several visual classification problems. Results demonstrate the superiority of our method in comparison to several state-of-the-art methods. PMID:26894961

  2. Sparse Coding for Alpha Matting

    NASA Astrophysics Data System (ADS)

    Johnson, Jubin; Varnousfaderani, Ehsan Shahrian; Cholakkal, Hisham; Rajan, Deepu

    2016-07-01

    Existing color sampling based alpha matting methods use the compositing equation to estimate alpha at a pixel from pairs of foreground (F) and background (B) samples. The quality of the matte depends on the selected (F,B) pairs. In this paper, the matting problem is reinterpreted as a sparse coding of pixel features, wherein the sum of the codes gives the estimate of the alpha matte from a set of unpaired F and B samples. A non-parametric probabilistic segmentation provides a certainty measure on the pixel belonging to foreground or background, based on which a dictionary is formed for use in sparse coding. By removing the restriction to conform to (F,B) pairs, this method allows for better alpha estimation from multiple F and B samples. The same framework is extended to videos, where the requirement of temporal coherence is handled effectively. Here, the dictionary is formed by samples from multiple frames. A multi-frame graph model, as opposed to a single image as for image matting, is proposed that can be solved efficiently in closed form. Quantitative and qualitative evaluations on a benchmark dataset are provided to show that the proposed method outperforms current state-of-the-art in image and video matting.

  3. Technical note: Acceleration of sparse operations for average-information REML analyses with supernodal methods and sparse-storage refinements.

    PubMed

    Masuda, Y; Aguilar, I; Tsuruta, S; Misztal, I

    2015-10-01

    The objective of this study was to remove bottlenecks generally found in a computer program for average-information REML. The refinements included improvements to setting up the mixed-model equations on a hash table with a faster hash function for sparse matrix storage, changing the sparse structures used in the calculation of traces, and replacing a sparse matrix package using traditional methods (FSPAK) with a new package using supernodal methods (YAMS); the latter package quickly processed sparse matrices containing large, dense blocks. Comparisons included 23 models with data sets from broiler, swine, beef, and dairy cattle. Models included single-trait, multiple-trait, maternal, and random regression models with phenotypic data; selected models used genomic information in a single-step approach. Setting up the mixed-model equations was completed without abnormal termination in all analyses. Calculations of traces were accelerated with the hash format, especially for models with a genomic relationship matrix, and the maximum speedup was 67-fold. Computations with YAMS were, on average, more than 10 times faster than with FSPAK and had greater advantages for large data and more complicated models including multiple traits, random regressions, and genomic effects. These refinements can be applied to general average-information REML programs. PMID:26523559

  4. Discovering governing equations from data by sparse identification of nonlinear dynamical systems.

    PubMed

    Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2016-04-12

    Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing. PMID:27035946
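
    The core of the method is a sparse regression over a library of candidate functions; a compact sketch of sequentially thresholded least squares on the Lorenz system (which the paper also uses) follows, with exact derivatives substituted for numerically differentiated data for simplicity.

      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      sol = solve_ivp(lorenz, (0, 20), [-8.0, 7.0, 27.0],
                      t_eval=np.arange(0.0, 20.0, 0.002))
      X = sol.y.T
      dX = np.array([lorenz(0.0, s) for s in X])   # exact derivatives, for simplicity

      x, y, z = X.T                                # candidate library: polynomials to degree 2
      Theta = np.column_stack([np.ones(len(X)), x, y, z,
                               x * x, x * y, x * z, y * y, y * z, z * z])

      def stlsq(Theta, dX, lam=0.1, iters=10):
          # Least squares, then repeatedly zero small coefficients and re-fit the rest.
          Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
          for _ in range(iters):
              Xi[np.abs(Xi) < lam] = 0.0
              for j in range(dX.shape[1]):
                  keep = np.abs(Xi[:, j]) >= lam
                  if keep.any():
                      Xi[keep, j] = np.linalg.lstsq(Theta[:, keep], dX[:, j], rcond=None)[0]
          return Xi

      print(np.round(stlsq(Theta, dX), 3))         # the surviving terms match the Lorenz equations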

  5. Discovering governing equations from data by sparse identification of nonlinear dynamical systems

    PubMed Central

    Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan

    2016-01-01

    Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing. PMID:27035946

  6. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.

    PubMed

    Fan, Jianqing; Feng, Yang; Song, Rui

    2011-06-01

    A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening is called NIS, a specific member of the sure independence screening. Several closely related variable screening procedures are proposed. Under general nonparametric models, it is shown that under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, a data-driven thresholding and an iterative nonparametric independence screening (INIS) are also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods. PMID:22279246

  7. Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the finite element method, which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely-used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
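
    For concreteness, a minimal compressed-sparse-row (CSR) matrix-vector multiply, the SPMV kernel referred to above, is sketched below; SciPy is used only to build a reference CSR structure and to check the result.

      import numpy as np
      from scipy import sparse

      def spmv_csr(data, indices, indptr, x):
          # y = A @ x for A stored in compressed sparse row (CSR) form.
          y = np.zeros(len(indptr) - 1)
          for i in range(len(y)):                     # loop over rows
              for j in range(indptr[i], indptr[i + 1]):
                  y[i] += data[j] * x[indices[j]]     # only stored nonzeros are touched
          return y

      A = sparse.random(50, 50, density=0.05, format="csr", random_state=0)
      x = np.random.default_rng(0).standard_normal(50)
      print(np.allclose(spmv_csr(A.data, A.indices, A.indptr, x), A @ x))  # True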

  8. Compressive Sensing Based Design of Sparse Tripole Arrays

    PubMed Central

    Hawes, Matthew; Liu, Wei; Mihaylova, Lyudmila

    2015-01-01

    This paper considers the problem of designing sparse linear tripole arrays. In such arrays at each antenna location there are three orthogonal dipoles, allowing full measurement of both the horizontal and vertical components of the received waveform. We formulate this problem from the viewpoint of Compressive Sensing (CS). However, unlike for isotropic array elements (single antenna), we now have three complex valued weight coefficients associated with each potential location (due to the three dipoles), which have to be simultaneously minimised. If this is not done, we may only set the weight coefficients of individual dipoles to be zero valued, rather than complete tripoles, meaning some dipoles may remain at each location. Therefore, the contributions of this paper are to formulate the design of sparse tripole arrays as an optimisation problem, and then we obtain a solution based on the minimisation of a modified l1 norm or a series of iteratively solved reweighted minimisations, which ensure a truly sparse solution. Design examples are provided to verify the effectiveness of the proposed methods and show that a good approximation of a reference pattern can be achieved using fewer tripoles than a Uniform Linear Array (ULA) of equivalent length. PMID:26690436
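
    The iteratively reweighted minimisation mentioned above can be sketched in a generic real-valued setting (no beam-pattern or tripole structure): each pass solves an l1 problem as a linear program and then reweights so that small entries are pushed to exactly zero. The sizes, the reweighting constant, and the LP formulation are illustrative assumptions.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      m, n, k = 25, 80, 4                        # measurements, candidate weights, true sparsity
      A = rng.standard_normal((m, n))
      x0 = np.zeros(n)
      x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      b = A @ x0

      w = np.ones(n)
      for _ in range(5):
          # min sum(w * |x|) s.t. Ax = b, as an LP over the split x = u - v with u, v >= 0.
          res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=b,
                        bounds=(0, None), method="highs")
          x = res.x[:n] - res.x[n:]
          w = 1.0 / (np.abs(x) + 1e-3)           # reweight: small entries are penalised harder
      print(np.flatnonzero(np.abs(x) > 1e-6))    # recovered support
      print(np.flatnonzero(x0))                  # planted support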

  9. Compressive Sensing Based Design of Sparse Tripole Arrays.

    PubMed

    Hawes, Matthew; Liu, Wei; Mihaylova, Lyudmila

    2015-01-01

    This paper considers the problem of designing sparse linear tripole arrays. In such arrays at each antenna location there are three orthogonal dipoles, allowing full measurement of both the horizontal and vertical components of the received waveform. We formulate this problem from the viewpoint of Compressive Sensing (CS). However, unlike for isotropic array elements (single antenna), we now have three complex valued weight coefficients associated with each potential location (due to the three dipoles), which have to be simultaneously minimised. If this is not done, we may only set the weight coefficients of individual dipoles to be zero valued, rather than complete tripoles, meaning some dipoles may remain at each location. Therefore, the contributions of this paper are to formulate the design of sparse tripole arrays as an optimisation problem, and then we obtain a solution based on the minimisation of a modified l1 norm or a series of iteratively solved reweighted minimisations, which ensure a truly sparse solution. Design examples are provided to verify the effectiveness of the proposed methods and show that a good approximation of a reference pattern can be achieved using fewer tripoles than a Uniform Linear Array (ULA) of equivalent length. PMID:26690436

  10. Towards robust and effective shape modeling: sparse shape composition.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2012-01-01

    Organ shape plays an important role in various clinical practices, e.g., diagnosis, surgical planning and treatment evaluation. It is usually derived from low level appearance cues in medical images. However, due to diseases and imaging artifacts, low level appearance cues might be weak or misleading. In this situation, shape priors become critical to infer and refine the shape derived by image appearances. Effective modeling of shape priors is challenging because: (1) shape variation is complex and cannot always be modeled by a parametric probability distribution; (2) a shape instance derived from image appearance cues (input shape) may have gross errors; and (3) local details of the input shape are difficult to preserve if they are not statistically significant in the training data. In this paper we propose a novel Sparse Shape Composition model (SSC) to deal with these three challenges in a unified framework. In our method, a sparse set of shapes in the shape repository is selected and composed together to infer/refine an input shape. The a priori information is thus implicitly incorporated on-the-fly. Our model leverages two sparsity observations of the input shape instance: (1) the input shape can be approximately represented by a sparse linear combination of shapes in the shape repository; (2) parts of the input shape may contain gross errors but such errors are sparse. Our model is formulated as a sparse learning problem. Using L1 norm relaxation, it can be solved by an efficient expectation-maximization (EM) type of framework. Our method is extensively validated on two medical applications, 2D lung localization in X-ray images and 3D liver segmentation in low-dose CT scans. Compared to state-of-the-art methods, our model exhibits better performance in both studies. PMID:21963296

  11. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution insures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least squares at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.

  12. Image fusion using sparse overcomplete feature dictionaries

    SciTech Connect

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  13. A performance study of sparse Cholesky factorization on INTEL iPSC/860

    NASA Technical Reports Server (NTRS)

    Zubair, M.; Ghose, M.

    1992-01-01

    The problem of Cholesky factorization of a sparse matrix has been very well investigated on sequential machines. A number of efficient codes exist for factorizing large unstructured sparse matrices. However, there is a lack of such efficient codes on parallel machines in general, and distributed machines in particular. Some of the issues that are critical to the implementation of sparse Cholesky factorization on a distributed memory parallel machine are ordering, partitioning and mapping, load balancing, and ordering of various tasks within a processor. Here, we focus on the effect of various partitioning schemes on the performance of sparse Cholesky factorization on the Intel iPSC/860. Also, a new partitioning heuristic for structured as well as unstructured sparse matrices is proposed, and its performance is compared with other schemes.

  14. Accelerated Gibbs Sampling for Infinite Sparse Factor Analysis

    SciTech Connect

    Andrzejewski, D M

    2011-09-12

    The Indian Buffet Process (IBP) gives a probabilistic model of sparse binary matrices with an unbounded number of columns. This construct can be used, for example, to model a fixed number of observed data points (rows) associated with an unknown number of latent features (columns). Markov Chain Monte Carlo (MCMC) methods are often used for IBP inference, and in this technical note, we provide a detailed review of the derivations of collapsed and accelerated Gibbs samplers for the linear-Gaussian infinite latent feature model. We also discuss and explain update equations for hyperparameter resampling in a 'full Bayesian' treatment and present a novel slice sampler capable of extending the accelerated Gibbs sampler to the case of infinite sparse factor analysis by allowing the use of real-valued latent features.

  15. Sparse Canonical Correlation Analysis: New Formulation and Algorithm.

    PubMed

    Chu, Delin; Liao, Li-Zhi; Ng, Michael K; Zhang, Xiaowei

    2013-05-24

    In this paper, we study canonical correlation analysis (CCA), which has become a powerful tool in multivariate data analysis for finding the correlations between two sets of multidimensional variables. The main contributions of the paper are: (i) to reveal the equivalent relationship between a recursive formula and a trace formula for the multiple CCA problem; (ii) to obtain the explicit characterization of all solutions for the multiple CCA problem even when the covariance matrices are singular; (iii) to develop a new sparse CCA algorithm; and (iv) to establish the equivalent relationship between the uncorrelated linear discriminant analysis and the CCA problem. We test several simulated and real world data sets in gene classification and cross-language document retrieval to demonstrate the effectiveness of the proposed algorithm. The performance of the proposed method is competitive with the state-of-the-art sparse CCA algorithms. PMID:23712996

  16. Sparse canonical correlation analysis: new formulation and algorithm.

    PubMed

    Chu, Delin; Liao, Li-Zhi; Ng, Michael K; Zhang, Xiaowei

    2013-12-01

    In this paper, we study canonical correlation analysis (CCA), which is a powerful tool in multivariate data analysis for finding the correlation between two sets of multidimensional variables. The main contributions of the paper are: 1) to reveal the equivalent relationship between a recursive formula and a trace formula for the multiple CCA problem, 2) to obtain the explicit characterization for all solutions of the multiple CCA problem even when the corresponding covariance matrices are singular, 3) to develop a new sparse CCA algorithm, and 4) to establish the equivalent relationship between the uncorrelated linear discriminant analysis and the CCA problem. We test several simulated and real-world datasets in gene classification and cross-language document retrieval to demonstrate the effectiveness of the proposed algorithm. The performance of the proposed method is competitive with the state-of-the-art sparse CCA algorithms. PMID:24136440

  17. A parallel sparse algorithm targeting arterial fluid mechanics computations

    NASA Astrophysics Data System (ADS)

    Manguoglu, Murat; Takizawa, Kenji; Sameh, Ahmed H.; Tezduyar, Tayfun E.

    2011-09-01

    Iterative solution of large sparse nonsymmetric linear equation systems is one of the numerical challenges in arterial fluid-structure interaction computations. This is because the fluid mechanics parts of the fluid + structure block of the equation system that needs to be solved at every nonlinear iteration of each time step corresponds to incompressible flow, the computational domains include slender parts, and accurate wall shear stress calculations require boundary layer mesh refinement near the arterial walls. We propose a hybrid parallel sparse algorithm, domain-decomposing parallel solver (DDPS), to address this challenge. As the test case, we use a fluid mechanics equation system generated by starting with an arterial shape and flow field coming from an FSI computation and performing two time steps of fluid mechanics computation with a prescribed arterial shape change, also coming from the FSI computation. We show how the DDPS algorithm performs in solving the equation system and demonstrate the scalability of the algorithm.

  18. Parallel sparse matrix computations: Wavefront minimization of sparse matrices. Final report for the period ending June 14, 1998

    SciTech Connect

    Pothen, A.

    1999-02-01

    Gary Kumfert and Alex Pothen have improved the quality and run time of two ordering algorithms for minimizing the wavefront and envelope size of sparse matrices and graphs. These algorithms compute orderings for irregular data structures (e.g., unstructured meshes) that reduce the number of cache misses on modern workstation architectures. They have completed the implementation of a parallel solver for sparse, symmetric indefinite systems for distributed memory computers such as the IBM SP-2. The indefiniteness requires one to incorporate block pivoting (2 by 2 blocks) in the algorithm, thus demanding dynamic, parallel data structures. This is the first reported parallel solver for the indefinite problem. Direct methods for solving systems of linear equations employ sophisticated combinatorial and algebraic algorithms that contribute to software complexity, and hence it is natural to consider object-oriented design (OOD) in this context. The authors have continued to create software for solving sparse systems of linear equations by direct methods employing OOD. Fast computation of robust preconditioners is a priority for solving large systems of equations on unstructured grids and in other applications. They have developed new algorithms and software that can compute incomplete factorization preconditioners for high level fill in time proportional to the number of floating point operations and memory accesses.

  19. Non-linear oscillation of inter-connected satellites system under the combined influence of the solar radiation pressure and dissipative force of general nature

    NASA Astrophysics Data System (ADS)

    Sharma, S.; Narayan, A.

    2001-06-01

    The non-linear oscillation of an inter-connected satellites system about its equilibrium position in the neighbourhood of the main resonance, under the combined effects of the solar radiation pressure and dissipative forces of general nature, has been discussed. It is found that the oscillation of the system gets disturbed when the frequency of the natural oscillation approaches the resonance frequency.

  20. Sparse and stable Markowitz portfolios

    PubMed Central

    Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace

    2009-01-01

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only a few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolios as special cases, but also allows for a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio. PMID:19617537
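
    A minimal sketch of the penalised criterion described above, assuming CVXPY as a generic convex solver; the synthetic return matrix, the target series, and the penalty weight tau are illustrative stand-ins for the Fama-French data.

      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(0)
      T, n = 250, 40                             # trading days, assets
      R = 0.001 + 0.02 * rng.standard_normal((T, n))   # synthetic daily returns
      target = 0.002 * np.ones(T)                # desired daily portfolio return

      w = cp.Variable(n)
      tau = 0.05                                 # l1 penalty weight: larger => sparser book
      objective = cp.Minimize(cp.sum_squares(R @ w - target) / T + tau * cp.norm1(w))
      problem = cp.Problem(objective, [cp.sum(w) == 1])  # fully-invested constraint
      problem.solve()
      print(int(np.sum(np.abs(w.value) > 1e-4)), "active positions out of", n)
      # Since sum(w) = 1 forces norm1(w) >= 1, a large enough tau drives all weights
      # nonnegative, recovering the no-short-positions portfolio noted in the abstract.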

  1. Multiple kernel sparse representations for supervised and unsupervised learning.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2014-07-01

    In complex visual recognition tasks, it is typical to adopt multiple descriptors, which describe different aspects of the images, for obtaining an improved recognition performance. Descriptors that have diverse forms can be fused into a unified feature space in a principled manner using kernel methods. Sparse models that generalize well to the test data can be learned in the unified kernel space, and appropriate constraints can be incorporated for application in supervised and unsupervised learning. In this paper, we propose to perform sparse coding and dictionary learning in the multiple kernel space, where the weights of the ensemble kernel are tuned based on graph-embedding principles such that class discrimination is maximized. In our proposed algorithm, dictionaries are inferred using multiple levels of 1D subspace clustering in the kernel space, and the sparse codes are obtained using a simple levelwise pursuit scheme. Empirical results for object recognition and image clustering show that our algorithm outperforms existing sparse coding based approaches, and compares favorably to other state-of-the-art methods. PMID:24833593

  2. Feature selection and multi-kernel learning for sparse representation on a manifold.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. PMID:24333479

  3. Index statistical properties of sparse random graphs

    NASA Astrophysics Data System (ADS)

    Metz, F. L.; Stariolo, Daniel A.

    2015-10-01

    Using the replica method, we develop an analytical approach to compute the characteristic function for the probability PN(K, λ) that a large N × N adjacency matrix of a sparse random graph has K eigenvalues below a threshold λ. The method allows one to determine, in principle, all moments of PN(K, λ), from which the typical sample-to-sample fluctuations can be fully characterized. For random graph models with localized eigenvectors, we show that the index variance scales linearly with N ≫ 1 for |λ| > 0, with a model-dependent prefactor that can be exactly calculated. Explicit results are discussed for Erdős-Rényi and regular random graphs, both exhibiting a prefactor with a nonmonotonic behavior as a function of λ. These results contrast with rotationally invariant random matrices, where the index variance scales only as ln N, with a universal prefactor that is independent of λ. Numerical diagonalization results confirm the exactness of our approach and, in addition, strongly support the Gaussian nature of the index fluctuations.

  4. Multivariate General Linear Models (MGLM) on Riemannian Manifolds with Applications to Statistical Analysis of Diffusion Weighted Images

    PubMed Central

    Kim, Hyunwoo J.; Adluru, Nagesh; Collins, Maxwell D.; Chung, Moo K.; Bendlin, Barbara B.; Johnson, Sterling C.; Davidson, Richard J.; Singh, Vikas

    2014-01-01

    Linear regression is a parametric model which is ubiquitous in scientific analysis. The classical setup where the observations and responses, i.e., (xi, yi) pairs, are Euclidean is well studied. The setting where yi is manifold valued is a topic of much interest, motivated by applications in shape analysis, topic modeling, and medical imaging. Recent work gives strategies for max-margin classifiers, principal components analysis, and dictionary learning on certain types of manifolds. For parametric regression specifically, results within the last year provide mechanisms to regress one real-valued parameter, xi ∈ R, against a manifold-valued variable, yi ∈ M (where M denotes the manifold). We seek to substantially extend the operating range of such methods by deriving schemes for multivariate multiple linear regression: a manifold-valued dependent variable against multiple independent variables, i.e., f : R^n → M. Our variational algorithm efficiently solves for multiple geodesic bases on the manifold concurrently via gradient updates. This allows us to answer questions such as: what is the relationship of the measurement at voxel y to disease when conditioned on age and gender. We show applications to statistical analysis of diffusion weighted images, which give rise to regression tasks on the manifold GL(n)/O(n) for diffusion tensor images (DTI) and the Hilbert unit sphere for orientation distribution functions (ODF) from high angular resolution acquisition. The companion open-source code is available on nitrc.org/projects/riem_mglm. PMID:25580070

  5. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a sparse representation of the data from an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of calculation in radar imaging.

  6. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is placed on Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
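
    The flavor of such computations is easy to reproduce with modern sparse libraries. Below is a minimal sketch, not the CRAY/Alliant implementations described above, that extracts a few of the largest singular triplets of a random sparse stand-in for a term-document matrix using SciPy's iterative svds solver; the matrix dimensions and density are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Hypothetical stand-in for a sparse term-document matrix:
# 5000 terms x 2000 documents, ~0.1% nonzeros.
A = sparse_random(5000, 2000, density=0.001, random_state=0)

# Compute the k largest singular triplets with an iterative
# (Lanczos-type) solver that only needs matrix-vector products.
k = 10
U, s, Vt = svds(A, k=k)

# svds does not guarantee a descending order; sort ourselves.
order = np.argsort(s)[::-1]
U, s, Vt = U[:, order], s[order], Vt[order, :]
print("largest singular values:", s)
```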

  7. Dictionary construction in sparse methods for image restoration

    SciTech Connect

    Wohlberg, Brendt

    2010-01-01

    Sparsity-based methods have achieved very good performance in a wide variety of image restoration problems, including denoising, inpainting, super-resolution, and source separation. These methods are based on the assumption that the image to be reconstructed may be represented as a superposition of a few known components, and the appropriate linear combination of components is estimated by solving an optimization problem such as Basis Pursuit De-Noising (BPDN). Considering that the K-SVD constructs a dictionary which has been optimised for mean performance over a training set, it is not too surprising that better performance can be achieved by selecting a custom dictionary for each individual block to be reconstructed. The nearest neighbor dictionary construction can be understood geometrically as a method for estimating the local projection onto the manifold of image blocks, whereas the K-SVD dictionary makes more sense within a source-coding framework (it is presented as a generalization of the k-means algorithm for constructing a VQ codebook) and is therefore, it could be argued, less appropriate in principle for reconstruction problems. One can, of course, motivate the use of the K-SVD in reconstruction applications on practical grounds, avoiding the computational expense of constructing a different dictionary for each block to be denoised. Since the performance of the nearest neighbor dictionary decreases when the dictionary becomes sufficiently large, this method is also superior to the approach of utilizing the entire training set as a dictionary (and this can also be understood within the image block manifold model). In practical terms, the tradeoff is between the computational cost of a nearest neighbor search (which can be performed very efficiently) and the increased cost of the sparse optimization.
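
    A minimal sketch of the per-block nearest neighbor dictionary idea, under illustrative assumptions: 8x8 image blocks, a synthetic training set, and orthogonal matching pursuit standing in for BPDN as the sparse solver.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
train = rng.standard_normal((10000, 64))   # training image blocks (rows)
block = rng.standard_normal(64)            # block to reconstruct

# Build a custom dictionary for this block: its k nearest training blocks.
k = 128
nn = NearestNeighbors(n_neighbors=k).fit(train)
_, idx = nn.kneighbors(block.reshape(1, -1))
D = train[idx[0]].T                        # 64 x k dictionary, one atom per column
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms

# Sparse-code the block over its local dictionary (OMP stands in for BPDN).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False).fit(D, block)
recon = D @ omp.coef_
print("residual norm:", np.linalg.norm(block - recon))
```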

  8. The method of realizing the three-dimension positioning based on linear CCD sensor in general DSP chip.

    PubMed

    Wu, Jian; Wen, Qiuting

    2008-01-01

    Optical positioning systems are an important part of computer aided surgery systems. Building on previous research into a three linear CCD positioning system prototype, this paper proposes a new way to implement three-dimensional coordinate reconstruction of a marker in a digital signal processor rather than in a computer as before. Experiments were designed to calculate the markers' three-dimensional coordinates in the DSP chip and in the computer respectively; the reconstruction results showed that the calculation precision of the DSP chip and of the computer differed by less than the 0.01 mm error limit. Furthermore, implementing the three-dimensional coordinate reconstruction in the DSP chip improves the stability of the optical positioning system and makes the calculation largely independent of external hardware, no longer depending on computer processing as before. PMID:19163161

  9. Sparse Reconstruction Techniques in Magnetic Resonance Imaging: Methods, Applications, and Challenges to Clinical Adoption.

    PubMed

    Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole

    2016-06-01

    The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227

  10. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (1000 bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, and in adaptive control of automated equipment, and, in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues faced in building such memories were resolved. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
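
    The core read/write mechanics are compact enough to sketch. The toy below, with dimensions shrunk from the 256-bit prototype for readability and all sizes illustrative, stores counters at randomly chosen hard locations and activates every location within a Hamming radius of the address:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_locations, radius = 64, 2000, 24

hard_addresses = rng.integers(0, 2, (n_locations, n_bits))  # fixed random addresses
counters = np.zeros((n_locations, n_bits), dtype=int)       # one counter per bit

def activated(address):
    # Locations whose hard address is within Hamming distance `radius`.
    return np.sum(hard_addresses != address, axis=1) <= radius

def write(address, word):
    act = activated(address)
    counters[act] += np.where(word == 1, 1, -1)             # increment/decrement

def read(address):
    act = activated(address)
    return (counters[act].sum(axis=0) > 0).astype(int)      # majority vote per bit

word = rng.integers(0, 2, n_bits)
write(word, word)                                           # autoassociative store
noisy = word.copy()
noisy[rng.choice(n_bits, 5, replace=False)] ^= 1            # flip 5 address bits
print("recovered exactly:", np.array_equal(read(noisy), word))
```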

  11. Imputation of Continuous Tree Suitability over the Continental United States from Sparse Measurements Using Associative Clustering

    NASA Astrophysics Data System (ADS)

    Hargrove, W. W.; Kumar, J.; Hoffman, F. M.; Potter, K. M.; Mills, R. T.

    2012-12-01

    Up-scaling from sparse measurements to a continuous raster of estimated values is a common problem in Earth System Science. We present a new general-purpose empirical imputation method based on associative clustering, which associates sparse measurements of dependent variables with particular multivariate clustered combinations of the independent variables, and then uses several methods to estimate values for unmeasured clusters, based on directional proximity in multidimensional data space, at both the cluster and map cell levels of resolution. We demonstrate this new imputation tool on tree species range distribution maps, which describe the suitable extent and expected growth performance of a particular tree species over a wide area. Range maps having continuous estimates of tree growth performance are more useful than more classical tree range maps that simply show binary occurrence suitability. The USDA Forest Service Forest Inventory Assessment (FIA) plots provide information about the occurrence and growth performance of various tree species across the US, but such measurements are limited to FIA plots. Using Associative Clustering, we scale up the discontinuous FIA Inventory growth measurements into continuous maps that show the expected growth and suitability for individual tree species covering the Continental United States. A multivariate cluster analysis was applied to global output from a General Circulation Model (GCM) consisting of 17 variables downscaled to 4 km2 resolution. Present global growing conditions were divided into 30 thousand relatively homogeneous ecoregions describing climatic and topographic conditions. At every map cell a multi-linear regression was applied in 17-dimensional hyperspace to derive the suitability of a tree species where it was not measured using the forest inventory data. The continuous species distribution maps obtained were compared and validated against existing tree range suitability maps. Associative Clustering is intended

  12. Wronskian solutions of the T-, Q- and Y-systems related to infinite dimensional unitarizable modules of the general linear superalgebra gl (M | N)

    NASA Astrophysics Data System (ADS)

    Tsuboi, Zengo

    2013-05-01

    In [1] (Z. Tsuboi, Nucl. Phys. B 826 (2010) 399, arXiv:0906.2039), we proposed Wronskian-like solutions of the T-system for the [M, N]-hook of the general linear superalgebra gl(M|N). We have generalized these Wronskian-like solutions to the ones for the general T-hook, which is a union of the [M1, N1]-hook and the [M2, N2]-hook (M = M1 + M2, N = N1 + N2). These solutions are related to Weyl-type supercharacter formulas of infinite dimensional unitarizable modules of gl(M|N). Our solutions also include a Wronskian-like solution discussed in [2] (N. Gromov, V. Kazakov, S. Leurent, Z. Tsuboi, JHEP 1101 (2011) 155, arXiv:1010.2720) in relation to the AdS5/CFT4 spectral problem.

  13. Systematic wave-equation finite difference time domain formulations for modeling electromagnetic wave-propagation in general linear and nonlinear dispersive materials

    NASA Astrophysics Data System (ADS)

    Ramadan, Omar

    2015-09-01

    In this paper, systematic wave-equation finite difference time domain (WE-FDTD) formulations are presented for modeling electromagnetic wave propagation in linear and nonlinear dispersive materials. In the proposed formulations, the complex conjugate pole residue (CCPR) pairs model is adopted in deriving a unified dispersive WE-FDTD algorithm that allows modeling different dispersive materials, such as Debye, Drude and Lorentz, in the same manner with minimal additional auxiliary variables. Moreover, the proposed formulations are incorporated with the wave-equation perfectly matched layer (WE-PML) to construct a material-independent mesh truncating technique that can be used for modeling general frequency-dependent open region problems. Several numerical examples involving linear and nonlinear dispersive materials are included to show the validity of the proposed formulations.

  14. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustino, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) are presented, and the equation of the forward radiative transfer (RT) problem is given. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but we need only two numerical solutions, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs we can think of. In this presentation we discuss applications of both the linearization and adjoint approaches.

  15. Methods to adjust for misclassification in the quantiles for the generalized linear model with measurement error in continuous exposures.

    PubMed

    Wang, Ching-Yun; Dieu Tapsoba, Jean De; Duggan, Catherine; Campbell, Kristin L; McTiernan, Anne

    2016-05-10

    In many biomedical studies, covariates of interest may be measured with errors. However, in a regression analysis the quantiles of the exposure variable are often used as the covariates. Because of measurement errors in the continuous exposure variable, there could be misclassification in the quantiles for the exposure variable. Misclassification in the quantiles could lead to biased estimation of the association between the exposure variable and the outcome variable. Adjustment for misclassification will be challenging when the gold standard variables are not available. In this paper, we develop two regression calibration estimators to reduce bias in effect estimation. The first estimator is normal likelihood-based. The second estimator is linearization-based, and it provides a simple and practical correction. Finite sample performance is examined via a simulation study. We apply the methods to a four-arm randomized clinical trial that tested exercise and weight loss interventions in women aged 50-75 years. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26593772

  16. Polynomial approximation of functions of matrices and its application to the solution of a general system of linear equations

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1987-01-01

    During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N x N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly. Usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). It is reduced to a problem of approximating f(z) by a polynomial in z, where z belongs to the domain D in the complex plane which includes all the eigenvalues of A. This problem of approximation is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementing the algorithm for some practical problems is described. Since the solution to a linear system Ax = b is x = A^(-1)b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
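
    For a symmetric matrix whose spectrum lies in a known interval, the idea is easy to demonstrate: interpolate f on that interval by a Chebyshev polynomial and evaluate the polynomial in A acting on v via the three-term recurrence, using only matrix-vector products. A minimal sketch (assuming f = exp and a known spectral interval [a, b]; this illustrates the general idea, not the paper's near-optimal interpolation points):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.linalg import expm

def cheb_matfunc_apply(f, A, v, a, b, deg=30):
    """Approximate f(A) @ v for symmetric A with eigenvalues in [a, b]."""
    # Chebyshev coefficients of f composed with the map [-1, 1] -> [a, b].
    g = lambda t: f(0.5 * (b - a) * t + 0.5 * (b + a))
    c = C.chebinterpolate(g, deg)
    # Three-term recurrence for T_k(B) @ v with B = (2A - (a+b)I)/(b-a).
    Bv = lambda x: (2.0 * (A @ x) - (a + b) * x) / (b - a)
    t_prev, t_curr = v, Bv(v)
    y = c[0] * t_prev + c[1] * t_curr
    for k in range(2, deg + 1):
        t_prev, t_curr = t_curr, 2.0 * Bv(t_curr) - t_prev
        y += c[k] * t_curr
    return y

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = (M + M.T) / (2 * np.sqrt(200))           # symmetric, spectrum of order 1
lo, hi = np.linalg.eigvalsh(A)[[0, -1]]      # spectral bounds (known here)
v = rng.standard_normal(200)
approx = cheb_matfunc_apply(np.exp, A, v, lo, hi)
exact = expm(A) @ v
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```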

  17. A general approach to the localization of antigenic determinants of a linear type in proteins of unknown primary structure.

    PubMed

    Beresten, S F; Rubikaite, B I; Kisselev, L L

    1988-10-26

    A method is proposed which permits the localization of antigenic determinants of a linear type on the polypeptide chain of a protein molecule of unknown primary structure. An antigen modified with maleic anhydride at the amino-terminal groups and at the epsilon-NH2 groups of lysine residues was subjected to partial enzymic digestion, so that the antigenic protein had, on average, less than one cleavage site per polypeptide chain. The resultant ends were labeled with 125I-labeled Bolton and Hunter reagent and the maleic group was removed. The detection of the two larger labeled fragments (a longer one which could still bind to a monoclonal antibody and a shorter one which was incapable of binding) made it possible to determine the distance from the antigenic determinant to the C-terminus of the polypeptide chain. The position of the antigenic determinant could be established in more detail by partial chemical degradation of the original antigen, using information about the maximal length of a fragment which has lost its ability to interact with the monoclonal antibody. The method has been applied to bovine tryptophanyl-tRNA synthetase (EC 6.1.1.2). PMID:2459255

  18. Estimating Wind Turbine Inflow Using Sparse Wind Data

    NASA Astrophysics Data System (ADS)

    Rai, Raj; Naughton, Jonathan

    2011-11-01

    An accurate spatially and temporally resolved estimate of the wind inflow under various atmospheric boundary layer stability conditions is useful for several applications relevant to wind turbines. Estimates of a wind inflow plane in a neutrally stable boundary layer using sparse data (temporally resolved but spatially sparse, and spatially resolved but temporally sparse) have shown good agreement with the original data provided by a Large Eddy Simulation. A complementary Proper Orthogonal Decomposition-Linear Stochastic Estimation (POD-LSE) approach has been used for the estimation, in which the POD identifies the energetic modes of the flow that are then used in estimating the time-dependent flow field using LSE. The applicability of such an approach is considered by simulating the estimation of the wind inflow using data collected in the field. Modern remote measurement approaches, such as Lidar (Light detection and ranging), can sample the wind at multiple locations, but cannot resolve the inflow in space and time as finely as required for many wind turbine applications. Since inflow estimation using the POD-LSE approach can simultaneously provide spatial and temporal behavior, the use of the approach with field data for better understanding the characteristics of the wind inflow at a particular site under different atmospheric conditions is demonstrated. Support from a gift from BP is acknowledged.
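
    A schematic of the complementary POD-LSE idea, under simplifying assumptions: snapshot POD via the SVD, a plain least-squares map from a few "sensor" points to the mode coefficients, and purely synthetic data with illustrative dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots, n_modes, n_sensors = 400, 300, 8, 12

# Spatially resolved but temporally sparse training snapshots (columns).
X = rng.standard_normal((n_points, n_snapshots))
mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
Phi = U[:, :n_modes]                     # POD modes (energetic structures)

# Temporally resolved but spatially sparse measurements at sensor points.
sensors = rng.choice(n_points, n_sensors, replace=False)
a_train = Phi.T @ (X - mean)             # mode coefficients of the snapshots
y_train = (X - mean)[sensors]            # sensor readings of the snapshots

# LSE: least-squares linear map from sensor readings to mode coefficients.
L, *_ = np.linalg.lstsq(y_train.T, a_train.T, rcond=None)

def estimate(y_sensors):
    """Estimate the full inflow plane from one sparse sensor sample."""
    return mean[:, 0] + Phi @ (L.T @ y_sensors)

snapshot = X[:, 0]
recon = estimate((snapshot - mean[:, 0])[sensors])
print("reconstruction error:", np.linalg.norm(recon - snapshot))
```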

  19. Sparse principal component analysis by choice of norm

    PubMed Central

    Luo, Ruiyan; Zhao, Hongyu

    2012-01-01

    Recent years have seen the developments of several methods for sparse principal component analysis due to its importance in the analysis of high dimensional data. Despite the demonstration of their usefulness in practical applications, they are limited in terms of lack of orthogonality in the loadings (coefficients) of different principal components, the existence of correlation in the principal components, the expensive computation needed, and the lack of theoretical results such as consistency in high-dimensional situations. In this paper, we propose a new sparse principal component analysis method by introducing a new norm to replace the usual norm in traditional eigenvalue problems, and propose an efficient iterative algorithm to solve the optimization problems. With this method, we can efficiently obtain uncorrelated principal components or orthogonal loadings, and achieve the goal of explaining a high percentage of variations with sparse linear combinations. Due to the strict convexity of the new norm, we can prove the convergence of the iterative method and provide the detailed characterization of the limits. We also prove that the obtained principal component is consistent for a single component model in high dimensional situations. As illustration, we apply this method to real gene expression data with competitive results. PMID:23524453

  20. Inversion of magnetotelluric data in a sparse model domain

    NASA Astrophysics Data System (ADS)

    Nittinger, Christian G.; Becken, Michael

    2016-06-01

    The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least squares ℓ2 sense and of a model coefficient norm in an ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multi-resolution wavelet basis, but does not impose explicit structural penalties on the model as is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the non-linear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We find that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.
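
    The ℓ2-ℓ1 objective in a synthesis basis can be minimized with a simple iterative soft-thresholding scheme. A toy sketch follows, with a linear 1-D forward operator, the DCT standing in for the wavelet basis, and plain ISTA rather than the authors' algorithm; all sizes are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, m = 256, 96
A = rng.standard_normal((m, n)) / np.sqrt(m)    # toy linear forward operator
W = lambda c: idct(c, norm='ortho')             # synthesis: coefficients -> model
Wt = lambda x: dct(x, norm='ortho')             # analysis (orthonormal transform)

c_true = np.zeros(n)
c_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
d = A @ W(c_true) + 0.01 * rng.standard_normal(m)   # noisy data

# ISTA for min_c 0.5*||A W c - d||_2^2 + lam*||c||_1
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(n)
for _ in range(500):
    grad = Wt(A.T @ (A @ W(c) - d))
    z = c - step * grad
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("nonzero coefficients:", np.count_nonzero(np.abs(c) > 1e-3))
```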

  1. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
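
    The "obvious approach" mentioned at the end can be made concrete: split everything into real and imaginary parts and solve the doubled real system with a standard Krylov solver. A small dense-for-clarity sketch (the real formulation shown is one of several possible ones, and the test matrix is illustrative):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 50
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = S + S.T + 10 * np.eye(n)          # complex symmetric (A = A^T), well conditioned
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Equivalent real form: [Re A, -Im A; Im A, Re A] [Re x; Im x] = [Re b; Im b]
R = np.block([[A.real, -A.imag],
              [A.imag,  A.real]])
rhs = np.concatenate([b.real, b.imag])
y, info = gmres(R, rhs)
x = y[:n] + 1j * y[n:]
print("info:", info, " residual:", np.linalg.norm(A @ x - b))
```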

  2. Sparse Spectrotemporal Coding of Sounds

    NASA Astrophysics Data System (ADS)

    Klein, David J.; König, Peter; Körding, Konrad P.

    2003-12-01

    Recent studies of biological auditory processing have revealed that sophisticated spectrotemporal analyses are performed by central auditory systems of various animals. The analysis is typically well matched with the statistics of relevant natural sounds, suggesting that it produces an optimal representation of the animal's acoustic biotope. We address this topic using simulated neurons that learn an optimal representation of a speech corpus. As input, the neurons receive a spectrographic representation of sound produced by a peripheral auditory model. The output representation is deemed optimal when the responses of the neurons are maximally sparse. Following optimization, the simulated neurons are similar to real neurons in many respects. Most notably, a given neuron only analyzes the input over a localized region of time and frequency. In addition, multiple subregions either excite or inhibit the neuron, together producing selectivity to spectral and temporal modulation patterns. This suggests that the brain's solution is particularly well suited for coding natural sound; therefore, it may prove useful in the design of new computational methods for processing speech.

  3. Sparse Bayesian infinite factor models

    PubMed Central

    Bhattacharya, A.; Dunson, D. B.

    2011-01-01

    We focus on sparse modelling of high-dimensional covariance matrices using Bayesian latent factor models. We propose a multiplicative gamma process shrinkage prior on the factor loadings which allows introduction of infinitely many factors, with the loadings increasingly shrunk towards zero as the column index increases. We use our prior on a parameter-expanded loading matrix to avoid the order dependence typical in factor analysis models and develop an efficient Gibbs sampler that scales well as data dimensionality increases. The gain in efficiency is achieved by the joint conjugacy property of the proposed prior, which allows block updating of the loadings matrix. We propose an adaptive Gibbs sampler for automatically truncating the infinite loading matrix through selection of the number of important factors. Theoretical results are provided on the support of the prior and truncation approximation bounds. A fast algorithm is proposed to produce approximate Bayes estimates. Latent factor regression methods are developed for prediction and variable selection in applications with high-dimensional correlated predictors. Operating characteristics are assessed through simulation studies, and the approach is applied to predict survival times from gene expression data. PMID:23049129
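
    The column-wise shrinkage induced by the multiplicative gamma process prior is easy to visualize by sampling from it. A minimal sketch of a prior draw (the hyperparameter values are illustrative, and the local precision terms of the full model are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
p, H = 30, 15            # variables x (truncated) number of factors
a1, a2 = 2.0, 3.0        # gamma hyperparameters for the first and later columns

# Multiplicative gamma process: the precision of column h is a cumulative
# product, so it grows (and loadings shrink) as the column index increases.
delta = np.concatenate([rng.gamma(a1, 1.0, 1), rng.gamma(a2, 1.0, H - 1)])
tau = np.cumprod(delta)                     # column precisions, increasing in h

# Draw loadings: Lambda[j, h] ~ N(0, 1 / tau[h]).
Lam = rng.standard_normal((p, H)) / np.sqrt(tau)

print("column sd of loadings:", np.round(Lam.std(axis=0), 3))
```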

  4. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  5. Imaging correlography with sparse collecting apertures

    NASA Astrophysics Data System (ADS)

    Idell, Paul S.; Fienup, J. R.

    1987-01-01

    This paper investigates the possibility of implementing an imaging correlography system with sparse arrays of intensity detectors. The theory underlying the image formation process for imaging correlography is reviewed, emphasizing the spatial filtering effects that sparse collecting apertures have on the reconstructed imagery. Image recovery with sparse arrays of intensity detectors through the use of computer experiments in which laser speckle measurements are digitally simulated is then demonstrated. It is shown that the quality of imagery reconstructed using this technique is visibly enhanced when appropriate filtering techniques are applied. A performance tradeoff between collecting array redundancy and the number of speckle pattern measurements is briefly discussed.

  6. Mathematical strategies for filtering complex systems: Regularly spaced sparse observations

    SciTech Connect

    Harlim, J. Majda, A.J.

    2008-05-01

    Real time filtering of noisy turbulent signals through sparse observations on a regularly spaced mesh is a notoriously difficult and important prototype filtering problem. Simpler off-line test criteria are proposed here as guidelines for filter performance for these stiff multi-scale filtering problems in the context of linear stochastic partial differential equations with turbulent solutions. Filtering turbulent solutions of the stochastically forced dissipative advection equation through sparse observations is developed as a stringent test bed for filter performance with sparse regular observations. The standard ensemble transform Kalman filter (ETKF) has poor skill on the test bed and even suffers from filter divergence, surprisingly, at observable times with resonant mean forcing and a decaying energy spectrum in the partially observed signal. Systematic alternative filtering strategies are developed here including the Fourier Domain Kalman Filter (FDKF) and various reduced filters called Strongly Damped Approximate Filter (SDAF), Variance Strongly Damped Approximate Filter (VSDAF), and Reduced Fourier Domain Kalman Filter (RFDKF) which operate only on the primary Fourier modes associated with the sparse observation mesh while nevertheless, incorporating into the approximate filter various features of the interaction with the remaining modes. It is shown below that these much cheaper alternative filters have significant skill on the test bed of turbulent solutions which exceeds ETKF and in various regimes often exceeds FDKF, provided that the approximate filters are guided by the off-line test criteria. The skill of the various approximate filters depends on the energy spectrum of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a precise fashion elucidated here.

  7. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    SciTech Connect

    Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.

  8. Quantum, classical, and hybrid QM/MM calculations in solution: general implementation of the ddCOSMO linear scaling strategy.

    PubMed

    Lipparini, Filippo; Scalmani, Giovanni; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Frisch, Michael J; Mennucci, Benedetta

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute. PMID:25399133

  9. Single frame blind image deconvolution by non-negative sparse matrix factorization

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Garrood, Dennis J.; Borjanović, Vesna

    2006-10-01

    A novel approach to single frame multichannel blind image deconvolution has recently been formulated as a non-negative matrix factorization problem with sparseness constraints imposed on the unknown mixing vector that accounts for the case of a non-sparse source image. Unlike most blind image deconvolution algorithms, the novel approach assumed no a priori knowledge about the blurring kernel and original image. Our contributions in this paper are: (i) we have formulated a generalized non-negative matrix factorization approach to blind image deconvolution with sparseness constraints imposed on either the unknown mixing vector or the unknown source image; (ii) criteria are established to distinguish whether the unknown source image was sparse or not, as well as to estimate the appropriate sparseness constraint from the degraded image itself, thus making the proposed approach completely unsupervised; (iii) an extensive experimental performance evaluation of the non-negative matrix factorization algorithm is presented on images degraded by blur caused by a photon sieve, out-of-focus blur with sparse and non-sparse images, and blur caused by atmospheric turbulence. The algorithm is compared with state-of-the-art single frame blind image deconvolution algorithms such as the blind Richardson-Lucy algorithm and a single frame multichannel independent component analysis based algorithm, and with non-blind image restoration algorithms such as the multiplicative algebraic restoration technique and the Van Cittert algorithm. It has been experimentally demonstrated that the proposed algorithm outperforms the mentioned non-blind and blind image deconvolution methods.
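
    The underlying machinery is ordinary NMF with a sparsity penalty. A generic sketch follows, using multiplicative updates with an ℓ1 penalty on the coefficients; this is a standard textbook formulation standing in for the authors' specific algorithm, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((64, 200)))      # nonnegative data matrix
r, lam, eps = 5, 0.1, 1e-9                      # rank, sparsity weight, guard

W = np.abs(rng.standard_normal((64, r)))
H = np.abs(rng.standard_normal((r, 200)))

for _ in range(300):
    # Multiplicative updates for min ||V - WH||_F^2 + lam * sum(H)
    H *= (W.T @ V) / (W.T @ W @ H + lam + eps)  # the l1 penalty enters here
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```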

  10. Sparse distributed memory prototype: Principles of operation

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip

    1988-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.

  11. Partitioning sparse matrices with eigenvectors of graphs

    NASA Technical Reports Server (NTRS)

    Pothen, Alex; Simon, Horst D.; Liou, Kang-Pu

    1990-01-01

    The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented, and these are compared with separators obtained from other algorithms for computing separators. Finally, the time required to compute the Laplacian eigenvector is reported, and the accuracy with which the eigenvector must be computed to obtain good separators is considered. The spectral algorithm has the advantage that it can be implemented on a medium-size multiprocessor in a straightforward manner.
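
    A minimal sketch of the spectral step: compute an edge separator of a random sparse graph from the Fiedler vector of its Laplacian with SciPy. The maximum-matching refinement into a vertex separator is omitted, and the graph size and density are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 200
A = sp.random(n, n, density=0.03, random_state=0)
A = ((A + A.T) > 0).astype(float).tocsr()       # symmetric 0/1 adjacency
A.setdiag(0)
A.eliminate_zeros()

L = laplacian(A)
# Fiedler vector: eigenvector of the second-smallest Laplacian eigenvalue.
# Shift-invert around -0.01 keeps the factored matrix (L + 0.01 I) nonsingular.
vals, vecs = eigsh(L, k=2, sigma=-0.01, which='LM')
fiedler = vecs[:, np.argsort(vals)[1]]

# Split vertices at the median entry; the cut edges form an edge separator.
part = fiedler > np.median(fiedler)
rows, cols = A.nonzero()
cut = np.sum(part[rows] != part[cols]) // 2     # each edge counted twice
print("edge separator size:", cut)
```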

  12. Sparse alignment for robust tensor learning.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods. PMID:25291733

  13. Sparse principal component analysis in cancer research

    PubMed Central

    Hsu, Ying-Lin; Huang, Po-Yu; Chen, Dung-Tsa

    2015-01-01

    A critical challenge in analyzing high-dimensional data in cancer research is how to reduce the dimension of the data and how to extract relevant features. Sparse principal component analysis (PCA) is a powerful statistical tool that can help reduce data dimension and select important variables simultaneously. In this paper, we review several approaches for sparse PCA, including variance maximization (VM), reconstruction error minimization (REM), singular value decomposition (SVD), and probabilistic modeling (PM) approaches. A simulation study is conducted to compare PCA and the sparse PCAs. An example using a published gene signature in a lung cancer dataset is used to illustrate the potential application of sparse PCAs in cancer research. PMID:26719835
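
    For readers who want to experiment, scikit-learn ships a generic sparse PCA of the reconstruction-error-minimization flavor with an ℓ1 penalty. The snippet below is a plain illustration on synthetic data, not any of the specific methods reviewed above; all sizes and penalty values are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
# Synthetic "expression" data: 100 samples x 500 genes, 2 planted factors.
factors = rng.standard_normal((100, 2))
loadings = np.zeros((2, 500))
loadings[0, :20] = 1.0
loadings[1, 20:40] = -1.0                            # sparse, disjoint support
X = factors @ loadings + 0.5 * rng.standard_normal((100, 500))

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)

print("nonzeros per PC  (PCA):", np.count_nonzero(pca.components_, axis=1))
print("nonzeros per PC (SPCA):", np.count_nonzero(spca.components_, axis=1))
```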

  14. Exact power series solutions of the structure equations of the general relativistic isotropic fluid stars with linear barotropic and polytropic equations of state

    NASA Astrophysics Data System (ADS)

    Harko, T.; Mak, M. K.

    2016-09-01

    Obtaining exact solutions of the spherically symmetric general relativistic gravitational field equations describing the interior structure of an isotropic fluid sphere is a long-standing problem in theoretical and mathematical physics. The usual approach to this problem consists mainly in the numerical investigation of the Tolman-Oppenheimer-Volkoff and of the mass continuity equations, which describe the hydrostatic stability of dense stars. In the present paper we introduce an alternative approach for the study of the relativistic fluid sphere, based on the relativistic mass equation, obtained by eliminating the energy density in the Tolman-Oppenheimer-Volkoff equation. Despite its apparent complexity, the relativistic mass equation can be solved exactly by using a power series representation for the mass, and the Cauchy convolution for infinite power series. We obtain exact series solutions for general relativistic dense astrophysical objects described by the linear barotropic and the polytropic equations of state, respectively. For the polytropic case we obtain the exact power series solution corresponding to arbitrary values of the polytropic index n. The explicit form of the solution is presented for the polytropic index n=1, and for the indices n=1/2 and n=1/5, respectively. The case of n=3 is also considered. In each case the exact power series solution is compared with the exact numerical solutions, which are reproduced by the power series solutions truncated to seven terms only. The power series representations of the geometric and physical properties of the linear barotropic and polytropic stars are also obtained.
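
    For contrast with the series approach, the "usual approach" the abstract mentions, direct numerical integration of the Tolman-Oppenheimer-Volkoff and mass continuity equations, fits in a few lines. A sketch in geometric units (G = c = 1) for an n = 1 polytrope; the values of K and the central density are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

K, n, rho_c = 100.0, 1.0, 1.28e-3   # polytrope p = K*rho^(1+1/n), central density

def tov(r, y):
    m, p = y
    rho = (max(p, 0.0) / K) ** (n / (n + 1.0))   # invert the EOS
    dm = 4.0 * np.pi * r**2 * rho                # mass continuity
    dp = -(rho + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return [dm, dp]

def surface(r, y):                  # the surface is where pressure vanishes
    return y[1] - 1e-12
surface.terminal = True

r0 = 1e-6                           # start slightly off-center to avoid r = 0
p_c = K * rho_c ** (1.0 + 1.0 / n)
y0 = [(4.0 / 3.0) * np.pi * r0**3 * rho_c, p_c]
sol = solve_ivp(tov, [r0, 50.0], y0, events=surface, rtol=1e-10, atol=1e-16)

R, M = sol.t[-1], sol.y[0, -1]
print(f"R = {R:.3f}, M = {M:.4f} (geometric units, G = c = 1)")
```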

  15. Precise Feature Based Time Scales and Frequency Decorrelation Lead to a Sparse Auditory Code

    PubMed Central

    Chen, Chen; Read, Heather L.; Escabí, Monty A.

    2012-01-01

    Sparse redundancy reducing codes have been proposed as efficient strategies for representing sensory stimuli. A prevailing hypothesis suggests that sensory representations shift from dense redundant codes in the periphery to selective sparse codes in cortex. We propose an alternative framework where sparseness and redundancy depend on sensory integration time scales and demonstrate that the central nucleus of the inferior colliculus (ICC) of cats encodes sound features by precise sparse spike trains. Direct comparisons with auditory cortical neurons demonstrate that ICC responses were sparse and uncorrelated as long as the spike train time scales were matched to the sensory integration time scales relevant to ICC neurons. Intriguingly, correlated spiking in the ICC was substantially lower than predicted by linear or nonlinear models and strictly observed for neurons with best frequencies within a “critical band,” the hallmark of perceptual frequency resolution in mammals. This is consistent with a sparse asynchronous code throughout much of the ICC and a complementary correlation code within a critical band that may allow grouping of perceptually relevant cues. PMID:22723685

  16. Multi-input multi-output underwater communications over sparse and frequency modulated acoustic channels.

    PubMed

    Ling, Jun; Zhao, Kexin; Li, Jian; Nordenvaad, Magnus Lundberg

    2011-07-01

    This paper addresses multi-input multi-output (MIMO) communications over sparse acoustic channels suffering from frequency modulations. An extension of the recently introduced SLIM algorithm, which stands for sparse learning via iterative minimization, is presented to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). The sparseness is exploited through a hierarchical Bayesian model, and because GoSLIM is user parameter free, it is easy to use in practical applications. Moreover this paper considers channel equalization and symbol detection for various MIMO transmission schemes, including both space-time block coding and spatial multiplexing, under the challenging channel conditions. The effectiveness of the proposed approaches is demonstrated using in-water experimental measurements recently acquired during WHOI09 and ACOMM10 experiments. PMID:21786895

  17. Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication

    DOE PAGES

    Ballard, Grey; Druinsky, Alex; Knight, Nicholas; Schwartz, Oded

    2016-05-01

    The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Furthermore, our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.

  18. Threshold partitioning of sparse matrices and applications to Markov chains

    SciTech Connect

    Choi, Hwajeong; Szyld, D.B.

    1996-12-31

    It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular, if after a symmetric permutation the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) classical block methods, such as Block Gauss-Seidel; (2) preconditioned GMRES, where a block diagonal preconditioner is used; (3) the iterative aggregation method (also called aggregation/disaggregation), where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, and they thus add little computational effort to the overall solution.

  19. Crack growth sparse pursuit for wind turbine blade

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Yang, Zhibo; Zhang, Han; Du, Zhaohui; Chen, Xuefeng

    2015-01-01

    One critical challenge in achieving reliable wind turbine blade structural health monitoring (SHM) is posed by composite laminates, whose anisotropic nature and hard-to-access locations complicate inspection. The typical pitch-catch PZT approach generally detects structural damage with both measured and baseline signals. However, the accuracy of imaging or tomography by delay-and-sum approaches based on these signals requires improvement in practice. Via a model of Lamb wave propagation and the establishment of a dictionary whose atoms correspond to scatterers, a robust sparse reconstruction approach for structural health monitoring becomes attractive for its promising performance. This paper proposes a neighbor dictionary that identifies the first crack location through sparse reconstruction and then presents a growth sparse pursuit algorithm that can precisely pursue the extension of the crack. An experiment with the goal of diagnosing a composite wind turbine blade with an artificial crack is performed, and it validates the proposed approach. The results give competitively accurate crack detection, with correct locations and extension lengths.

  20. Robust Reconstruction of Complex Networks from Sparse Data

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Di, Zengru

    2015-01-01

    Reconstructing complex networks from measurable data is a fundamental problem for understanding and controlling collective dynamics of complex networked systems. However, a significant challenge arises when we attempt to decode structural information hidden in limited amounts of data accompanied by noise and in the presence of inaccessible nodes. Here, we develop a general framework for robust reconstruction of complex networks from sparse and noisy data. Specifically, we decompose the task of reconstructing the whole network into recovering local structures centered at each node. Thus, the natural sparsity of complex networks ensures a conversion from the local structure reconstruction into a sparse signal reconstruction problem that can be addressed by using the lasso, a convex optimization method. We apply our method to evolutionary games, transportation, and communication processes taking place in a variety of model and real complex networks, finding that universal high reconstruction accuracy can be achieved from sparse data in spite of noise in time series and missing data of partial nodes. Our approach opens new routes to the network reconstruction problem and has potential applications in a wide range of fields.
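
    The local-structure step reduces, node by node, to a sparse regression. A toy sketch follows, in which a simple linear response model stands in for the evolutionary-game or transport dynamics of the paper, and scikit-learn's lasso recovers each node's neighbors from short, noisy measurements; all sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, T = 30, 200
W = (rng.random((N, N)) < 0.1) * rng.standard_normal((N, N))
np.fill_diagonal(W, 0)                          # ground-truth sparse network

# Toy measurements: each node's response is a linear function of the
# other nodes' states plus noise (stand-in for the real dynamics).
X = rng.standard_normal((T, N))                 # observed node states
Y = X @ W.T + 0.1 * rng.standard_normal((T, N)) # observed node responses

# Recover each node's local structure (one row of W) with the lasso.
W_hat = np.zeros((N, N))
for i in range(N):
    model = Lasso(alpha=0.01, fit_intercept=False).fit(X, Y[:, i])
    W_hat[i] = model.coef_

support_errors = np.sum((np.abs(W_hat) > 0.05) != (W != 0))
print("misidentified links:", support_errors, "of", N * N)
```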

  1. An Efficient Scheme for Updating Sparse Cholesky Factors

    NASA Technical Reports Server (NTRS)

    Raghavan, Padma

    2002-01-01

    Raghavan had earlier developed the software package DSCPACK, which can be used for solving sparse linear systems where the coefficient matrix is symmetric and positive definite (this project was not funded by NASA but by agencies such as NSF). DSCPACK-S is the serial code and DSCPACK-P is a parallel implementation suitable for multiprocessors or networks-of-workstations with message passing using MPI. The main algorithm used is the Cholesky factorization of a sparse symmetric positive definite matrix A = LL^T. The code can also compute the factorization A = LDL^T. The complexity of the software arises from several factors relating to the sparsity of the matrix A. A sparse N x N matrix A typically has fewer than cN nonzeroes, where c is a small constant. If the matrix were dense, it would have O(N^2) nonzeroes. The most complicated part of such sparse Cholesky factorization relates to fill-in, i.e., zeroes in the original matrix that become nonzeroes in the factor L. An efficient implementation depends to a large extent on complex data structures and on techniques from graph theory to reduce, identify, and manage fill. DSCPACK is based on an efficient multifrontal implementation with fill-managing algorithms and implementation arising from earlier research by Raghavan and others. Sparse Cholesky factorization is typically a four step process: (1) ordering to compute a fill-reducing numbering, (2) symbolic factorization to determine the nonzero structure of L, (3) numeric factorization to compute L, and (4) triangular solution to solve Ly = b and L^T x = y. The first two steps are symbolic and are performed using the graph of the matrix. The numeric factorization step is of dominant cost and there are several schemes for improving performance by exploiting the nested and dense structure of groups of columns in the factor. The latter are aimed at better utilization of the cache-memory hierarchy on modern processors to prevent cache-misses and provide execution
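
    The impact of step (1) on the fill produced in step (3) can be illustrated with standard tools. SciPy exposes no sparse Cholesky, so the sketch below uses its sparse LU factorization of an SPD model problem to compare factor fill under a natural versus a fill-reducing ordering; the matrix and the orderings are illustrative, not DSCPACK's.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# SPD test matrix: 2-D Laplacian on a 40x40 grid (sparse, ~5 nnz per row).
n = 40
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsc()
print("nnz(A) =", A.nnz)

# Factor with no reordering vs. a fill-reducing column ordering.
for spec in ("NATURAL", "COLAMD"):
    lu = splu(A, permc_spec=spec)
    print(f"{spec:>8}: nnz(L) + nnz(U) = {lu.L.nnz + lu.U.nnz}")

# Step (4): the triangular solves are hidden behind lu.solve.
b = np.ones(A.shape[0])
x = splu(A, permc_spec="COLAMD").solve(b)
print("residual:", np.linalg.norm(A @ x - b))
```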

  2. Inverse sparse tracker with a locally weighted distance metric.

    PubMed

    Wang, Dong; Lu, Huchuan; Xiao, Ziyang; Yang, Ming-Hsuan

    2015-09-01

    Sparse representation has been recently extensively studied for visual tracking and generally facilitates more accurate tracking results than classic methods. In this paper, we propose a sparsity-based tracking algorithm that is featured with two components: 1) an inverse sparse representation formulation and 2) a locally weighted distance metric. In the inverse sparse representation formulation, the target template is reconstructed with particles, which enables the tracker to compute the weights of all particles by solving only one l1 optimization problem and thereby provides a quite efficient model. This is in direct contrast to most previous sparse trackers that entail solving one optimization problem for each particle. However, we notice that this formulation with the normal Euclidean distance metric is sensitive to partial noise like occlusion and illumination changes. To this end, we design a locally weighted distance metric to replace the Euclidean one. Similar ideas of using local features appear in other works, but they are supported only by popular assumptions, such as that local models handle partial noise better than holistic models, without any solid theoretical analysis. In this paper, we attempt to explain this explicitly from a mathematical view. On that basis, we further propose a method to assign local weights by exploiting the temporal and spatial continuity. In the proposed method, appearance changes caused by partial occlusion and shape deformation are carefully considered, thereby facilitating accurate similarity measurement and model update. The experimental validation is conducted from two aspects: 1) self-validation on key components and 2) comparison with other state-of-the-art algorithms. Results over 15 challenging sequences show that the proposed tracking algorithm performs favorably against the existing sparsity-based trackers and the other state-of-the-art methods. PMID:25935033

  3. Removing sparse noise from hyperspectral images with sparse and low-rank penalties

    NASA Astrophysics Data System (ADS)

    Tariyal, Snigdha; Aggarwal, Hemant Kumar; Majumdar, Angshul

    2016-03-01

    In diffraction grating, at times, there are defective pixels on the focal plane array; this results in horizontal lines of corrupted pixels in some channels. Since only a few such pixels exist, the corruption/noise is sparse. Studies on sparse noise removal from hyperspectral images are scarce. To remove such sparse noise, a prior work exploited the interband spectral correlation along with intraband spatial redundancy to yield a sparse representation in transform domains. We improve upon the prior technique. The intraband spatial redundancy is modeled as a sparse set of transform coefficients and the interband spectral correlation is modeled as a rank deficient matrix. The resulting optimization problem is solved using the split Bregman technique. Comparative experimental results show that our proposed approach is better than the previous one.
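
    The sparse-plus-low-rank model can be approximated with a very simple alternating thresholding loop: singular value thresholding for the low-rank interband structure and soft thresholding for the sparse noise. This generic sketch stands in for the split Bregman solver used in the paper, with all sizes and penalty values illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, pixels = 50, 400
L_true = rng.standard_normal((bands, 4)) @ rng.standard_normal((4, pixels))
S_true = np.zeros((bands, pixels))
S_true[rng.random((bands, pixels)) < 0.02] = 5.0    # sparse corrupted pixels
X = L_true + S_true                                 # observed hyperspectral matrix

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

L, S = np.zeros_like(X), np.zeros_like(X)
tau, lam = 1.0, 0.5
for _ in range(100):
    L = svt(X - S, tau)                             # low-rank interband structure
    S = np.sign(X - L) * np.maximum(np.abs(X - L) - lam, 0.0)  # sparse noise

print("low-rank error:", np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```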

  4. An empirical investigation of sparse distributed memory using discrete speech recognition

    NASA Technical Reports Server (NTRS)

    Danforth, Douglas G.

    1990-01-01

    Presented here is a step-by-step analysis of how the basic Sparse Distributed Memory (SDM) model can be modified to enhance its generalization capabilities for classification tasks. Data are taken from speech generated by a single talker. Experiments are used to investigate the theory of associative memories and the question of generalization from specific instances.

  5. Detecting novel genes with sparse arrays

    PubMed Central

    Haiminen, Niina; Smit, Bart; Rautio, Jari; Vitikainen, Marika; Wiebe, Marilyn; Martinez, Diego; Chee, Christine; Kunkel, Joe; Sanchez, Charles; Nelson, Mary Anne; Pakula, Tiina; Saloheimo, Markku; Penttilä, Merja; Kivioja, Teemu

    2014-01-01

    Species-specific genes play an important role in defining the phenotype of an organism. However, current gene prediction methods can only efficiently find genes that share features such as sequence similarity or general sequence characteristics with previously known genes. Novel sequencing methods and tiling arrays can be used to find genes without prior information and they have demonstrated that novel genes can still be found from extensively studied model organisms. Unfortunately, these methods are expensive and thus are not easily applicable, e.g., to finding genes that are expressed only in very specific conditions. We demonstrate a method for finding novel genes with sparse arrays, applying it on the 33.9 Mb genome of the filamentous fungus Trichoderma reesei. Our computational method does not require normalisations between arrays and it takes into account the multiple-testing problem typical for analysis of microarray data. In contrast to tiling arrays, that use overlapping probes, only one 25mer microarray oligonucleotide probe was used for every 100 b. Thus, only relatively little space on a microarray slide was required to cover the intergenic regions of a genome. The analysis was done as a by-product of a conventional microarray experiment with no additional costs. We found at least 23 good candidates for novel transcripts that could code for proteins and all of which were expressed at high levels. Candidate genes were found to neighbour ire1 and cre1 and many other regulatory genes. Our simple, low-cost method can easily be applied to finding novel species-specific genes without prior knowledge of their sequence properties. PMID:20691772

  6. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and the perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results, and it delivers consistently good performance when tested on different image quality databases. PMID:27295675

  7. Parallel sparse and dense information coding streams in the electrosensory midbrain.

    PubMed

    Sproule, Michael K J; Metzen, Michael G; Chacron, Maurice J

    2015-10-21

    Efficient processing of incoming sensory information is critical for an organism's survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  8. Sparse deconvolution method for ultrasound images based on automatic estimation of reference signals.

    PubMed

    Jin, Haoran; Yang, Keji; Wu, Shiwei; Wu, Haiteng; Chen, Jian

    2016-04-01

    Sparse deconvolution is widely used in the field of non-destructive testing (NDT) for improving the temporal resolution. Generally, the reference signals involved in sparse deconvolution are measured from the reflection echoes of a standard plane block, which cannot accurately describe the acoustic properties at different spatial positions. Therefore, the performance of sparse deconvolution will deteriorate due to the deviations in the reference signals. Meanwhile, manual measurement of reference signals is inconvenient for automated ultrasonic NDT. To overcome these disadvantages, a modified sparse deconvolution based on automatic estimation of reference signals is proposed in this paper. By estimating the reference signals, the deviations are alleviated and the accuracy of sparse deconvolution is therefore improved. Based on the automatic estimation of reference signals, regional sparse deconvolution is achievable by decomposing the whole B-scan image into small regions of interest (ROI), and the image dimensionality is significantly reduced. Since the computation time of the proposed method has a power dependence on the signal length, the computation efficiency is improved significantly with this strategy. The performance of the proposed method is demonstrated using immersion measurements of scattering targets and a steel block with side-drilled holes. The results verify that the proposed method is able to maintain the vertical resolution enhancement and noise-suppression capabilities in different scenarios. PMID:26773787
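
    A minimal sketch of the underlying sparse-deconvolution step in Python, assuming the reference wavelet is already known (the paper's contribution, estimating it automatically per region, is not reproduced here); the toy wavelet and spike train are invented:

      import numpy as np
      from scipy.linalg import toeplitz
      from sklearn.linear_model import Lasso

      n = 200
      t = np.arange(64)
      ref = np.exp(-((t - 10) / 3.0) ** 2) * np.cos(0.9 * t)   # toy reference echo
      x_true = np.zeros(n)
      x_true[[40, 55, 120]] = [1.0, -0.6, 0.8]                  # sparse reflectivity
      H = toeplitz(np.r_[ref, np.zeros(n - ref.size)],
                   np.r_[ref[0], np.zeros(n - 1)])              # convolution matrix
      y = H @ x_true + 0.01 * np.random.default_rng(2).standard_normal(n)

      # Sparse deconvolution: argmin ||y - H x||^2 + lam ||x||_1
      lasso = Lasso(alpha=1e-3, max_iter=10000, fit_intercept=False)
      reflectivity = lasso.fit(H, y).coef_
      print("recovered spikes at:", np.nonzero(np.abs(reflectivity) > 0.1)[0])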

  9. Parallel sparse and dense information coding streams in the electrosensory midbrain

    PubMed Central

    Sproule, Michael K.J.; Metzen, Michael G.; Chacron, Maurice J.

    2015-01-01

    Efficient processing of incoming sensory information is critical for an organism’s survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  10. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition that solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
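
    A hedged numpy sketch of LCA-style dynamics with soft-threshold nodes and lateral inhibition; the dictionary, thresholds, and step size are illustrative, not values from the patent:

      import numpy as np

      def lca(Phi, s, lam=0.1, tau=0.01, steps=500):
          """Locally competitive algorithm sketch: soft-threshold dynamics."""
          b = Phi.T @ s                                # feedforward drive
          G = Phi.T @ Phi - np.eye(Phi.shape[1])       # lateral inhibition weights
          u = np.zeros(Phi.shape[1])                   # membrane potentials
          a = np.zeros_like(u)
          for _ in range(steps):
              a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded codes
              u += tau * (b - u - G @ a)               # competitive dynamics
          return a

      rng = np.random.default_rng(3)
      Phi = rng.standard_normal((64, 256))
      Phi /= np.linalg.norm(Phi, axis=0)
      s = Phi[:, [10, 100]] @ np.array([1.0, -0.5])    # signal built from two atoms
      a = lca(Phi, s)
      print("active coefficients:", np.nonzero(np.abs(a) > 1e-3)[0])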

  11. Sparse representation of complex MRI images.

    PubMed

    Nandakumar, Hari Prasad; Ji, Jim

    2008-01-01

    Sparse representation of images acquired from Magnetic Resonance Imaging (MRI) has several potential applications. MRI is unique in that the raw images are complex. Complex wavelet transforms (CWT) can be used to produce more flexible signal representations than the Discrete Wavelet Transform (DWT). In this work, five different schemes using CWT or DWT are tested for sparse representation of MRI images represented as complex values, separate real/imaginary parts, or separate magnitude/phase. The experimental results on real in-vivo MRI images show that an appropriate CWT, e.g., the dual-tree CWT (DTCWT), can achieve better sparsity than DWT at a similar Mean Square Error. PMID:19162677
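
    For illustration, a hedged comparison of DWT-domain sparsity under two of the schemes (real/imaginary vs magnitude/phase), using the PyWavelets package on simulated complex data; the sparsity measure (energy captured by the top 5% of coefficients) is our stand-in metric, not the paper's:

      import numpy as np
      import pywt

      # Simulated complex MRI slice (stand-in for in-vivo data)
      rng = np.random.default_rng(4)
      img = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))

      def dwt_sparsity(x, wavelet="db4", keep=0.05):
          """Fraction of energy kept by the largest 5% of DWT coefficients."""
          coeffs = pywt.wavedec2(x, wavelet, level=3)
          arr, _ = pywt.coeffs_to_array(coeffs)
          mags = np.sort(np.abs(arr).ravel())[::-1]
          k = int(keep * mags.size)
          return (mags[:k] ** 2).sum() / (mags ** 2).sum()

      # Scheme comparison: transform real/imaginary parts vs magnitude/phase
      print("real/imag :", dwt_sparsity(img.real), dwt_sparsity(img.imag))
      print("mag/phase :", dwt_sparsity(np.abs(img)), dwt_sparsity(np.angle(img)))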

  12. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  13. Sparse representation in speech signal processing

    NASA Astrophysics Data System (ADS)

    Lee, Te-Won; Jang, Gil-Jin; Kwon, Oh-Wook

    2003-11-01

    We review the sparse representation principle for processing speech signals. A transformation for encoding the speech signals is learned such that the resulting coefficients are as independent as possible. We use independent component analysis with an exponential prior to learn a statistical representation for speech signals. This representation leads to extremely sparse priors that can be used for encoding speech signals for a variety of purposes. We review applications of this method for speech feature extraction, automatic speech recognition and speaker identification. Furthermore, this method is also suited for tackling the difficult problem of separating two sounds given only a single microphone.
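
    A small Python sketch in the same spirit, learning an ICA basis over short signal frames with scikit-learn's FastICA; the toy signal and framing are assumptions (the paper uses ICA with an exponential prior on real speech):

      import numpy as np
      from sklearn.decomposition import FastICA

      # Treat overlapping frames of a signal as observations and learn
      # maximally independent components, whose coefficients tend to be sparse.
      rng = np.random.default_rng(5)
      t = np.linspace(0, 1, 8000)
      signal = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 97 * t))
      frames = np.lib.stride_tricks.sliding_window_view(signal, 64)[::32]  # framing

      ica = FastICA(n_components=16, random_state=0)
      codes = ica.fit_transform(frames)        # coefficients per frame
      print("mean |coefficient| per component:", np.abs(codes).mean(axis=0).round(3))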

  14. Sparse group composition for robust left ventricular epicardium segmentation.

    PubMed

    Wang, Bing; Gu, Xiaomeng; Fan, Chonghao; Xie, Hongzhi; Zhang, Shuyang; Tian, Xuedong; Gu, Lixu

    2015-12-01

    Left ventricular (LV) epicardium segmentation in cardiac magnetic resonance images (MRIs) is still a challenging task, where a-priori knowledge, such as heart shape models, is usually used to derive reasonable segmentation results. In this paper, we propose a sparse group composition (SGC) approach to model multiple shapes simultaneously, which extends conventional sparsity-based single-shape prior modeling to incorporate a-priori spatial constraint information among multiple shapes on-the-fly. Multiple interrelated shapes (shapes of the epi- and endocardium of the myocardium in the case of LV epicardium segmentation) are regarded as a group, and a sparse linear composition of training groups is computed to approximate the input group. An iterative framework that alternates SGC-based refinement with deformable-model-based segmentation is used for LV epicardium segmentation, in which an improved shape-constrained gradient Chan-Vese (GCV) model acts as the deformable model. Compared with standard sparsity-based single-shape prior modeling, the refinement procedure is strongly robust to relatively gross, non-sparse errors in the input shape, and the initial epicardium location can be estimated without complicated landmark detection because spatial constraint information among multiple shapes is modeled effectively. The proposed method was validated on 45 cardiac cine-MR clinical datasets and the results were compared with expert contours. The average perpendicular distance (APD) error of contours is 1.50 ± 0.29 mm, and the dice metric (DM) is 0.96 ± 0.01. Compared to the state-of-the-art methods, our proposed approach achieved competitive segmentation performance and improved robustness. PMID:26198360

  15. Generalized Vibrational Perturbation Theory for Rotovibrational Energies of Linear, Symmetric and Asymmetric Tops: Theory, Approximations, and Automated Approaches to Deal with Medium-to-Large Molecular Systems

    PubMed Central

    Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo

    2015-01-01

    Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency to treat medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with a particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131

  16. VIM-based dynamic sparse grid approach to partial differential equations.

    PubMed

    Mei, Shu-Li

    2014-01-01

    Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory. The operator is based on a linear combination of the basis functions and is independent of them. Second, by means of the precise integration method (PIM), the VIM is developed to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a dynamic choice scheme for both the inner and external grid points is proposed; this differs from the traditional interval wavelet collocation method, in which the choice of the inner and external grid points is static. The numerical experiments show that our method is better than the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions. PMID:24723805

  17. VIM-Based Dynamic Sparse Grid Approach to Partial Differential Equations

    PubMed Central

    Mei, Shu-Li

    2014-01-01

    Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory. The operator is based on a linear combination of the basis functions and is independent of them. Second, by means of the precise integration method (PIM), the VIM is developed to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a dynamic choice scheme for both the inner and external grid points is proposed; this differs from the traditional interval wavelet collocation method, in which the choice of the inner and external grid points is static. The numerical experiments show that our method is better than the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions. PMID:24723805

  18. Infrared image recognition based on structure sparse and atomic sparse parallel

    NASA Astrophysics Data System (ADS)

    Wu, Yalu; Li, Ruilong; Xu, Yi; Wang, Liping

    2015-12-01

    The redundancy of an overcomplete dictionary can be used to capture the structural features of an image effectively, achieving an effective representation of the image. However, the commonly used atomic sparse representation disregards the structure of the dictionary and carries unrelated non-zero terms through the computation; structured sparsity, although it considers the structure of the dictionary, may leave the majority of coefficients within a block non-zero, which can hurt recognition efficiency. To address the disadvantages of these two sparse representations, a weighted parallel combination of atomic and structured sparse representations is proposed, and recognition efficiency is improved by adaptive computation of the optimal weights. The atomic and structured sparse representations are computed in parallel, and the optimal weights are calculated adaptively as follows: training on a small part of the identification samples, the recognition rate is evaluated while the weights are increased by a fixed step size subject to the constraint between them. With the recognition rate as the Z axis and the two weight values as the X and Y axes, the resulting points can be plotted in a three-dimensional coordinate system, and the optimal weights are obtained by locating the highest recognition rate. Simulation experiments show that the optimal weights obtained by the adaptive method give a better recognition rate; weights computed adaptively from a few samples are suitable for parallel recognition and can effectively improve the recognition rate of infrared images.

  19. Sparse Multi-Static Arrays for Near-Field Millimeter-Wave Imaging

    SciTech Connect

    Sheen, David M.

    2013-12-31

    This paper describes a novel design technique for sparse multi-static linear arrays. The methods described allow the development of densely sampled linear arrays suitable for high-resolution near-field imaging that require dramatically fewer antenna and switch elements than the previous state of the art. The techniques used are related to sparse array techniques used in radio astronomy applications, but differ significantly in design due to the transmit-receive nature of the arrays, and the application to linear arrays that achieve dense uniform sampling suitable for high-resolution near-field imaging. As many as 3 to 5 or more samples per antenna can be obtained, compared to 1 sample per antenna for the current state of the art. This could dramatically reduce cost and improve performance over current active millimeter-wave imaging systems.
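
    To see why a sparse multi-static layout yields dense sampling, consider the effective phase centers, conventionally taken midway between each transmit-receive pair. A toy numpy illustration (the element layout is invented, not the paper's design):

      import numpy as np

      # Illustrative sparse multi-static linear array: each transmit-receive
      # pair contributes an effective sample at the midpoint of the two elements.
      tx = np.array([0.0, 8.0, 16.0, 24.0])        # 4 transmit positions (cm)
      rx = np.arange(0.0, 8.0, 1.0)                # 8 receive positions (cm)
      phase_centers = (tx[:, None] + rx[None, :]) / 2.0
      samples = np.unique(phase_centers.ravel())
      print(len(tx) + len(rx), "elements give", samples.size, "distinct samples")
      print("uniform spacing:", np.unique(np.diff(samples)))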

  20. Evaluation of cavity occurrence in the Maynardville Limestone and the Copper Ridge Dolomite at the Y-12 Plant using logistic and general linear models

    SciTech Connect

    Shevenell, L.A.; Beauchamp, J.J.

    1994-11-01

    Several waste disposal sites are located on or adjacent to the karstic Maynardville Limestone (Cmn) and the Copper Ridge Dolomite (Ccr) at the Oak Ridge Y-12 Plant. These formations receive contaminants in groundwaters from nearby disposal sites, which can be transported quite rapidly due to the karst flow system. In order to evaluate transport processes through the karst aquifer, the solutional aspects of the formations must be characterized. As one component of this characterization effort, statistical analyses were conducted on the data related to cavities in order to determine if a suitable model could be identified that is capable of predicting the probability of cavity size or distribution in locations for which drilling data are not available. Existing data on the locations (East, North coordinates), depths (and elevations), and sizes of known conduits and other water zones were used in the analyses. Two different models were constructed in an attempt to predict the distribution of cavities in the vicinity of the Y-12 Plant: General Linear Models (GLM) and Logistic Regression Models (LOG). Each of the models attempted was very sensitive to the data set used. Models based on subsets of the full data set were found to do an inadequate job of predicting the behavior of the full data set. The fact that the Ccr and Cmn data sets differ significantly is not surprising considering that the hydrogeology of the two formations differs. Flow in the Cmn is generally at elevations between 600 and 950 ft and is dominantly strike parallel through submerged, partially mud-filled cavities with sizes up to 40 ft, but more typically less than 5 ft. Recognized flow in the Ccr is generally above 950 ft elevation, with flow both parallel and perpendicular to geologic strike through conduits, which tend to be larger than those in the Cmn and are often not fully saturated at the shallower depths.
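
    As a sketch of the second model family, here is a hedged logistic-regression example using statsmodels; the predictors and generating coefficients are hypothetical stand-ins for the borehole data described above:

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical borehole records: east/north coordinates and depth as
      # predictors, presence of a cavity as the binary response.
      rng = np.random.default_rng(6)
      n = 300
      east, north, depth = rng.uniform(size=(3, n))
      logit = -1.0 + 2.0 * east - 1.5 * depth            # assumed true model
      cavity = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = sm.add_constant(np.column_stack([east, north, depth]))
      model = sm.GLM(cavity, X, family=sm.families.Binomial()).fit()
      print(model.summary())
      # Predicted cavity probability at an unsampled location
      print(model.predict([[1.0, 0.4, 0.6, 0.2]]))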

  1. Second SIAM conference on sparse matrices: Abstracts. Final technical report

    SciTech Connect

    1996-12-31

    This report contains abstracts on the following topics: invited and long presentations (IP1 & LP1); sparse matrix reordering & graph theory I; sparse matrix tools & environments I; eigenvalue computations I; iterative methods & acceleration techniques I; applications I; parallel algorithms I; sparse matrix reordering & graphy theory II; sparse matrix tool & environments II; least squares & optimization I; iterative methods & acceleration techniques II; applications II; eigenvalue computations II; least squares & optimization II; parallel algorithms II; sparse direct methods; iterative methods & acceleration techniques III; eigenvalue computations III; and sparse matrix reordering & graph theory III.

  2. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data including data from global spaceborne active and passive sensors, regional ground-based weather surveillance radars, and local rain gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains (e.g., in wavelet space), which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain which allows exploiting the efficient linear estimation methodologies, while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capturing the non-Gaussian singular structure of rainfall, reduced uncertainty of estimation and efficiency of computation are the main advantages of the proposed methodologies over the commonly used standard Gaussian techniques.

  3. Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging

    PubMed Central

    Yu, Xingjian; Chen, Shuhang; Hu, Zhenghui; Liu, Meng; Chen, Yunmei; Shi, Pengcheng; Liu, Huafeng

    2015-01-01

    In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data, acquired over intervals ranging in duration from 10 seconds to minutes under some criteria. So far, all the well-known reconstruction algorithms require known statistical properties of the data. This limits the speed of data acquisition; besides, it cannot provide separate information about the structure and the variation of shape and rate of metabolism, which play a major role in improving the visualization of contrast for some diagnostic requirements. This paper presents a novel low-rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component while variations between successive frames are captured by the sparse component. The resulting nuclear-norm and l1-norm minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets. PMID:26540274
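
    A hedged numpy sketch of the sparse-plus-low-rank decomposition idea, using naive alternating singular-value and entrywise soft thresholding rather than the paper's linearized alternating direction method; the data and thresholds are toy values:

      import numpy as np

      def svt(M, tau):
          """Singular value thresholding (proximal step for the nuclear norm)."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

      def soft(M, tau):
          """Entrywise soft threshold (proximal step for the l1 norm)."""
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

      # Toy dynamic sequence: static background (low rank) + sparse changes
      rng = np.random.default_rng(7)
      bg = np.outer(rng.random(50), rng.random(30))        # rank-1 background
      S_true = (rng.random((50, 30)) < 0.05) * 1.0         # sparse activity
      D = bg + S_true

      L, S = np.zeros_like(D), np.zeros_like(D)
      for _ in range(100):                  # naive alternating projections
          L = svt(D - S, 0.5)
          S = soft(D - L, 0.1)
      print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
            " nnz(S) =", int((np.abs(S) > 1e-3).sum()))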

  4. Classification of Histology Sections via Multispectral Convolutional Sparse Coding*

    PubMed Central

    Zhou, Yin; Barner, Kenneth; Spellman, Paul

    2014-01-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However, this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]). PMID:25554749

  5. Wavefront reconstruction in phase-shifting interferometry via sparse coding of amplitude and absolute phase.

    PubMed

    Katkovnik, V; Bioucas-Dias, J

    2014-08-01

    Phase-shifting interferometry is a coherent optical method that combines high accuracy with high measurement speeds. This technique is therefore desirable in many applications such as the efficient industrial quality inspection process. However, despite its advantageous properties, the inference of the object amplitude and the phase, herein termed wavefront reconstruction, is not a trivial task owing to the Poissonian noise associated with the measurement process and to the 2π phase periodicity of the observation mechanism. In this paper, we formulate the wavefront reconstruction as an inverse problem, where the amplitude and the absolute phase are assumed to admit sparse linear representations in suitable sparsifying transforms (dictionaries). Sparse modeling is a form of regularization of inverse problems which, in the case of the absolute phase, is not available to the conventional wavefront reconstruction techniques, as only interferometric phase modulo-2π is considered therein. The developed sparse modeling of the absolute phase solves two different problems: accuracy of the interferometric (wrapped) phase reconstruction and simultaneous phase unwrapping. Based on this rationale, we introduce the sparse phase and amplitude reconstruction (SPAR) algorithm. SPAR takes into full consideration the Poissonian (photon counting) measurements and uses the data-adaptive block-matching 3D (BM3D) frames as a sparse representation for the amplitude and for the absolute phase. SPAR effectiveness is documented by comparing its performance with that of competitors in a series of experiments. PMID:25121537

  6. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.

  7. Semi-implicit Integration Factor Methods on Sparse Grids for High-Dimensional Systems

    PubMed Central

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-01-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method. PMID:25897178

  8. A Comparative Study of Sparse Associative Memories

    NASA Astrophysics Data System (ADS)

    Gripon, Vincent; Heusel, Judith; Löwe, Matthias; Vermet, Franck

    2016-05-01

    We study various models of associative memories with sparse information, i.e., a pattern to be stored is a random string of 0s and 1s with only about log N 1s. We compare different synaptic weights, architectures and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.
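
    As one concrete member of this model family, a hedged numpy sketch of a Willshaw-style binary associative memory with clipped Hebbian weights and sparse patterns (parameters are illustrative, not those used in the comparison):

      import numpy as np

      rng = np.random.default_rng(8)
      N, k, n_patterns = 1000, 10, 50          # ~log N active units per pattern

      # Store sparse binary patterns with clipped Hebbian (Willshaw) weights
      P = np.zeros((n_patterns, N), dtype=bool)
      for p in P:
          p[rng.choice(N, size=k, replace=False)] = True
      W = np.zeros((N, N), dtype=bool)
      for p in P:
          W |= np.outer(p, p)                  # clipped outer-product learning

      # Retrieval from a degraded cue: keep half the active units, then threshold
      cue = P[0].copy()
      cue[np.nonzero(cue)[0][: k // 2]] = False
      scores = W.astype(np.int32) @ cue.astype(np.int32)   # dendritic sums
      recalled = scores >= cue.sum()           # fire iff all cue inputs connect
      print("overlap with stored pattern:", int((recalled & P[0]).sum()), "/", k)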

  9. Self-Control in Sparsely Coded Networks

    NASA Astrophysics Data System (ADS)

    Dominguez, D. R. C.; Bollé, D.

    1998-03-01

    A complete self-control mechanism is proposed in the dynamics of neural networks through the introduction of a time-dependent threshold, determined as a function of both the noise and the pattern activity in the network. Especially for sparsely coded models, this mechanism is shown to considerably improve the storage capacity, the basins of attraction, and the mutual information content.

  10. Structured Sparse Method for Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhu, Feiyun; Wang, Ying; Xiang, Shiming; Fan, Bin; Pan, Chunhong

    2014-02-01

    Hyperspectral Unmixing (HU) has received increasing attention in the past decades due to its ability to unveil information latent in hyperspectral data. Unfortunately, most existing methods fail to take advantage of the spatial information in the data. To overcome this limitation, we propose a Structured Sparse regularized Nonnegative Matrix Factorization (SS-NMF) method based on the following two aspects. First, we incorporate a graph Laplacian to encode the manifold structures embedded in the hyperspectral data space. In this way, highly similar neighboring pixels can be grouped together. Second, the lasso penalty is employed in SS-NMF because pixels in the same manifold structure are sparsely mixed by a common set of relevant bases. These two factors act as a new structured sparse constraint. With this constraint, our method can learn a compact space, where highly similar pixels are grouped to share correlated sparse representations. Experiments on real hyperspectral data sets with different noise levels demonstrate that our method outperforms the state-of-the-art methods significantly.

  11. Sparse matrix orderings for factorized inverse preconditioners

    SciTech Connect

    Benzi, M.; Tuama, M.

    1998-09-01

    The effect of reorderings on the performance of factorized sparse approximate inverse preconditioners is considered. It is shown that certain reorderings can be very beneficial both in the preconditioner construction phase and in terms of the rate of convergence of the preconditioned iteration.
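
    A minimal illustration with SciPy's reverse Cuthill-McKee ordering, one standard reordering of this kind (the matrix is random; the paper studies orderings for approximate inverse preconditioners, which this sketch does not build):

      import numpy as np
      from scipy.sparse import random as sprandom
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      # Random sparse symmetric pattern standing in for a discretized operator
      A = sprandom(200, 200, density=0.02, random_state=9, format="csr")
      A = (A + A.T).tocsr()                         # symmetrize the pattern

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      B = A[perm, :][:, perm]                       # reordered matrix

      def bandwidth(M):
          coo = M.tocoo()
          return int(np.abs(coo.row - coo.col).max())

      print("bandwidth before:", bandwidth(A), " after:", bandwidth(B))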

  12. A Comparative Study of Sparse Associative Memories

    NASA Astrophysics Data System (ADS)

    Gripon, Vincent; Heusel, Judith; Löwe, Matthias; Vermet, Franck

    2016-07-01

    We study various models of associative memories with sparse information, i.e. a pattern to be stored is a random string of 0s and 1s with about log N 1s, only. We compare different synaptic weights, architectures and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.

  13. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  14. Multilevel sparse functional principal component analysis

    PubMed Central

    Di, Chongzhi; Crainiceanu, Ciprian M.; Jank, Wolfgang S.

    2014-01-01

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  15. Exact solutions of the Liénard- and generalized Liénard-type ordinary nonlinear differential equations obtained by deforming the phase space coordinates of the linear harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Harko, Tiberiu; Liang, Shi-Dong

    2016-06-01

    We investigate the connection between the linear harmonic oscillator equation and some classes of second order nonlinear ordinary differential equations of Liénard and generalized Liénard type, which physically describe important oscillator systems. By using a method inspired by quantum mechanics, which consists in deforming the phase space coordinates of the harmonic oscillator, we generalize the equation of motion of the classical linear harmonic oscillator to several classes of strongly nonlinear differential equations. The first integrals, and a number of exact solutions of the corresponding equations, are explicitly obtained. The devised method can be further generalized to derive explicit general solutions of nonlinear second order differential equations unrelated to the harmonic oscillator. Applications of the obtained results to the study of the travelling wave solutions of reaction-convection-diffusion equations, and of the large amplitude free vibrations of a uniform cantilever beam, are also presented.
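
    For reference, a standard form of the equations in question (our transcription, hedged; the notation may differ from the authors'):

      \ddot{x} + f(x)\,\dot{x} + x = 0        % Liénard type
      \ddot{x} + f(x)\,\dot{x} + g(x) = 0     % generalized Liénard type

    The linear harmonic oscillator \ddot{x} + \omega^2 x = 0 is recovered for f \equiv 0 and g(x) = \omega^2 x; the deformation of the phase space coordinates maps its solutions into solutions of the nonlinear equations.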

  16. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.

  17. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Burdick, Summer M.; Hightower, Joseph E.; Bacheler, Nathan M.; Paramore, Lee M.; Buckel, Jeffrey A.; Pollock, Kenneth H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated.

  18. On using a generalized linear model to downscale daily precipitation for the center of Portugal: an analysis of trends and extremes

    NASA Astrophysics Data System (ADS)

    Pulquério, Mário; Garrett, Pedro; Santos, Filipe Duarte; Cruz, Maria João

    2015-04-01

    Portugal is in a climate change hot spot region, where precipitation is expected to decrease, with important impacts on future water availability. As one of the European countries most affected by droughts in recent decades, it is important to assess how future precipitation regimes will change in order to study the impacts on water resources. Due to the coarse scale of global circulation models, it is often necessary to downscale climate variables to the regional or local scale using statistical and/or dynamical techniques. In this study, we tested the use of a generalized linear model, as implemented in the program GLIMCLIM, to downscale precipitation for the center of Portugal, where the Tagus basin is located. The method's performance is analyzed, and future precipitation trends and extremes for the twenty-first century are evaluated. Additionally, we perform the first analysis of the evolution of droughts under climate change scenarios using the Standardized Precipitation Index in the study area. Results show that GLIMCLIM is able to capture the precipitation's interannual variation and seasonality correctly. However, summer precipitation is considerably overestimated. Additionally, precipitation extremes are in general well reproduced, but high daily rainfall may be overestimated, and dry spell lengths are not correctly reproduced by the model. Downscaled projections show a reduction in precipitation between 19 and 28% at the end of the century. Results indicate that precipitation extremes will decrease and the magnitude of droughts can increase up to three times relative to the 1961-1990 period, which can have strong ecological, social, and economic impacts.
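
    A hedged Python sketch of the Standardized Precipitation Index computation (fit a gamma distribution to accumulation totals, then map probabilities through the standard normal quantile function); the data and the handling of zero-rain periods follow the common convention, not necessarily the authors' exact implementation:

      import numpy as np
      from scipy import stats

      def spi(precip_totals):
          """SPI sketch: gamma fit on wet totals, probabilities -> normal scores."""
          x = np.asarray(precip_totals, dtype=float)
          wet = x > 0
          q = (~wet).mean()                           # probability of zero rain
          shape, loc, scale = stats.gamma.fit(x[wet], floc=0)
          cdf = q + (1 - q) * stats.gamma.cdf(x, shape, loc=loc, scale=scale)
          return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

      rng = np.random.default_rng(10)
      monthly = rng.gamma(2.0, 40.0, size=360)        # 30 years of monthly totals
      print("months with SPI < -1.5:", int((spi(monthly) < -1.5).sum()))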

  19. Sparse-Coding-Based Computed Tomography Image Reconstruction

    PubMed Central

    Yoon, Gang-Joon

    2013-01-01

    Computed tomography (CT) is a popular type of medical imaging that generates images of the internal structure of an object based on projection scans of the object from several angles. There are numerous methods to reconstruct the original shape of the target object from scans, but they remain dependent on the number of angles and iterations. To overcome the drawbacks of iterative reconstruction approaches like the algebraic reconstruction technique (ART), while keeping the recovery only slightly affected by random noise (a small ℓ2-norm error) and by the projection scans (a small ℓ1-norm error), we propose a medical image reconstruction methodology using the properties of sparse coding. Sparse coding is a very powerful matrix factorization method in which each pixel is represented as a linear combination of a small number of basis vectors. PMID:23576898

  20. Ensemble of sparse classifiers for high-dimensional biological data.

    PubMed

    Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao

    2015-01-01

    Biological data are often high in dimension while the number of samples is small. In such cases, the performance of classification can be improved by reducing the dimension of data, which is referred to as feature selection. Recently, a novel feature selection method has been proposed utilising the sparsity of high-dimensional biological data, where a small subset of features accounts for most variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets, comparing against two conventional classification techniques, Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method performs more accurate classification across various cancer datasets than those conventional classification techniques. PMID:26510301
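
    A hedged scikit-learn sketch of the idea, bagging l1-penalized (sparse) linear classifiers so that feature selection happens inside each bootstrap member; the synthetic data and hyperparameters are toy stand-ins for the mass spectrometry sets:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import BaggingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # High-dimensional, small-sample toy data (p >> n)
      X, y = make_classification(n_samples=100, n_features=2000,
                                 n_informative=20, random_state=11)

      # Each bootstrap member is a sparse (l1-penalized) linear classifier
      base = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
      ens = BaggingClassifier(base, n_estimators=25, random_state=11)
      print("CV accuracy:", cross_val_score(ens, X, y, cv=5).mean().round(3))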

  1. A strategy of car detection via sparse dictionary

    NASA Astrophysics Data System (ADS)

    Jin, Guo-Qing; Dong, Ying-Hui

    2011-06-01

    In recent years there is a growing interest in the study of sparse representation for object detection. These approaches heavily depend on local salient image patches, thus weakening the contribution of other, less informative signals to global object identification. Our generic approach not only employs an informative representation obtained by linear transform, but also keeps all the spatial dependence implied among the objects. As an example, car images can be represented using parts from a vocabulary, along with spatial relations observed among them. Our approach applies quantitative measurement at every stage of developing the car detector. The theory underlying the optimal solution is the maximization of the mutual information carried by the system. Our goal is to keep the maximal mutual information transmitted from stage to stage so that only the least uncertainty about the class identification remains given the observation of the classifier's output.

  2. Sparse matrix transform for fast projection to reduced dimension

    SciTech Connect

    Theiler, James P; Cao, Guangzhi; Bouman, Charles A

    2010-01-01

    We investigate three algorithms that use the sparse matrix transform (SMT) to produce variance-maximizing linear projections to a lower-dimensional space. The SMT expresses the projection as a sequence of Givens rotations and this enables computationally efficient implementation of the projection operator. The baseline algorithm uses the SMT to directly approximate the optimal solution that is given by principal components analysis (PCA). A variant of the baseline begins with a standard SMT solution, but prunes the sequence of Givens rotations to only include those that contribute to the variance maximization. Finally, a simpler and faster third algorithm is introduced; this also estimates the projection operator with a sequence of Givens rotations, but in this case, the rotations are chosen to optimize a criterion that more directly expresses the dimension reduction criterion.
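
    A small numpy sketch of the building block involved, applying a short product of Givens rotations as a cheap linear operator (each rotation touches only two coordinates); the rotation sequence here is invented, not one chosen by the SMT design criteria:

      import numpy as np

      def givens(n, i, j, theta):
          """n x n Givens rotation acting in the (i, j) coordinate plane."""
          G = np.eye(n)
          c, s = np.cos(theta), np.sin(theta)
          G[i, i] = G[j, j] = c
          G[i, j], G[j, i] = s, -s
          return G

      rng = np.random.default_rng(12)
      X = rng.standard_normal((1000, 8))
      rotations = [(0, 3, 0.4), (1, 2, -0.7), (3, 5, 1.1)]   # hypothetical design
      T = np.eye(8)
      for i, j, th in rotations:
          T = givens(8, i, j, th) @ T
      Y = X @ T.T        # rotated data; keep leading coordinates to reduce dimension
      print("variances after rotation:", Y.var(axis=0).round(2))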

  3. Blind estimation of channel parameters and source components for EEG signals: a sparse factorization approach.

    PubMed

    Li, Yuanqing; Cichocki, Andrzej; Amari, Shun-Ichi

    2006-03-01

    In this paper, we use a two-stage sparse factorization approach for blindly estimating the channel parameters and then estimating source components for electroencephalogram (EEG) signals. EEG signals are assumed to be linear mixtures of source components, artifacts, etc. Therefore, a raw EEG data matrix can be factored into the product of two matrices, one of which represents the mixing matrix and the other the source component matrix. Furthermore, the components are sparse in the time-frequency domain, i.e., the factorization is a sparse factorization in the time-frequency domain. It is a challenging task to estimate the mixing matrix. Our extensive analysis and computational results, which were based on many sets of EEG data, not only provide firm evidence supporting the above assumption, but also prompt us to propose a new algorithm for estimating the mixing matrix. After the mixing matrix is estimated, the source components are estimated in the time-frequency domain using a linear programming method. In an example of the potential applications of our approach, we analyzed the EEG data that was obtained from a modified Sternberg memory experiment. Two almost uncorrelated components obtained by applying the sparse factorization method were selected for phase synchronization analysis. Several interesting findings were obtained, especially that memory-related synchronization and desynchronization appear in the alpha band, and that the strength of alpha band synchronization is related to memory performance. PMID:16566469

  4. Nonlinear spike-and-slab sparse coding for interpretable image encoding.

    PubMed

    Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g., for the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively approximate and characterize the meaningful generative process well. PMID:25954947

  5. Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding

    PubMed Central

    Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g., for the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively approximate and characterize the meaningful generative process well. PMID:25954947

  6. Bayesian inference for generalized linear mixed models with predictors subject to detection limits: an approach that leverages information from auxiliary variables.

    PubMed

    Yue, Yu Ryan; Wang, Xiao-Feng

    2016-05-10

    This paper is motivated by a retrospective study of the impact of vitamin D deficiency on the clinical outcomes of critically ill patients in multi-center critical care units. The primary predictors of interest, vitamin D2 and D3 levels, are censored at a known detection limit. Within the context of generalized linear mixed models, we investigate statistical methods to handle multiple censored predictors in the presence of auxiliary variables. A Bayesian joint modeling approach is proposed to fit the complex heterogeneous multi-center data, in which the data information is fully used to estimate parameters of interest. Efficient Markov chain Monte Carlo algorithms are specifically developed depending on the nature of the response. Simulation studies demonstrate that the proposed Bayesian approach outperforms other existing methods. An application to the data set from the vitamin D deficiency study is presented. Possible extensions of the method regarding the absence of auxiliary variables, semiparametric models, as well as the type of censoring are also discussed. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26643287

  7. Super-sparsely view-sampled cone-beam CT by incorporating prior data.

    PubMed

    Abbas, Sajid; Min, Jonghwan; Cho, Seungryong

    2013-01-01

    Computed tomography (CT) is widely used in medicine for diagnostics or for image-guided therapies, and is also popular in industrial applications for nondestructive testing. CT conventionally requires a large number of projections to produce volumetric images of a scanned object, because the conventional image reconstruction algorithm is based on filtered-backprojection. This requirement may result in relatively high radiation dose to the patients in medical CT unless the radiation dose at each view angle is reduced, and can entail long scanning times and effort in industrial CT applications. Sparse-view CT may provide a viable option to address both issues, including high radiation dose and expensive scanning effort. However, image reconstruction from sparsely sampled data in CT is in general very challenging, and much effort has been made to develop algorithms for such an image reconstruction problem. An image total-variation minimization algorithm inspired by compressive sensing theory has recently been developed, which exploits the sparseness of the image derivative magnitude and can reconstruct images from sparse-view data with a quality similar to that of images conventionally reconstructed from many views. In successive CT scans, a prior CT image of an object and its projection data may be readily available, and the current CT image may not differ much from the prior image. Considering the sparseness of such a difference image between the successive scans, image reconstruction of the difference image may be achieved from very sparsely sampled data. In this work, we showed that one can further reduce the number of projections, resulting in a super-sparse scan, for a good quality image reconstruction with the aid of prior data. Both numerical and experimental results are provided. PMID:23507853

  8. A Multilevel Algorithm for the Solution of Second Order Elliptic Differential Equations on Sparse Grids

    NASA Technical Reports Server (NTRS)

    Pflaum, Christoph

    1996-01-01

    A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete system of equations can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.

  9. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  10. Identification of spatially-localized flow structures via sparse proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Dhingra, Neil; Jovanovic, Mihailo; Schmid, Peter

    2013-11-01

    Proper Orthogonal Decomposition (POD) has become a standard tool for identification of the most energetic flow structures in fluid flows. It relies on the maximization of a quadratic form subject to a quadratic equality constraint, which can be readily accomplished via a singular value decomposition. For spatially homogeneous (or nearly homogeneous) flows, the resulting flow structures are global (or have large support) in the spatial domain of interest. By augmenting the optimization problem with an additional penalty term that promotes sparsity in the physical space, we are able to obtain energetic flow structures that become increasingly localized as our emphasis on sparsity increases. The resulting optimization problem, formulated in terms of an augmented Lagrangian functional, is solved using the Alternating Direction Method of Multipliers followed by a postprocessing step. The sparse POD algorithm is applied to the linearized Navier-Stokes equations for a plane channel flow, and the emergence of spatially localized structures is observed for increasing penalty terms. This test case and the underlying optimization techniques build the foundation for further studies into the relevance and role of localized perturbations on the overall behavior of general shear flows.
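
    A simplified stand-in for the idea above: the paper solves an augmented Lagrangian problem with ADMM, whereas this sketch substitutes a truncated power iteration that alternates a power step with soft-thresholding to extract one spatially localized mode from hypothetical snapshot data X (rows = spatial points, columns = snapshots). Increasing gamma makes the recovered mode increasingly localized.

      import numpy as np

      def soft(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def sparse_pod_mode(X, gamma=0.1, iters=200):
          C = X @ X.T                       # spatial correlation matrix
          phi = np.random.default_rng(2).normal(size=C.shape[0])
          phi /= np.linalg.norm(phi)
          for _ in range(iters):
              phi = soft(C @ phi, gamma)    # power step + sparsity-promoting shrink
              nrm = np.linalg.norm(phi)
              if nrm == 0:                  # gamma too aggressive: mode vanished
                  break
              phi /= nrm
          return phi

      X = np.random.default_rng(3).normal(size=(50, 200))  # hypothetical snapshots
      mode = sparse_pod_mode(X)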

  11. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
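
    A minimal sketch of a Kanerva-style SDM write/read cycle with Hamming-radius activation (sizes and radius are illustrative, not taken from the paper); the statistical-predictor view described above corresponds to reading out the raw counter sums rather than thresholding them.

      import numpy as np

      rng = np.random.default_rng(4)
      N, M, R = 256, 1000, 112                    # word length, hard locations, radius

      hard_addr = rng.integers(0, 2, (M, N))      # fixed random hard locations
      counters = np.zeros((M, N), dtype=int)      # counter vector per location

      def active(addr):
          # locations within Hamming distance R of the address
          return np.count_nonzero(hard_addr != addr, axis=1) <= R

      def write(addr, data):
          sel = active(addr)
          counters[sel] += np.where(data == 1, 1, -1)   # increment/decrement

      def read(addr):
          sel = active(addr)
          return (counters[sel].sum(axis=0) > 0).astype(int)

      pattern = rng.integers(0, 2, N)
      write(pattern, pattern)                     # autoassociative store
      recalled = read(pattern)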

  12. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists of finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods for solving such problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and a CRAY X-MP are reported. Possible parallel implementations are also discussed.
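
    The matrix-times-vector access pattern the abstract emphasizes is exactly what library Lanczos-type eigensolvers build on; a minimal modern stand-in (SciPy, not the paper's CRAY implementations) looks like this:

      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      n = 10_000
      # sparse symmetric test matrix: 1D Laplacian in CSR format
      A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')

      # implicitly restarted Lanczos: only A @ v products are ever performed
      vals, vecs = eigsh(A, k=5, which='SA')   # 5 smallest eigenvalues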

  13. Color demosaicking via robust adaptive sparse representation

    NASA Astrophysics Data System (ADS)

    Huang, Lili; Xiao, Liang; Chen, Qinghua; Wang, Kai

    2015-09-01

    A single-sensor camera captures scenes by means of a color filter array, so each pixel samples only one of the three primary colors. A color demosaicking (CDM) technique is used to produce full color images, and we propose a robust adaptive sparse representation model for high quality CDM. The data fidelity term is characterized by the l1 norm to suppress heavy-tailed visual artifacts with an adaptively learned dictionary, while the regularization term promotes sparsity by forcing the sparse codes to stay close to their nonlocal means, reducing coding errors. Based on the classical quadratic penalty function technique in optimization and an operator splitting method in convex analysis, we further present an effective iterative algorithm to solve the variational problem. The efficiency of the proposed method is demonstrated by experimental results with simulated and real camera data.

  14. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has recently been proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in earlier work. This work puts forward ways of handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804
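
    The sparse-coding step that K-SVD alternates with its dictionary update can be sketched with a tiny orthogonal matching pursuit (the dictionary and sparsity level here are illustrative, and the paper's color-specific handling is not reproduced):

      import numpy as np

      def omp(D, y, k):
          # greedy OMP: pick k unit-norm atoms (columns of D) to approximate y
          resid, idx = y.copy(), []
          for _ in range(k):
              idx.append(int(np.argmax(np.abs(D.T @ resid))))
              coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
              resid = y - D[:, idx] @ coef
          x = np.zeros(D.shape[1])
          x[idx] = coef
          return x

      rng = np.random.default_rng(10)
      D = rng.normal(size=(64, 256))
      D /= np.linalg.norm(D, axis=0)
      y = 2.0 * D[:, 5] - 1.0 * D[:, 40]        # signal built from two atoms
      x = omp(D, y, k=2)                        # recovers a 2-sparse code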

  15. Dynamic Stochastic Superresolution of sparsely observed turbulent systems

    SciTech Connect

    Branicki, M.; Majda, A.J.

    2013-05-15

    Real-time capture of the relevant features of the unresolved turbulent dynamics of complex natural systems from sparse noisy observations and imperfect models is a notoriously difficult problem. The resulting lack of observational resolution and statistical accuracy in estimating the important turbulent processes, which intermittently send significant energy to the large-scale fluctuations, hinders efficient parameterization and real-time prediction using discretized PDE models. This issue is particularly subtle and important when dealing with turbulent geophysical systems with a vast range of interacting spatio-temporal scales and rough energy spectra near the mesh scale of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by appropriately filtering sparse regular observations with the help of cheap stochastic exactly solvable models, one can derive stochastically ‘superresolved’ velocity fields and gain insight into the important characteristics of the unresolved dynamics, including the detection of the so-called black swans. The DSS algorithms operate in the Fourier domain and exploit the fact that the coarse observation network aliases high-wavenumber information into the resolved waveband. It is shown that these cheap algorithms are robust and have significant skill on a test bed of turbulent solutions from realistic nonlinear turbulent spatially extended systems in the presence of a significant model error. In particular, the DSS algorithms are capable of successfully capturing time-localized extreme events in the unresolved modes, and they provide good and robust skill for recovery of the unresolved processes in terms of pattern correlation. Moreover, we show that DSS improves the skill for recovering the primary modes associated with the sparse observation mesh, which is equally important in applications. The skill of the various DSS algorithms depends on the energy spectrum

  16. Notes on implementation of sparsely distributed memory

    NASA Technical Reports Server (NTRS)

    Keeler, J. D.; Denning, P. J.

    1986-01-01

    The Sparsely Distributed Memory (SDM) developed by Kanerva is an unconventional memory design with very interesting and desirable properties. The memory works in a manner that is closely related to modern theories of human memory. The SDM model is discussed in terms of its implementation in hardware. Two appendices discuss the unconventional approaches of the SDM: Appendix A treats a resistive circuit for fast, parallel address decoding; and Appendix B treats a systolic array for high throughput read and write operations.

  17. Robust Fringe Projection Profilometry via Sparse Representation.

    PubMed

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail if the captured fringe images contain a complex scene, such as one with multiple and occluded objects. This introduces great difficulty to the phase unwrapping process of an FPP system, which can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them in the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. This information is then used to assist the phase unwrapping process in dealing with the various artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images contain a simple or a complex scene, and whether or not they are affected by the ambient lighting of the working environment. PMID:26890867

  18. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties of the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem, so regularization techniques such as Tikhonov regularization are used to solve it. A sparse regularization based on the L1 norm is superior in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization is the time required to solve the problem. To further improve the calculation speed of sparse regularization, a modified method based on a separable approximation algorithm is proposed, using an adaptive step size and a preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798
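
    The paper's accelerated solver is specialized, but the baseline L1-regularized problem it speeds up can be sketched with plain ISTA (J is a hypothetical linearized EIT sensitivity matrix; all sizes are illustrative):

      import numpy as np

      rng = np.random.default_rng(5)
      J = rng.normal(size=(104, 576))          # e.g. 104 measurements, 576 pixels
      x_true = np.zeros(576)
      x_true[100:110] = 1.0                    # sparse conductivity perturbation
      y = J @ x_true + 0.01 * rng.normal(size=104)

      lam = 0.1
      L = np.linalg.norm(J, 2) ** 2            # Lipschitz constant of the gradient
      x = np.zeros(576)
      for _ in range(300):
          g = J.T @ (J @ x - y)                # gradient of the data-fit term
          x = x - g / L
          x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft-threshold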

  19. Learning joint intensity-depth sparse representations.

    PubMed

    Tosic, Ivana; Drewes, Sarah

    2014-05-01

    This paper presents a method for learning overcomplete dictionaries of atoms composed of two modalities that describe a 3D scene: 1) image intensity and 2) scene depth. We propose a novel joint basis pursuit (JBP) algorithm that finds related sparse features in the two modalities using conic programming, and we integrate it into a two-step dictionary learning algorithm. JBP differs from related convex algorithms because it finds joint sparsity models with different atoms and different coefficient values for intensity and depth. This is crucial for recovering generative models where the same sparse underlying causes (3D features) give rise to different signals (intensity and depth). We give a bound for the recovery error of sparse coefficients obtained by JBP, and show numerically that JBP is superior to the group lasso algorithm. When applied to the Middlebury depth-intensity database, our learning algorithm converges to a set of related features, such as pairs of depth and intensity edges or image textures and depth slants. Finally, we show that JBP outperforms state-of-the-art methods on depth inpainting for time-of-flight and Microsoft Kinect 3D data. PMID:24723574

  20. Mean-field sparse optimal control

    PubMed Central

    Fornasier, Massimo; Piccoli, Benedetto; Rossi, Francesco

    2014-01-01

    We introduce the rigorous limit process connecting finite dimensional sparse optimal control problems with ODE constraints, modelling parsimonious interventions on the dynamics of a moving population divided into leaders and followers, to an infinite dimensional optimal control problem with a constraint given by a system of ODEs for the leaders coupled with a PDE of Vlasov type governing the dynamics of the probability distribution of the followers. In classical mean-field theory, one studies the behaviour of a large number of small individuals freely interacting with each other, by simplifying the effect of all the other individuals on any given individual to a single averaged effect. In this paper, we address instead the situation where the leaders are also influenced by an external policy maker, and we propagate its effect as the number N of followers goes to infinity. The technical derivation of the sparse mean-field optimal control is realized by the simultaneous development of the mean-field limit of the equations governing the followers' dynamics together with the Γ-limit of the finite dimensional sparse optimal control problems. PMID:25288818

  1. SAR Image despeckling via sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Zhongmei; Yang, Xiaomei; Zheng, Liang

    2014-11-01

    SAR image despeckling is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, a new approach is proposed for multiplicative noise removal in SAR images based on nonlocal sparse representation through dictionary learning and collaborative filtering. First, an image is divided into many patches, which are grouped into clusters of log-similar patches using fuzzy C-means (FCM). For each cluster, an over-complete dictionary is computed using the K-SVD method, which iteratively updates the dictionary and the sparse coefficients. The patches belonging to the same cluster are then reconstructed by a sparse combination of the corresponding dictionary atoms. The reconstructed patches are finally collaboratively aggregated to build the denoised image. Experimental results show that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both objective evaluation indices (PSNR and ENL) and subjective visual perception.

  2. Imaging black holes with sparse modeling

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Akiyama, Kazunori; Tazaki, Fumie; Kuramochi, Kazuki; Ikeda, Shiro; Hada, Kazuhiro; Uemura, Makoto

    2016-03-01

    We introduce a new imaging method for radio interferometry based on sparse modeling. The direct observables in radio interferometry are visibilities, which are the Fourier transform of an astronomical image on the sky plane; incomplete sampling of visibilities in the spatial frequency domain results in an under-determined problem, which has usually been solved by filling unsampled grid cells with zeros. In this paper we propose to solve this under-determined problem directly using sparse modeling without zero-filling, which realizes super-resolution, i.e., resolution higher than the standard diffraction limit. We show simulation results of sparse modeling for Event Horizon Telescope (EHT) observations of super-massive black holes and demonstrate that our approach has significant merit for observations of black hole shadows expected to be realized in the near future. We also present some results with the method applied to real data, and discuss more advanced techniques for practical observations, such as imaging with closure phases and treating the effect of interstellar scattering.
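
    A toy sketch of the core idea (sizes and uv-coverage are illustrative, not EHT parameters): visibilities are incomplete Fourier samples of the sky image, and instead of zero-filling the unmeasured cells we solve an L1-regularized inverse problem by iterative soft-thresholding.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 32
      img = np.zeros((n, n))
      img[12:20, 14:18] = 1.0                              # toy compact "source"
      mask = rng.random((n, n)) < 0.15                     # sparse uv-coverage
      vis = mask * np.fft.fft2(img, norm='ortho')          # measured visibilities

      lam, x = 0.05, np.zeros((n, n))
      for _ in range(200):
          resid = mask * np.fft.fft2(x, norm='ortho') - vis
          # gradient step (operator norm is 1 with 'ortho' scaling)
          x = x - np.real(np.fft.ifft2(resid, norm='ortho'))
          x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # promote sparsity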

  3. Framelet-Based Sparse Unmixing of Hyperspectral Images.

    PubMed

    Zhang, Guixu; Xu, Yingying; Fang, Faming

    2016-04-01

    Spectral unmixing aims at estimating the proportions (abundances) of pure spectra (endmembers) in each mixed pixel of hyperspectral data. Recently, a semi-supervised approach, which takes a spectral library as prior knowledge, has been attracting much attention in unmixing. In this paper, we propose a new semi-supervised unmixing model, termed framelet-based sparse unmixing (FSU), which promotes abundance sparsity in the framelet domain and discriminates between the approximation and detail components of the hyperspectral data after framelet decomposition. Owing to the advantages of framelet representations, e.g., that images have good sparse approximations in the framelet domain and that most of the additive noise is confined to the detail coefficients, the FSU model has better antinoise capability and accordingly leads to more desirable unmixing performance. The existence and uniqueness of the minimizer of the FSU model are then discussed, and the split Bregman algorithm and its convergence property are presented to obtain the minimal solution. Experimental results on both simulated and real data demonstrate that the FSU model generally performs better than the compared methods. PMID:26849863

  4. Modeling human performance with low light sparse color imagers

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Reynolds, Joseph P.; Cha, Jae; Hodgkin, Van

    2011-05-01

    Reflective band sensors are often signal-to-noise limited in low light conditions. Any additional filtering to obtain spectral information further reduces the signal-to-noise ratio, greatly affecting range performance. Modern sensors, such as the sparse color filter CCD, circumvent this additional degradation by reducing the number of pixels affected by filters and distributing the color information. As color sensors become more prevalent in the warfighter arsenal, the performance of the sensor-soldier system must be quantified. While field performance testing ultimately validates the success of a sensor, accurately modeling sensor performance greatly reduces development time and cost, allowing the best technology to reach the soldier the fastest. Modeling these sensors requires accounting for how the signal is affected by the modulation transfer function (MTF) and noise of the system. For these new sensors, the MTF and noise of each color band must be characterized, and the appropriate sampling and blur must be applied. We show how sparse array color filter sensors may be modeled and how a soldier's performance with such a sensor may be predicted. This general approach to modeling color sensors can be extended to incorporate all types of low light color sensors.

  5. Beyond Low Rank + Sparse: Multiscale Low Rank Matrix Decomposition

    NASA Astrophysics Data System (ADS)

    Ong, Frank; Lustig, Michael

    2016-06-01

    Low rank methods allow us to capture globally correlated components within matrices. The recent low rank + sparse decomposition further enables us to extract sparse entries along with the globally correlated components. In this paper, we present a natural generalization and consider the decomposition of matrices into components of multiple scales. Such a decomposition is well motivated in practice as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under an incoherence condition, the convex program recovers the multi-scale low rank components exactly. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information.

  6. Coronagraph-integrated wavefront sensing with a sparse aperture mask

    NASA Astrophysics Data System (ADS)

    Subedi, Hari; Zimmerman, Neil T.; Kasdin, N. Jeremy; Cavanagh, Kathleen; Riggs, A. J. Eldorado

    2015-07-01

    Stellar coronagraph performance is highly sensitive to optical aberrations. In order to effectively suppress starlight for exoplanet imaging applications, low-order wavefront aberrations entering a coronagraph, such as tip-tilt, defocus, and coma, must be determined and compensated. Previous authors have established the utility of pupil-plane masks (both nonredundant/sparse-aperture and generally asymmetric aperture masks) for wavefront sensing (WFS). Here, we show how a sparse aperture mask (SAM) can be integrated with a coronagraph to measure low-order differential phase aberrations. Starlight rejected by the coronagraph's focal plane stop is collimated to a relay pupil, where the mask forms an interference fringe pattern on a subsequent detector. Our numerical Fourier propagation models show that the information encoded in the fringe intensity distortions is sufficient to accurately discriminate and estimate Zernike phase modes extending from tip-tilt up to radial degree n=5, with amplitudes up to λ/20 RMS. The SAM sensor can be integrated with both Lyot and shaped pupil coronagraphs at no detriment to the science beam quality. We characterize the reconstruction accuracy and the performance under low flux/short exposure time conditions, and place it in the context of other coronagraph WFS schemes.

  7. Joint Low-Rank and Sparse Principal Feature Coding for Enhanced Robust Representation and Visual Classification.

    PubMed

    Zhang, Zhao; Li, Fanzhang; Zhao, Mingbo; Zhang, Li; Yan, Shuicheng

    2016-06-01

    Recovering low-rank and sparse subspaces jointly for enhanced robust representation and classification is discussed. Technically, we first propose a transductive low-rank and sparse principal feature coding (LSPFC) formulation that decomposes given data into a component part that encodes low-rank sparse principal features and a noise-fitting error part. To handle outside (out-of-sample) data well, we then present an inductive LSPFC (I-LSPFC). I-LSPFC incorporates embedded low-rank and sparse principal features by a projection into one problem for direct minimization, so that the projection can effectively map both inside and outside data into the underlying subspaces to learn more powerful and informative features for representation. To ensure that the features learned by I-LSPFC are optimal for classification, we further combine the classification error with the feature coding error to form a unified model, discriminative LSPFC (D-LSPFC), to boost performance. D-LSPFC seamlessly integrates feature coding and discriminative classification, so the representation and classification powers can be enhanced. The proposed approaches are more general, and several recent low-rank or sparse coding algorithms can be embedded into our problems as special cases. Visual and numerical results demonstrate the effectiveness of our methods for representation and classification. PMID:27046875

  8. Multitemporal Modelling of Socio-Economic Wildfire Drivers in Central Spain between the 1980s and the 2000s: Comparing Generalized Linear Models to Machine Learning Algorithms

    PubMed Central

    Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M. Pilar

    2016-01-01

    Socio-economic factors are of key importance during all phases of wildfire management, which include prevention, suppression, and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site is a good example of how human activities play a key role in wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s, to identify changes between the two periods in the socio-economic drivers affecting wildfire occurrence. GLM base their estimation on wildfire presence-absence observations, whereas Maxent uses wildfire presence only. According to indicators like sensitivity and commission error, Maxent outperformed GLM in both periods: it achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s, whereas GLM obtained 23.33%, 64.97%, 9.41%, and 18.34%, respectively. However, GLM performed more steadily than Maxent in terms of overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contributions. As a result of urban sprawl and the abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas the Forest-Grassland Interface (FGI) influence decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension and integrated into wildfire risk assessment. PMID:27557113

  9. Multitemporal Modelling of Socio-Economic Wildfire Drivers in Central Spain between the 1980s and the 2000s: Comparing Generalized Linear Models to Machine Learning Algorithms.

    PubMed

    Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M Pilar

    2016-01-01

    Socio-economic factors are of key importance during all phases of wildfire management, which include prevention, suppression, and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site is a good example of how human activities play a key role in wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and in the 2000s, to identify changes between the two periods in the socio-economic drivers affecting wildfire occurrence. GLM base their estimation on wildfire presence-absence observations, whereas Maxent uses wildfire presence only. According to indicators like sensitivity and commission error, Maxent outperformed GLM in both periods: it achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s, whereas GLM obtained 23.33%, 64.97%, 9.41%, and 18.34%, respectively. However, GLM performed more steadily than Maxent in terms of overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contributions. As a result of urban sprawl and the abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas the Forest-Grassland Interface (FGI) influence decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension and integrated into wildfire risk assessment. PMID:27557113
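
    A minimal sketch of the GLM side of the comparison above, fitting a logistic regression to synthetic presence/absence data with predictors of the kind the study names (all variable names, coefficients, and distributions are hypothetical):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import recall_score

      rng = np.random.default_rng(7)
      n = 2000
      X = np.column_stack([
          rng.gamma(2.0, 50.0, n),        # population density (per km^2)
          rng.random(n),                  # wildland-urban interface share
          rng.exponential(3.0, n),        # distance to roads (km)
      ])
      logit = 0.004 * X[:, 0] + 2.0 * X[:, 1] - 0.3 * X[:, 2] - 1.0
      y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

      glm = LogisticRegression(max_iter=1000).fit(X, y)   # presence-absence GLM
      print('sensitivity:', recall_score(y, glm.predict(X)))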

  10. A semiparametric negative binomial generalized linear model for modeling over-dispersed count data with a heavy tail: Characteristics and applications to crash data.

    PubMed

    Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy

    2016-06-01

    Crash data can often be characterized by over-dispersion, a heavy (long) tail, and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed to overcome the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues this line of work. The objective is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) for modeling crash data. The objective was accomplished using two datasets. The new model was compared to the NB model and the recently introduced model based on the mixture of the NB and Lindley distributions (NB-L). Overall, the study shows that the NB-DP model offers better performance than the NB model when data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large number of zeros. In addition to greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. PMID:26945472

  11. Estimating correlation by using a general linear mixed model: evaluation of the relationship between the concentration of HIV-1 RNA in blood and semen.

    PubMed

    Chakraborty, Hrishikesh; Helms, Ronald W; Sen, Pranab K; Cohen, Myron S

    2003-05-15

    Estimating the correlation coefficient between two outcome variables is one of the most important aspects of epidemiological and clinical research. A simple Pearson's correlation coefficient is usually employed when there are complete, independent data points for both outcome variables. However, researchers often deal with correlated observations in a longitudinal setting with missing values, where a simple Pearson's correlation coefficient cannot be used. General linear mixed model (GLMM) techniques were used to estimate correlation coefficients in a longitudinal data set with missing values. A random regression mixed model with an unstructured covariance matrix was employed to estimate correlation coefficients between concentrations of HIV-1 RNA in blood and seminal plasma. The effects of CD4 count and antiretroviral therapy were also examined. We used data sets from three different centres (650 samples from 238 patients) where blood and seminal plasma HIV-1 RNA concentrations were collected from patients; 137 samples from 90 patients without antiviral therapy and 513 samples from 148 patients receiving therapy were considered for analysis. We found no significant correlation between blood and semen HIV-1 RNA concentrations in the absence of antiviral therapy. However, a moderate correlation between blood and semen HIV-1 RNA was observed among subjects with lower CD4 counts receiving therapy. Our findings confirm and extend the idea that the concentration of HIV-1 in semen often differs from the HIV-1 concentration in blood. Antiretroviral therapy administered to subjects with low CD4 counts results in sufficient concomitant reduction of HIV-1 in blood and semen to improve the correlation between these compartments. These results have important implications for studies of the sexual transmission of HIV and for the development of HIV prevention strategies. PMID:12704609

  12. Efficient visual tracking via low-complexity sparse representation

    NASA Astrophysics Data System (ADS)

    Lu, Weizhi; Zhang, Jinglin; Kpalma, Kidiyo; Ronsin, Joseph

    2015-12-01

    Thanks to its good performance on object recognition, sparse representation has recently been widely studied in the area of visual object tracking. Up to now, little attention has been paid to the complexity of sparse representation, as most works have focused on performance improvement. By reducing the computational load related to sparse representation by hundreds of times, this paper proposes by far the most computationally efficient tracking approach based on sparse representation. The proposal simply consists of two stages of sparse representation, one for object detection and the other for object validation. Experimentally, it achieves better performance than some state-of-the-art methods in both accuracy and speed.

  13. A sparse algorithm for the evaluation of the local energy in quantum Monte Carlo.

    PubMed

    Aspuru-Guzik, Alán; Salomón-Ferrer, Romelia; Austin, Brian; Lester, William A

    2005-05-01

    A new algorithm is presented for the sparse representation and evaluation of Slater determinants in the quantum Monte Carlo (QMC) method. The approach, combined with the use of localized orbitals in a Slater-type orbital basis set, significantly extends the size of molecule that can be treated with the QMC method. Application of the algorithm to systems containing up to 390 electrons confirms that the cost of evaluating the Slater determinant scales linearly with system size. PMID:15761862

  14. Generalized subspace correction methods

    SciTech Connect

    Kolm, P.; Arbenz, P.; Gander, W.

    1996-12-31

    A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide area of applications. Due to the rapid development and increasing demand for the large computing powers of parallel computers, it has become important to design iterative methods specialized for these new architectures.

  15. KLU2 Direct Linear Solver Package

    Energy Science and Technology Software Center (ESTSC)

    2012-01-04

    KLU2 is a direct sparse solver for unsymmetric linear systems. It is related to the existing KLU solver (available in the Amesos package and as a stand-alone package from the University of Florida), but provides template support for scalar and ordinal types. It uses a left-looking LU factorization method.
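
    KLU2 itself is a templated C++ package; as a rough functional stand-in, the same factor-once/solve-many pattern for an unsymmetric sparse system looks like this in SciPy (SuperLU rather than KLU under the hood):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu

      n = 1000
      # unsymmetric tridiagonal test system in CSC format (required by splu)
      A = sp.diags([-1.0, 4.0, -1.5], [-1, 0, 1], shape=(n, n), format='csc')
      b = np.ones(n)

      lu = splu(A)          # sparse LU factorization
      x = lu.solve(b)       # reuse the factorization for multiple right-hand sides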

  16. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, G K

    2000-05-01

    We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively

  17. Adaptive block-wise alphabet reduction scheme for lossless compression of images with sparse and locally sparse histograms

    NASA Astrophysics Data System (ADS)

    Masmoudi, Atef; Zouari, Sonia; Ghribi, Abdelaziz

    2015-11-01

    We propose a new adaptive block-wise lossless image compression algorithm based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). This new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate close to the entropy; however, a compression performance loss occurs when encoding images or blocks whose number of active symbols is small compared with the number of symbols in the nominal alphabet, which amplifies the zero-frequency problem. Most methods add one to the frequency count of each symbol from the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set that includes all the symbols actually present, called active symbols, as an alternative to using the nominal alphabet with conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including conventional arithmetic encoders, JPEG2000, and JPEG-LS.
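
    A sketch of the active-symbol idea under illustrative assumptions (an 8-bit nominal alphabet, a single 256-sample block, and the usual +1 count on every modeled symbol): restricting the model to a block's active symbols avoids spreading probability mass over inactive ones, which the per-block entropies make visible.

      import numpy as np

      def block_entropy(block, alphabet_size):
          counts = np.ones(alphabet_size)          # +1 to every modeled symbol
          vals, c = np.unique(block, return_counts=True)
          counts[vals] += c
          p = counts / counts.sum()
          return -(p * np.log2(p)).sum()

      rng = np.random.default_rng(8)
      block = rng.choice([3, 7, 250], size=256)    # only 3 active symbols

      full = block_entropy(block, 256)             # nominal 8-bit alphabet
      active = np.unique(block)                    # reduced, block-specific alphabet
      reduced = block_entropy(np.searchsorted(active, block), len(active))
      print(f'nominal: {full:.2f} bits/symbol, reduced: {reduced:.2f}')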

  18. Assimilating irregularly spaced sparsely observed turbulent signals with hierarchical Bayesian reduced stochastic filters

    SciTech Connect

    Brown, Kristen A.; Harlim, John

    2013-02-15

    In this paper, we consider a practical filtering approach for assimilating irregularly spaced, sparsely observed turbulent signals through a hierarchical Bayesian reduced stochastic filtering framework. The proposed hierarchical Bayesian approach consists of two steps, blending a data-driven interpolation scheme and the Mean Stochastic Model (MSM) filter. We examine the potential of using the deterministic piecewise linear interpolation scheme and the ordinary kriging scheme in interpolating irregularly spaced raw data to regularly spaced processed data and the importance of dynamical constraint (through MSM) in filtering the processed data on a numerically stiff state estimation problem. In particular, we test this approach on a two-layer quasi-geostrophic model in a two-dimensional domain with a small radius of deformation to mimic ocean turbulence. Our numerical results suggest that the dynamical constraint becomes important when the observation noise variance is large. Second, we find that the filtered estimates with ordinary kriging are superior to those with linear interpolation when observation networks are not too sparse; such robust results are found from numerical simulations with many randomly simulated irregularly spaced observation networks, various observation time intervals, and observation error variances. Third, when the observation network is very sparse, we find that both the kriging and linear interpolations are comparable.

  19. Assimilating irregularly spaced sparsely observed turbulent signals with hierarchical Bayesian reduced stochastic filters

    NASA Astrophysics Data System (ADS)

    Brown, Kristen A.; Harlim, John

    2013-02-01

    In this paper, we consider a practical filtering approach for assimilating irregularly spaced, sparsely observed turbulent signals through a hierarchical Bayesian reduced stochastic filtering framework. The proposed hierarchical Bayesian approach consists of two steps, blending a data-driven interpolation scheme and the Mean Stochastic Model (MSM) filter. We examine the potential of using the deterministic piecewise linear interpolation scheme and the ordinary kriging scheme in interpolating irregularly spaced raw data to regularly spaced processed data and the importance of dynamical constraint (through MSM) in filtering the processed data on a numerically stiff state estimation problem. In particular, we test this approach on a two-layer quasi-geostrophic model in a two-dimensional domain with a small radius of deformation to mimic ocean turbulence. Our numerical results suggest that the dynamical constraint becomes important when the observation noise variance is large. Second, we find that the filtered estimates with ordinary kriging are superior to those with linear interpolation when observation networks are not too sparse; such robust results are found from numerical simulations with many randomly simulated irregularly spaced observation networks, various observation time intervals, and observation error variances. Third, when the observation network is very sparse, we find that both the kriging and linear interpolations are comparable.
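
    The first step above, mapping irregularly spaced observations onto a regular grid, is simple to sketch for the piecewise linear variant; the kriging variant would replace np.interp with a covariance-model-based predictor (all values here are illustrative):

      import numpy as np

      rng = np.random.default_rng(9)
      obs_x = np.sort(rng.uniform(0, 2 * np.pi, 15))       # irregular network
      obs_v = np.sin(obs_x) + 0.1 * rng.normal(size=15)    # noisy observations

      grid = np.linspace(0, 2 * np.pi, 64)                 # regular model grid
      processed = np.interp(grid, obs_x, obs_v)            # piecewise linear step
      # 'processed' would then be fed to the Mean Stochastic Model filter.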

  20. A survey of packages for large linear systems

    SciTech Connect

    Wu, Kesheng; Milne, Brent

    2000-02-11

    This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and that the evaluation process may serve as an example of how to evaluate such packages. The information contained here includes feature comparisons, usability evaluations, and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation, and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and continuously evolving, so their user interfaces may change. In general, packages written in Fortran 77 are more cumbersome to use because the user may need to deal directly with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms which make it easier to implement a clean and intuitive user