Science.gov

Sample records for general sparse linear

  1. MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems

    SciTech Connect

    Young, D.M.; Chen, J.Y.

    1994-12-31

    The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse, and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A^{-1}b of (1). They also choose an auxiliary nonsingular matrix Z. For n = 1, 2, ..., they determine u^(n) such that u^(n) - u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the (Krylov) subspace spanned by the Krylov vectors r^(0), Ar^(0), ..., A^{n-1}r^(0), and where r^(0) = b - Au^(0). If ZA is SPD, they also require that (u^(n) - ū, ZA(u^(n) - ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition (Zr^(n), v) = 0 be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b - Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as MGMRES, is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
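
MGMRES itself is not available in standard libraries, but the baseline it generalizes is. A minimal sketch of solving a large, sparse, nonsymmetric system Au = b with ordinary GMRES via SciPy (the test matrix and sizes are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import gmres

# Illustrative sparse nonsymmetric A: a random sparsity pattern plus a
# diagonal shift to keep the matrix nonsingular and well conditioned.
rng = np.random.default_rng(0)
n = 200
A = sprandom(n, n, density=0.02, random_state=0, format="csr") + 4.0 * eye(n, format="csr")
b = rng.standard_normal(n)

# GMRES picks u^(n) in u^(0) + K_n(r^(0), A) minimizing ||b - A u^(n)||;
# MGMRES differs by working with Z = A^T Y instead of Z = A^T.
u, info = gmres(A, b)
```

Here `info == 0` signals convergence, with the residual norm ||b - Au|| driven below SciPy's default relative tolerance.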

  2. Iterative solution of general sparse linear systems on clusters of workstations

    SciTech Connect

    Lo, Gen-Ching; Saad, Y.

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels such as inner products could erase any gains from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.

  3. Sparse linear programming subprogram

    SciTech Connect

    Hanson, R.J.; Hiebert, K.L.

    1981-12-01

    This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
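
SPLP() itself is Fortran 77, but the problem class it accepts (sparse constraint matrix; upper, lower, or no bounds; primal and dual outputs) can be sketched with SciPy's `linprog`; the small numbers below are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

# A tiny LP of the class SPLP() accepts: sparse constraints and mixed bounds.
c = np.array([1.0, 2.0, 0.0])
A_eq = csr_matrix([[1.0, 1.0, 1.0],
                   [0.0, 1.0, 2.0]])
b_eq = np.array([4.0, 3.0])
bounds = [(0, None),     # x1 >= 0, no upper bound
          (0, 2),        # 0 <= x2 <= 2
          (None, None)]  # x3 free

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
# Primal solution in res.x; dual values for the equality constraints in
# res.eqlin.marginals -- analogous to SPLP() returning both solutions.
```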

  4. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving the sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
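
The column-wise iteration can be sketched with minimal-residual updates per column (a dense-numpy sketch under assumed test data; the function name is hypothetical, and the paper's point is that a real implementation must keep everything in sparse mode):

```python
import numpy as np

def mr_approx_inverse(A, n_iter=10):
    """Improve each column m_j of M so that A @ m_j approaches e_j,
    using minimal-residual steps (dense for clarity only)."""
    n = A.shape[0]
    alpha = np.trace(A) / np.sum(A * A)  # M0 = alpha*I minimizes ||I - alpha*A||_F
    M = alpha * np.eye(n)
    for j in range(n):
        m, e = M[:, j].copy(), np.zeros(n)
        e[j] = 1.0
        for _ in range(n_iter):
            r = e - A @ m                  # residual of column j
            Ar = A @ r
            if Ar @ Ar == 0.0:
                break
            m += (r @ Ar) / (Ar @ Ar) * r  # minimal-residual step
        M[:, j] = m
    return M

rng = np.random.default_rng(0)
n = 50
A = 4.0 * np.eye(n) + rng.uniform(-1, 1, (n, n)) / n  # illustrative matrix
M = mr_approx_inverse(A)
err = np.linalg.norm(np.eye(n) - A @ M)
```

The resulting M can then serve directly as a preconditioner, since applying it is just a matrix-vector product.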

  5. Inpainting with sparse linear combinations of exemplars

    SciTech Connect

    Wohlberg, Brendt

    2008-01-01

    We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity of correctly ordering the filling in of missing regions of other exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.

  6. Inpainting With Sparse Linear Combinations of Exemplars

    DTIC Science & Technology

    2010-05-01

    Alamos, NM 87545, USA ABSTRACT We introduce a new exemplar-based inpainting algorithm that represents the region to be inpainted as a sparse linear combi...exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than...other recent methods. Index Terms— Image restoration, Inpainting, Exemplar 1. INTRODUCTION Exemplar based methods are becoming increasingly popular

  7. Sparse brain network using penalized linear regression

    NASA Astrophysics Data System (ADS)

    Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.

    2011-03-01

    Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with an l1-norm penalty. The method is applied to a brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of autism spectrum disorder (ASD) children and pediatric control (PedCon) subjects. To validate the results, we check the reproducibility of the obtained brain networks by leave-one-out cross-validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
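
The l1-penalized node-wise regression step can be sketched as follows (synthetic data and penalty weight are assumptions for illustration, not the paper's FDG-PET data):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic stand-in for the small-n large-p ROI setting: 20 subjects,
# 50 regions, with ROI 0 and ROI 1 strongly coupled (all values invented).
rng = np.random.default_rng(0)
n_subjects, n_rois = 20, 50
X = rng.standard_normal((n_subjects, n_rois))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n_subjects)

# Node-wise l1-penalized regression: regress each region on all others;
# nonzero coefficients give a sparse partial-correlation-style graph.
adj = np.zeros((n_rois, n_rois))
for j in range(n_rois):
    others = np.delete(np.arange(n_rois), j)
    adj[j, others] = Lasso(alpha=0.1).fit(X[:, others], X[:, j]).coef_
```

The nonzero pattern of `adj` is the estimated sparse network; the coupled pair of regions shows up as a nonzero edge while most entries are shrunk to zero.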

  8. The efficient parallel iterative solution of large sparse linear systems

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1992-06-01

    The development of efficient, general-purpose software for the iterative solution of sparse linear systems on a parallel MIMD computer requires an interesting combination of expertise. Parallel graph heuristics, convergence analysis, and basic linear algebra implementation issues must all be considered. In this paper, we discuss how we have incorporated recent results in these areas into a general-purpose iterative solver. First, we consider two recently developed parallel graph coloring heuristics. We show how the method proposed by Luby, based on determining maximal independent sets, can be modified to run in an asynchronous manner and give an expected running time bound for this modified heuristic. In addition, a number of graph reduction heuristics are described that are used in our implementation to improve the individual processor performance. The effect of these various graph reductions on the solution of sparse triangular systems is categorized. Finally, we discuss the performance of this solver from the perspective of two large-scale applications: a piezoelectric crystal finite-element modeling problem, and a nonlinear optimization problem to determine the minimum energy configuration of a three-dimensional, layered superconductor model.

  9. Out-of-Core Solutions of Complex Sparse Linear Equations

    NASA Technical Reports Server (NTRS)

    Yip, E. L.

    1982-01-01

    ETCLIB is a library of subroutines for obtaining out-of-core solutions of complex sparse linear equations. The routines apply to dense and sparse matrices too large to be stored in core. The library is useful for solving any set of linear equations, but is particularly useful in cases where the coefficient matrix has no special properties that guarantee convergence with any of the iterative processes. The only assumption made is that the coefficient matrix is not singular.

  10. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse system of linear equations that results from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.

  11. Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging

    SciTech Connect

    Sheen, David M.; Hall, Thomas E.

    2014-06-09

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  12. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    SciTech Connect

    Pilipchuk, L. A.; Pilipchuk, A. S.

    2015-11-30

    In this paper we propose the theory of decomposition, methods, technologies, applications and their implementation in Wolfram Mathematica for constructing solutions of sparse linear systems. One of the applications is the Sensor Location Problem for a symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate the rank of its matrix. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.

  13. Generalized sparse regularization with application to fMRI brain decoding.

    PubMed

    Ng, Bernard; Abugharbieh, Rafeef

    2011-01-01

    Many current medical image analysis problems involve learning thousands or even millions of model parameters from extremely few samples. Employing sparse models provides an effective means for handling the curse of dimensionality, but other propitious properties beyond sparsity are typically not modeled. In this paper, we propose a simple approach, generalized sparse regularization (GSR), for incorporating domain-specific knowledge into a wide range of sparse linear models, such as the LASSO and group LASSO regression models. We demonstrate the power of GSR by building anatomically-informed sparse classifiers that additionally model the intrinsic spatiotemporal characteristics of brain activity for fMRI classification. We validate on real data and show how prior-informed sparse classifiers outperform standard classifiers, such as SVM and a number of sparse linear classifiers, both in terms of prediction accuracy and result interpretability. Our results illustrate the added-value in facilitating flexible integration of prior knowledge beyond sparsity in large-scale model learning problems.

  14. Scalable Library for the Parallel Solution of Sparse Linear Systems

    SciTech Connect

    Jones, Mark; Plassmann, Paul E.

    1993-07-14

    BlockSolve is a scalable parallel software library for the solution of large sparse, symmetric systems of linear equations. It runs on a variety of parallel architectures and can easily be ported to others. BlockSolve is primarily intended for the solution of sparse linear systems that arise from physical problems having multiple degrees of freedom at each node point. For example, when the finite element method is used to solve practical problems in structural engineering, each node will typically have anywhere from 3-6 degrees of freedom associated with it. BlockSolve is written to take advantage of problems of this nature; however, it is still reasonably efficient for problems that have only one degree of freedom associated with each node, such as the three-dimensional Poisson problem. It does not require that the matrices have any particular structure other than being sparse and symmetric. BlockSolve is intended to be used within real application codes. It is designed around our experience, which indicates that most application codes repeatedly solve the same linear system with several different right-hand sides and/or solve linear systems with the same structure but different matrix values.

  15. A multi-level method for sparse linear systems

    SciTech Connect

    Shapira, Y.

    1997-09-01

    A multi-level method for the solution of sparse linear systems is introduced. The definition of the method is based on data from the coefficient matrix alone. An upper bound for the condition number is available for certain symmetric positive definite (SPD) problems. Numerical experiments confirm the analysis and illustrate the efficiency of the method for diffusion problems with discontinuous coefficients whose discontinuities are not aligned with the coarse meshes.

  16. Reconstruction techniques for sparse multistatic linear array microwave imaging

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2014-06-01

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  17. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    PubMed

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.

  18. Solution of large, sparse systems of linear equations in massively parallel applications

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1992-01-01

    We present a general-purpose parallel iterative solver for large, sparse systems of linear equations. This solver is used in two applications, a piezoelectric crystal vibration problem and a superconductor model, that could be solved only on the largest available massively parallel machine. Results obtained on the Intel DELTA show computational rates of up to 3.25 gigaflops for these applications.

  19. Predicting cognitive data from medical images using sparse linear regression.

    PubMed

    Kandel, Benjamin M; Wolk, David A; Gee, James C; Avants, Brian

    2013-01-01

    We present a new framework for predicting cognitive or other continuous-variable data from medical images. Current methods of probing the connection between medical images and other clinical data typically use voxel-based mass univariate approaches. These approaches do not take into account the multivariate, network-based interactions between the various areas of the brain and do not give readily interpretable metrics that describe how strongly cognitive function is related to neuroanatomical structure. On the other hand, high-dimensional machine learning techniques do not typically provide a direct method for discovering which parts of the brain are used for making predictions. We present a framework, based on recent work in sparse linear regression, that addresses both drawbacks of mass univariate approaches, while preserving the direct spatial interpretability that they provide. In addition, we present a novel optimization algorithm that adapts the conjugate gradient method for sparse regression on medical imaging data. This algorithm produces coefficients that are more interpretable than existing sparse regression techniques.

  20. Learning a Nonnegative Sparse Graph for Linear Regression.

    PubMed

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung

    2015-09-01

    Previous graph-based semisupervised learning (G-SSL) methods have the following drawbacks: 1) they usually predefine the graph structure and then use it to perform label prediction, which cannot guarantee an overall optimum, and 2) they only focus on the label prediction or the graph structure construction but are not competent in handling new samples. To this end, a novel nonnegative sparse graph (NNSG) learning method was first proposed. Then, both the label prediction and projection learning were integrated into linear regression. Finally, the linear regression and graph structure learning were unified within the same framework to overcome these two drawbacks. Therefore, a novel method, named learning a NNSG for linear regression, was presented, in which the linear regression and graph learning were simultaneously performed to guarantee an overall optimum. In the learning process, the label information can be accurately propagated via the graph structure so that the linear regression can learn a discriminative projection to better fit sample labels and accurately classify new samples. An effective algorithm was designed to solve the corresponding optimization problem with fast convergence. Furthermore, NNSG provides a unified perspective on a number of graph-based learning methods and linear regression methods. The experimental results showed that NNSG can obtain very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods.

  1. Sparse stochastic processes and discretization of linear inverse problems.

    PubMed

    Bostan, Emrah; Kamilov, Ulugbek S; Nilchian, Masih; Unser, Michael

    2013-07-01

    We present a novel statistically-based discretization paradigm and derive a class of maximum a posteriori (MAP) estimators for solving ill-conditioned linear inverse problems. We are guided by the theory of sparse stochastic processes, which specifies continuous-domain signals as solutions of linear stochastic differential equations. Accordingly, we show that the class of admissible priors for the discretized version of the signal is confined to the family of infinitely divisible distributions. Our estimators not only cover the well-studied methods of Tikhonov and l1-type regularizations as particular cases, but also open the door to a broader class of sparsity-promoting regularization schemes that are typically nonconvex. We provide an algorithm that handles the corresponding nonconvex problems and illustrate the use of our formalism by applying it to deconvolution, magnetic resonance imaging, and X-ray tomographic reconstruction problems. Finally, we compare the performance of estimators associated with models of increasing sparsity.

  2. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.

  3. A General Sparse Tensor Framework for Electronic Structure Theory

    DOE PAGES

    Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...

    2017-01-24

    Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. But the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We then avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
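
The core idea of block sparsity can be illustrated with a toy dictionary-of-blocks in Python (a conceptual sketch only; the paper's library is a symbolic C++ implementation, and the blocks below are invented):

```python
import numpy as np

# Block-sparse matrices stored as {(block_row, block_col): dense block}.
# A contraction then touches only the stored (nonzero) blocks.
bs = 4
A = {(0, 0): np.eye(bs), (1, 2): 2.0 * np.eye(bs)}
B = {(0, 1): np.ones((bs, bs)), (2, 0): np.eye(bs)}

C = {}
for (i, k), Ablk in A.items():
    for (k2, j), Bblk in B.items():
        if k == k2:  # inner block indices must match to contribute
            C[(i, j)] = C.get((i, j), np.zeros((bs, bs))) + Ablk @ Bblk

# Only blocks (0,1) and (1,0) of the product are ever formed; all other
# blocks are implicitly zero and cost nothing.
```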

  4. A linear recurrent kernel online learning algorithm with sparse updates.

    PubMed

    Fan, Haijin; Song, Qing

    2014-02-01

    In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output. This makes past information reusable for updating the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch learning of the recurrent information on or off according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set to zero when the training violates the derived convergence conditions, which makes the algorithm's updating process sparse. Theoretical analyses of the weight convergence are presented, and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy.

  5. Performance bounds for modal analysis using sparse linear arrays

    NASA Astrophysics Data System (ADS)

    Li, Yuanxin; Pezeshki, Ali; Scharf, Louis L.; Chi, Yuejie

    2017-05-01

    We study the performance of modal analysis using sparse linear arrays (SLAs) such as nested and co-prime arrays, in both first-order and second-order measurement models. We treat SLAs as constructed from a subset of sensors in a dense uniform linear array (ULA), and characterize the performance loss of SLAs with respect to the ULA due to using much fewer sensors. In particular, we claim that, given the same aperture, in order to achieve comparable performance in terms of the Cramér-Rao bound (CRB) for modal analysis, SLAs require more snapshots: approximately the number of snapshots used by the ULA times the compression ratio in the number of sensors. This is shown analytically for the case with one undamped mode, as well as empirically via extensive numerical experiments for more complex scenarios. Moreover, the misspecified CRB proposed by Richmond and Horowitz is also studied, where SLAs suffer more performance loss than their ULA counterpart.

  6. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2008-01-01

    We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  7. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  8. A linear geospatial streamflow modeling system for data sparse environments

    USGS Publications Warehouse

    Asante, Kwabena O.; Arlan, Guleid A.; Pervez, Md Shahriar; Rowland, James

    2008-01-01

    In many river basins around the world, inaccessibility of flow data is a major obstacle to water resource studies and operational monitoring. This paper describes a geospatial streamflow modeling system which is parameterized with global terrain, soils and land cover data and run operationally with satellite‐derived precipitation and evapotranspiration datasets. Simple linear methods transfer water through the subsurface, overland and river flow phases, and the resulting flows are expressed in terms of standard deviations from mean annual flow. In sample applications, the modeling system was used to simulate flow variations in the Congo, Niger, Nile, Zambezi, Orange and Lake Chad basins between 1998 and 2005, and the resulting flows were compared with mean monthly values from the open‐access Global River Discharge Database. While the uncalibrated model cannot predict the absolute magnitude of flow, it can quantify flow anomalies in terms of relative departures from mean flow. Most of the severe flood events identified in the flow anomalies were independently verified by the Dartmouth Flood Observatory (DFO) and the Emergency Disaster Database (EM‐DAT). Despite its limitations, the modeling system is valuable for rapid characterization of the relative magnitude of flood hazards and seasonal flow changes in data sparse settings.
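
The simple linear transfer used in each flow phase can be sketched as a linear reservoir, with flows then standardized as anomalies the way the paper reports them (coefficients and inputs below are invented for illustration, not the model's calibrated global parameters):

```python
import numpy as np

def linear_reservoir(precip, k=0.2, s0=0.0):
    """Linear reservoir: outflow is proportional to storage, Q = k*S."""
    s, flows = s0, []
    for p in precip:
        s += p          # add the period's input to storage
        q = k * s       # linear outflow
        s -= q          # drain the released water
        flows.append(q)
    return np.array(flows)

precip = np.array([10.0, 0, 0, 5.0, 0, 0, 0, 20.0, 0, 0])
q = linear_reservoir(precip)

# Express flows as standardized departures from the mean, as the
# uncalibrated model does instead of predicting absolute magnitudes.
anom = (q - q.mean()) / q.std()
```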

  9. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
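    The deterministic backbone of these hybrid schemes is a stationary Richardson iteration built from a convergent splitting A = M - N. A minimal sketch with a hypothetical diagonally dominant matrix and the Jacobi splitting; the Monte Carlo acceleration itself is omitted:

```python
import numpy as np

# A small diagonally dominant system; a hypothetical stand-in for a
# sparse matrix arising from a PDE discretization.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

# Convergent splitting A = M - N, with M the diagonal (Jacobi choice).
Minv = np.diag(1.0 / np.diag(A))

# Preconditioned Richardson iteration: x <- x + M^{-1} (b - A x).
x = np.zeros(3)
for _ in range(100):
    x = x + Minv @ (b - A @ x)
```

The hybrid methods in the paper replace parts of this update with Monte Carlo estimates, which is where the potential for fault resilience comes from.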

  10. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation.

    PubMed

    Grossi, Giuliano; Lanzarotti, Raffaella; Lin, Jianyi

    2017-01-01

    In the sparse representation model, the design of overcomplete dictionaries plays a key role in effectiveness and applicability across different domains. Recent research has produced several dictionary learning approaches, and it has been shown that dictionaries learnt from data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists of adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists of repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.
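    The building block behind R-SVD's dictionary stage is the classical orthogonal Procrustes solution: the orthogonal matrix that best maps D onto a target T is U @ Vt from the SVD of T @ D.T. A sketch of that solution alone, on hypothetical data (not the full R-SVD algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical group of dictionary atoms D (columns) and a target T that
# the rotated atoms should approximate: minimize ||T - Omega @ D||_F
# over orthogonal Omega. The Procrustes solution is Omega = U @ Vt,
# where U, Vt come from the SVD of T @ D.T.
D = rng.standard_normal((5, 3))
ortho, _ = np.linalg.qr(rng.standard_normal((5, 5)))
T = ortho @ D  # target generated by an exact orthogonal transform

U, _, Vt = np.linalg.svd(T @ D.T)
Omega = U @ Vt  # closed-form Procrustes optimum
```

Here the target was generated by an exact orthogonal transform, so the recovered Omega maps D onto T exactly; in the learning setting it only minimizes the Frobenius misfit.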

  11. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation

    PubMed Central

    Grossi, Giuliano; Lin, Jianyi

    2017-01-01

    In the sparse representation model, the design of overcomplete dictionaries plays a key role in effectiveness and applicability across different domains. Recent research has produced several dictionary learning approaches, and it has been shown that dictionaries learnt from data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists of adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists of repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability. PMID:28103283

  12. Quantization of general linear electrodynamics

    SciTech Connect

    Rivera, Sergio; Schuller, Frederic P.

    2011-03-15

    General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.

  13. Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies

    PubMed Central

    Mahrous, Hesham; Ward, Rabab

    2016-01-01

    This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channels signal data as a vector that is constructed in a specific way, so that it has a better block sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm, so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied on the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and also MMV methods. It also shows significant lower compression errors even at high compression ratios such as 10:1 on three different datasets. PMID:26861335

  14. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  15. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms, so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.

  16. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
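    The multiatom blocking idea is the same trade-off implemented by block-compressed sparse row (BSR) storage, where each stored block is dense and multiplications run BLAS-style dense kernels block by block. A sketch using scipy's BSR format with a hypothetical block structure (not the paper's code):

```python
import numpy as np
from scipy.sparse import bsr_matrix

rng = np.random.default_rng(1)

# Hypothetical block-sparse matrix: a 4x4 grid of 3x3 blocks, most empty.
# BSR stores the nonzero blocks densely, so multiplications use dense
# (BLAS-style) kernels per block, analogous to the multiatom blocking
# scheme; zeros inside a stored block are kept, which is the lost
# sparsity the record mentions.
dense = np.zeros((12, 12))
dense[0:3, 0:3] = rng.standard_normal((3, 3))
dense[3:6, 6:9] = rng.standard_normal((3, 3))
dense[9:12, 3:6] = rng.standard_normal((3, 3))

A_bsr = bsr_matrix(dense, blocksize=(3, 3))
B = rng.standard_normal((12, 4))

C = A_bsr @ B  # block-sparse times dense multiply
```

Choosing the block size is exactly the balance the paper quantifies: bigger blocks mean better BLAS throughput but more stored zeros.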

  17. A linear programming approach for estimating the structure of a sparse linear genetic network from transcript profiling data

    PubMed Central

    Bhadra, Sahely; Bhattacharyya, Chiranjib; Chandra, Nagasuma R; Mian, I Saira

    2009-01-01

    Background A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a Linear Program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations.
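    The LASSO subproblem at the heart of this approach can be sketched with a simple proximal-gradient (ISTA) solver rather than the paper's linear-programming formulation; the gene-expression data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical transcript profiles: predict one gene's abundance from
# the others; nonzero coefficients suggest candidate network edges.
n_samples, n_genes = 60, 20
X = rng.standard_normal((n_samples, n_genes))
true_w = np.zeros(n_genes)
true_w[[2, 7]] = [1.5, -2.0]          # the gene truly depends on genes 2 and 7
y = X @ true_w + 0.01 * rng.standard_normal(n_samples)

# ISTA: proximal-gradient solver for min_w 0.5*||y - Xw||^2 + lam*||w||_1.
# (The paper solves the same LASSO objective via a linear program instead.)
lam = 1.0
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
w = np.zeros(n_genes)
for _ in range(500):
    grad = X.T @ (X @ w - y)
    z = w - step * grad
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

edges = np.flatnonzero(np.abs(w) > 0.1)   # recovered candidate edges
```

Repeating this regression once per gene, with the remaining genes as predictors, yields the undirected edge set of the network.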

  18. A novel method to design sparse linear arrays for ultrasonic phased array.

    PubMed

    Yang, Ping; Chen, Bin; Shi, Ke-Ren

    2006-12-22

    In ultrasonic phased array testing, a sparse array can increase the resolution by enlarging the aperture without adding system complexity. Designing a sparse array involves choosing the best or a better configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm, but found that the resulting arrays had poor performance and poor consistency. A method based on the Minimum Redundancy Linear Array was therefore adopted: some elements are first fixed by the minimum-redundancy array to ensure spatial resolution, and a genetic algorithm is then used to optimize the remaining elements. Sparse arrays designed by this method have much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm the method's effectiveness.
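    A candidate layout is typically judged by its far-field array factor (main-lobe width and sidelobe level). A minimal sketch comparing a full aperture with a hypothetical minimum-redundancy-style sparse layout spanning the same aperture:

```python
import numpy as np

# Element positions in half-wavelength units. The hypothetical sparse
# layout {0, 1, 4, 6} spans the same aperture as the 7-element full
# array while using only 4 elements, so it keeps the narrow main lobe.
full = np.arange(7)
sparse = np.array([0, 1, 4, 6])

def array_factor(positions, theta):
    # Far-field array factor of isotropic elements, broadside steering,
    # half-wavelength position unit (phase constant pi per unit).
    phase = np.exp(1j * np.pi * np.outer(np.sin(theta), positions))
    return np.abs(phase.sum(axis=1)) / len(positions)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af_full = array_factor(full, theta)
af_sparse = array_factor(sparse, theta)
```

Both patterns peak at broadside with the same aperture-limited main lobe; the price of sparsity shows up as higher sidelobes, which is what the genetic-algorithm stage then tries to suppress.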

  19. Feature Modeling in Underwater Environments Using Sparse Linear Combinations

    DTIC Science & Technology

    2010-01-01

    waters. Optics Express, 16(13), 2008. [9] J. Jaffe. Monte Carlo modeling of underwater-image formation: Validity of the linear and small-angle... turbid water, etc.), we would like to determine if these two images contain the same (or similar) object(s). One approach is as follows: 1. Detect... nearest neighbor methods on extracted feature descriptors. This methodology works well for clean, out-of-water images; however, when imaging underwater

  20. Iterative solution of large, sparse linear systems on a static data flow architecture - Performance studies

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Patrick, M. L.

    1985-01-01

    The applicability of static data flow architectures to the iterative solution of sparse linear systems of equations is investigated. An analytic performance model of a static data flow computation is developed. This model includes both spatial parallelism (concurrent execution in multiple PEs) and pipelining (the streaming of data from array memories through the PEs). The performance model is used to analyze a row-partitioned iterative algorithm for solving sparse linear systems of algebraic equations. Based on this analysis, design parameters for the static data flow architecture as a function of matrix sparsity and dimension are proposed.

  1. Reconstruction Method for Optical Tomography Based on the Linearized Bregman Iteration with Sparse Regularization

    PubMed Central

    Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang

    2015-01-01

    Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels; its modalities include bioluminescence tomography, fluorescence molecular tomography and Cerenkov luminescence tomography. The inverse problem is ill-posed for these modalities, which causes the solution to be nonunique. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR). Considering the sparsity characteristics of the reconstructed sources, the sparsity can be regarded as a kind of a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to further achieve fast and accurate reconstruction results. Experimental results on a numerical simulation and an in vivo mouse study demonstrate the effectiveness and potential of the proposed method. PMID:26421055
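    A minimal sketch of the linearized Bregman iteration on a hypothetical compressed-sensing-style problem (not the paper's tomography forward model): recover a sparse source u from underdetermined measurements b = A u.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical underdetermined system with a sparse source, standing in
# for a tomography forward model.
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n)
u_true[[10, 40, 80]] = [3.0, -2.0, 4.0]
b = A @ u_true

# Linearized Bregman iteration:
#   v <- v + A^T (b - A u);   u <- delta * shrink(v, mu)
# where shrink is soft-thresholding; delta is kept below 2/||A||_2^2
# for convergence, and mu is large so the limit behaves like an
# l1-minimizing (basis pursuit) solution.
mu = 300.0
delta = 1.0 / np.linalg.norm(A, 2) ** 2
v = np.zeros(n)
u = np.zeros(n)
for _ in range(5000):
    v = v + A.T @ (b - A @ u)
    u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)
```

The soft-thresholding step is what enforces the sparsity prior; the dual variable v accumulates residual information until a component crosses the threshold and activates.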

  2. Sparse non-negative generalized PCA with applications to metabolomics

    PubMed Central

    Allen, Genevera I.; Maletić-Savatić, Mirjana

    2011-01-01

    Motivation: Nuclear magnetic resonance (NMR) spectroscopy has been used to study mixtures of metabolites in biological samples. This technology produces a spectrum for each sample depicting the chemical shifts at which an unknown number of latent metabolites resonate. The interpretation of this data with common multivariate exploratory methods such as principal components analysis (PCA) is limited due to high-dimensionality, non-negativity of the underlying spectra and dependencies at adjacent chemical shifts. Results: We develop a novel modification of PCA that is appropriate for analysis of NMR data, entitled Sparse Non-Negative Generalized PCA. This method yields interpretable principal components and loading vectors that select important features and directly account for both the non-negativity of the underlying spectra and dependencies at adjacent chemical shifts. Through the reanalysis of experimental NMR data on five purified neural cell types, we demonstrate the utility of our methods for dimension reduction, pattern recognition, sample exploration and feature selection. Our methods lead to the identification of novel metabolites that reflect the differences between these cell types. Availability: www.stat.rice.edu/~gallen/software.html Contact: gallen@rice.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21930672

  3. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    SciTech Connect

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
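    For a single-node flavor of the same idea, scipy's sparse direct solver wraps the sequential SuperLU library from which SuperLU_DIST descends. A sketch on a hypothetical 1-D Poisson system:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# 1-D Poisson matrix; a hypothetical stand-in for a PDE system.
n = 50
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = csc_matrix(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

lu = splu(A)      # sparse LU factorization (sequential SuperLU)
b = np.ones(n)
x = lu.solve(b)   # cheap triangular solves reuse the factorization
```

Factoring once and reusing `lu.solve` for many right-hand sides is the standard pattern with direct solvers; the distributed-memory package parallelizes exactly these two phases.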

  4. The impact of improved sparse linear solvers on industrial engineering applications

    SciTech Connect

    Heroux, M.; Baddourah, M.; Poole, E.L.; Yang, Chao Wu

    1996-12-31

    There are usually many factors that ultimately determine the quality of computer simulation for engineering applications. Some of the most important are the quality of the analytical model and approximation scheme, the accuracy of the input data and the capability of the computing resources. However, in many engineering applications the characteristics of the sparse linear solver are the key factors in determining how complex a problem a given application code can solve. Therefore, the advent of a dramatically improved solver often brings with it dramatic improvements in our ability to do accurate and cost effective computer simulations. In this presentation we discuss the current status of sparse iterative and direct solvers in several key industrial CFD and structures codes, and show the impact that recent advances in linear solvers have made on both our ability to perform challenging simulations and the cost of those simulations. We also present some of the current challenges we have and the constraints we face in trying to improve these solvers. Finally, we discuss future requirements for sparse linear solvers on high performance architectures and try to indicate the opportunities that exist if we can develop even more improvements in linear solver capabilities.

  5. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.

  6. Improving the energy efficiency of sparse linear system solvers on multicore and manycore systems.

    PubMed

    Anzt, H; Quintana-Ortí, E S

    2014-06-28

    While most recent breakthroughs in scientific research rely on complex simulations carried out in large-scale supercomputers, the power draft and energy spent for this purpose is increasingly becoming a limiting factor to this trend. In this paper, we provide an overview of the current status in energy-efficient scientific computing by reviewing different technologies used to monitor power draft as well as power- and energy-saving mechanisms available in commodity hardware. For the particular domain of sparse linear algebra, we analyse the energy efficiency of a broad collection of hardware architectures and investigate how algorithmic and implementation modifications can improve the energy performance of sparse linear system solvers, without negatively impacting their performance.

  7. Solving Large Sparse Linear Systems in End-to-end Accelerator Structure Simulations

    SciTech Connect

    Lee, L

    2004-01-23

    This paper presents a case study of solving very large sparse linear systems in end-to-end accelerator structure simulations. Both direct solvers and iterative solvers are investigated. A parallel multilevel preconditioner based on hierarchical finite element basis functions is considered and has been implemented to accelerate the convergence of iterative solvers. A linear system with matrix size 93,147,736 and with 3,964,961,944 non-zeros from 3D electromagnetic finite element discretization has been solved in less than 8 minutes with 1024 CPUs on the NERSC IBM SP. The resource utilization as well as the application performance for these solvers is discussed.
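    The preconditioned-iterative pattern the paper describes can be sketched at toy scale with scipy's GMRES plus an incomplete-LU preconditioner (a stand-in for the paper's multilevel hierarchical preconditioner); the system below is hypothetical:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Hypothetical nonsymmetric sparse system (convection-diffusion-like).
n = 100
A = (np.diag(4.0 * np.ones(n)) + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-2.0 * np.ones(n - 1), -1))
A = csc_matrix(A)
b = np.ones(n)

# Incomplete LU supplies an approximate inverse, applied as the
# preconditioner M at each Krylov iteration.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

A good preconditioner collapses the iteration count, which is what makes systems with billions of nonzeros tractable in minutes on large machines.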

  8. Evaluation of generalized degrees of freedom for sparse estimation by replica method

    NASA Astrophysics Data System (ADS)

    Sakata, A.

    2016-12-01

    We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.

  9. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the use of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.

  10. Inference of dense spectral reflectance images from sparse reflectance measurement using non-linear regression modeling

    NASA Astrophysics Data System (ADS)

    Deglint, Jason; Kazemzadeh, Farnoud; Wong, Alexander; Clausi, David A.

    2015-09-01

    One method to acquire multispectral images is to sequentially capture a series of images where each image contains information from a different bandwidth of light. Another method is to use a series of beamsplitters and dichroic filters to guide different bandwidths of light onto different cameras. However, these methods are very time consuming and expensive and perform poorly in dynamic scenes or when observing transient phenomena. An alternative strategy to capturing multispectral data is to infer this data using sparse spectral reflectance measurements captured using an imaging device with overlapping bandpass filters, such as a consumer digital camera using a Bayer filter pattern. Currently the only method of inferring dense reflectance spectra is the Wiener adaptive filter, which makes Gaussian assumptions about the data. However, these assumptions may not always hold true for all data. We propose a new technique to infer dense reflectance spectra from sparse spectral measurements through the use of a non-linear regression model. The non-linear regression model used in this technique is the random forest model, which is an ensemble of decision trees and trained via the spectral characterization of the optical imaging system and spectral data pair generation. This model is then evaluated by spectrally characterizing different patches on the Macbeth color chart, as well as by reconstructing inferred multispectral images. Results show that the proposed technique can produce inferred dense reflectance spectra that correlate well with the true dense reflectance spectra, which illustrates the merits of the technique.

  11. Grouping in sparse random-dot patterns: linear and nonlinear mechanisms

    NASA Astrophysics Data System (ADS)

    Kashi, Ramanujan S.; Papathomas, Thomas V.; Gorea, Andrei

    1997-06-01

    This study reports on experiments conducted with human observers to investigate the properties of linear and non-linear perceptual grouping mechanisms by using reverse-polarity sparse random-dot patterns. The stimuli were generated by spatially superimposing a sparse set of randomly distributed square elements onto a copy of the original set that was expanded or rotated about the center of the screen. In the control experiment both the original and transformed sets contained elements of identical luminance contrast with the background. The main experiments involved a reverse-contrast random-dot pattern, in which the transformed set consisted of elements of equal contrast magnitude but opposite polarity to that of the original set. At least two competing global percepts are possible: 'forward grouping' in which perceived grouping agrees with the physical transformation; or 'reverse grouping' in a direction orthogonal to that of the 'forward grouping.' The two-alternative forced-choice (2AFC) task was to report the direction of the global grouping. For the control experiment, the observers reported forward grouping both at the fovea and eccentricities of up to 4 degrees; as expected, no reverse grouping was observed. With the reverse-polarity stimulus, reverse grouping was observed at high eccentricities and low contrasts, but forward grouping dominated under foveal viewing. In another experiment, the influence of chromatic mechanisms was studied by using opposite-contrast red elements on a yellow background. In this experiment reverse grouping was not observed, which indicates that color mechanisms veto reverse grouping. Reverse grouping can be hypothesized to be the result of processing by linear oriented spatial mechanisms, in analogy with reverse-phi motion. Forward grouping, on the other hand, can be explained by non-linear preprocessing (such as squaring or full-wave rectification).

  12. Many-core graph analytics using accelerated sparse linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric

    2016-05-01

    Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
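    The GraphBLAS idea described above, graph traversal as sparse linear algebra, can be sketched with breadth-first search: each frontier expansion is one sparse matrix-vector product. The graph below is hypothetical:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical directed graph stored as a sparse adjacency matrix
# (entry (i, j) = 1 means an edge i -> j).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

# BFS from vertex 0, expressed as repeated sparse matrix-vector
# products: the formulation promoted by the GraphBLAS standard.
level = np.full(n, -1)
frontier = np.zeros(n)
frontier[0] = 1.0
depth = 0
while frontier.any():
    level[frontier > 0] = depth
    reached = A.T @ frontier                    # one SpMV = one frontier expansion
    frontier = np.where(level == -1, reached, 0.0)  # mask out visited vertices
    depth += 1
```

Because the whole traversal is SpMV plus elementwise masking, it inherits the performance of tuned sparse kernels on many-core hardware, which is the point of the GraphBLAS approach.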

  13. Generalized linear mixed models for meta-analysis.

    PubMed

    Platt, R W; Leroux, B G; Breslow, N

    1999-03-30

    We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.
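    The weighted-least-squares strategy for observed log-odds ratios can be sketched in its simplest, intercept-only (fixed-effect) form. The study data below are hypothetical and the code omits covariates and between-study random effects.

```python
import numpy as np

# Hypothetical 2 x 2 tables from three studies:
# (events treated, n treated, events control, n control)
tables = [(15, 100, 10, 100), (30, 150, 18, 150), (8, 80, 5, 80)]

log_or, weights = [], []
for a, n1, c, n2 in tables:
    b, d = n1 - a, n2 - c
    log_or.append(np.log(a * d / (b * c)))           # observed log-odds ratio
    weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))    # inverse Woolf variance

# Intercept-only weighted least squares = the classical fixed-effect
# pooled estimate of the log-odds ratio.
w, y = np.array(weights), np.array(log_or)
pooled = np.sum(w * y) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled log-OR = {pooled:.3f}, 95% CI half-width = {1.96 * se:.3f}")
```

    The sparse-data failure mode discussed above is visible here: the per-study variance 1/a + 1/b + 1/c + 1/d blows up as any cell count approaches zero.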

  14. LANZ: Software solving the large sparse symmetric generalized eigenproblem

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1990-01-01

    A package, LANZ, for solving the large symmetric generalized eigenproblem is described. The package was tested on four different architectures: Convex 200, CRAY Y-MP, Sun-3, and Sun-4. The package uses the Lanczos method and is based on recent research into solving the generalized eigenproblem.

  15. Bagging linear sparse Bayesian learning models for variable selection in cancer diagnosis.

    PubMed

    Lu, Chuan; Devos, Andy; Suykens, Johan A K; Arús, Carles; Van Huffel, Sabine

    2007-05-01

    This paper investigates variable selection (VS) and classification for biomedical datasets with a small sample size and a very high input dimension. The sequential sparse Bayesian learning methods with linear bases are used as the basic VS algorithm. Selected variables are fed to the kernel-based probabilistic classifiers: Bayesian least squares support vector machines (BayLS-SVMs) and relevance vector machines (RVMs). We employ the bagging techniques for both VS and model building in order to improve the reliability of the selected variables and the predictive performance. This modeling strategy is applied to real-life medical classification problems, including two binary cancer diagnosis problems based on microarray data and a brain tumor multiclass classification problem using spectra acquired via magnetic resonance spectroscopy. The work is experimentally compared to other VS methods. It is shown that the use of bagging can improve the reliability and stability of both VS and model prediction.

  16. Application of the Cramer rule in the solution of sparse systems of linear algebraic equations

    NASA Astrophysics Data System (ADS)

    Mittal, R. C.; Al-Kurdi, Ahmad

    2001-11-01

    In this work, the solution of a sparse system of linear algebraic equations is obtained by using the Cramer rule. The determinants are computed with the help of the numerical structure approach defined in Suchkov (Graphs of Gearing Machines, Leningrad, Quebec, 1983), in which only the non-zero elements are used. The Cramer rule produces the solution directly, without creating the fill-in problem encountered in other direct methods. Moreover, the solution can be expressed exactly if all the entries, including the right-hand side, are integers and if all products do not exceed the size of the largest integer that can be represented in the arithmetic of the computer used. The usefulness of Suchkov's numerical structure approach is shown by applying it to seven examples. The obtained results are also compared with the digraph approach described in Mittal and Kurdi (J. Comput. Math., to appear). It is shown that the performance of the numerical structure approach is better than that of the digraph approach.
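    A minimal sketch of the idea, using exact rational arithmetic so that integer inputs yield an exact solution. The determinants here are computed by naive cofactor expansion that merely skips zero entries; this stands in for, but is not, the paper's numerical structure approach.

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row
    (adequate for small systems), skipping zero entries."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j, a in enumerate(m[0]):
        if a == 0:          # exploit sparsity: zero entries contribute nothing
            continue
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def cramer_solve(A, b):
    """Solve Ax = b exactly via the Cramer rule with rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in A]
    b = [Fraction(x) for x in b]
    d = det(A)
    sol = []
    for j in range(len(b)):
        # replace column j of A by b, as the Cramer rule prescribes
        Aj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        sol.append(det(Aj) / d)
    return sol

# A sparse integer system: the solution comes out exact,
# with no fill-in and no floating-point round-off.
A = [[2, 0, 0], [0, 3, 1], [1, 0, 4]]
b = [4, 11, 10]
print(cramer_solve(A, b))   # -> [Fraction(2, 1), Fraction(3, 1), Fraction(2, 1)]
```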

  17. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure

    PubMed Central

    Li, Yanming; Zhu, Ji

    2015-01-01

    Summary We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functioning groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. PMID:25732839

  18. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
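    Orthogonal matching pursuit, the SC algorithm highlighted above, can be sketched for linear regression in a few lines. This is a plain NumPy illustration on synthetic data, not the authors' implementation or benchmark.

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal matching pursuit: greedily select k columns of X,
    re-fitting least squares on the selected support at every step."""
    residual = y.copy()
    support = []
    coef = np.zeros(X.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(X.T @ residual)))
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
true = np.zeros(20)
true[[3, 7]] = [2.0, -1.5]        # 2-sparse ground truth
y = X @ true
coef = omp(X, y, k=2)
print(np.flatnonzero(coef))       # recovered support
```

    The re-fit over the whole support at each step is what distinguishes OMP (and ORMP-style methods) from plain matching pursuit, which only updates one coefficient at a time.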

  19. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic 10^3-fold speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization.

  20. Semi-Parametric Generalized Linear Models.

    DTIC Science & Technology

    1985-08-01

    is nonsingular, upper triangular, and of full rank r. It is known (Dongarra et al., 1979) that G^-1 F^T is the Moore-Penrose inverse of L. Therefore... [Remainder of record garbled in extraction: report cover-page text for "Semi-Parametric Generalized Linear Models", Mathematics Research Center, University of Wisconsin-Madison, August 1985.]

  1. LANZ - SOFTWARE FOR SOLVING THE LARGE SPARSE SYMMETRIC GENERALIZED EIGENPROBLEM

    NASA Technical Reports Server (NTRS)

    Jones, M. T.

    1994-01-01

    LANZ is a sophisticated algorithm based on the simple Lanczos method for solving the generalized eigenvalue problem. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the basic Lanczos algorithm. The program has been successfully used to solve problems such as: 1) finding the vibration frequencies and mode shape vectors of a structure, and 2) finding the smallest load at which a structure will buckle. Several methods exist for solving the large symmetric generalized eigenvalue problem. LANZ offers an alternative to the popular sub-space iteration approach. The program includes a new algorithm for solving the tri-diagonal matrices that arise when using the Lanczos method. Procedurally, LANZ starts with the user's initial shift, then executes the Lanczos algorithm until: 1) the desired number of eigenvalues is found; 2) no storage space is left; or 3) LANZ determines that a new shift is needed. When a new shift is needed, the program selects it based on accumulated information. At each iteration, LANZ examines the converged and unconverged eigenvalues along with the inertia counts to ensure that no eigenvalues have been missed. LANZ is written in FORTRAN 77 and C language. It was originally designed to run on computers that support vector processing such as the CRAY Y-MP and is therefore optimized for vector machines. Makefiles are included for the Sun3, Sun4, Cray Y-MP, and CONVEX 220. When implemented on a Sun4 computer, LANZ required 670K of main memory. The standard distribution medium for this program is a .25 inch streaming magnetic cartridge tape in Unix tar format. It is also available on a 3.5 inch diskette in UNIX tar format. LANZ was developed in 1989. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. Cray Y-MP is a trademark of Cray Research, Inc. CONVEX 220 is a trademark of Convex Computer Corporation.
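    The basic Lanczos iteration that LANZ builds on can be sketched for a standard symmetric eigenproblem. This sketch omits everything that makes LANZ robust (dynamic shifting, reorthogonalization, inertia checks); the test matrix is illustrative.

```python
import numpy as np

def lanczos(A, v0, m):
    """Plain Lanczos: build an m-step tridiagonal matrix T whose
    eigenvalues (Ritz values) approximate the extreme eigenvalues of
    the symmetric matrix A. No reorthogonalization -- a sketch only."""
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros_like(q)
    for j in range(m):
        w = A @ q
        alpha[j] = q @ w                 # diagonal entry of T
        w = w - alpha[j] * q
        if j > 0:
            w = w - beta[j - 1] * q_prev # three-term recurrence
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Spectrum with well-separated extremes at 1 and 50: a few dozen
# Lanczos steps capture both ends accurately.
eigs = np.concatenate(([1.0], np.linspace(10, 40, 198), [50.0]))
A = np.diag(eigs)
ritz = lanczos(A, np.ones(200), m=30)
print(ritz[0], ritz[-1])   # approximations to 1 and 50
```

    The tridiagonal T is exactly the kind of matrix for which LANZ includes its specialized solver, and the shift-and-invert strategy it uses serves to create the spectral separation that makes convergence this fast.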

  2. Progressive Magnetic Resonance Image Reconstruction Based on Iterative Solution of a Sparse Linear System

    PubMed Central

    Fahmy, Ahmed S.; Gabr, Refaat E.; Heberlein, Keith; Hu, Xiaoping P.

    2006-01-01

    Image reconstruction from nonuniformly sampled spatial frequency domain data is an important problem that arises in computed imaging. Current reconstruction techniques suffer from limitations in their model and implementation. In this paper, we present a new reconstruction method that is based on solving a system of linear equations using an efficient iterative approach. Image pixel intensities are related to the measured frequency domain data through a set of linear equations. Although the system matrix is too dense and large to solve by direct inversion in practice, a simple orthogonal transformation to the rows of this matrix is applied to convert the matrix into a sparse one up to a certain chosen level of energy preservation. The transformed system is subsequently solved using the conjugate gradient method. This method is applied to reconstruct images of a numerical phantom as well as magnetic resonance images from experimental spiral imaging data. The results support the theory and demonstrate that the computational load of this method is similar to that of standard gridding, illustrating its practical utility. PMID:23165034
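    The core numerical step above, solving a sparse linear system with the conjugate gradient method, looks as follows on a generic sparse symmetric positive-definite matrix (a 1-D Laplacian stand-in, not actual MRI data or the authors' transformed system matrix).

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse symmetric positive-definite test system: the 1-D Laplacian.
n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# CG only needs matrix-vector products with A, so the sparsity
# obtained by thresholding the transformed system is exploited
# automatically.
x, info = cg(A, b)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```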

  3. Two-Dimensional Pattern-Coupled Sparse Bayesian Learning via Generalized Approximate Message Passing.

    PubMed

    Fang, Jun; Zhang, Lizao; Li, Hongbin

    2016-04-20

    We consider the problem of recovering two-dimensional (2-D) block-sparse signals with unknown cluster patterns. Two-dimensional block-sparse patterns arise naturally in many practical applications such as foreground detection and inverse synthetic aperture radar imaging. To exploit the underlying block-sparse structure, we propose a 2-D pattern-coupled hierarchical Gaussian prior model. The proposed pattern-coupled hierarchical Gaussian prior model imposes a soft coupling mechanism among neighboring coefficients through their shared hyperparameters. This coupling mechanism enables effective and automatic learning of the underlying irregular cluster patterns, without requiring any a priori knowledge of the block partition of sparse signals. We develop a computationally efficient Bayesian inference method which integrates the generalized approximate message passing (GAMP) technique with the proposed prior model. Simulation results show that the proposed method offers competitive recovery performance for a range of 2-D sparse signal recovery and image processing applications over existing methods, while achieving a significant reduction in computational complexity.

  4. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.

  5. BlockSolve v1.1: Scalable library software for the parallel solution of sparse linear systems

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1993-03-01

    BlockSolve is a software library for solving large, sparse systems of linear equations on massively parallel computers. The matrices must be symmetric, but may have an arbitrary sparsity structure. BlockSolve is a portable package that is compatible with several different message-passing paradigms. This report gives detailed instructions on the use of BlockSolve in applications programs.

  6. BlockSolve v1. 1: Scalable library software for the parallel solution of sparse linear systems

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1993-03-01

    BlockSolve is a software library for solving large, sparse systems of linear equations on massively parallel computers. The matrices must be symmetric, but may have an arbitrary sparsity structure. BlockSolve is a portable package that is compatible with several different message-passing paradigms. This report gives detailed instructions on the use of BlockSolve in applications programs.

  7. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    SciTech Connect

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
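    A sketch of the AAJ idea: weighted Jacobi sweeps with a periodic Anderson extrapolation over the stored recent steps. Parameter names, defaults, and the test system are illustrative, not taken from the paper.

```python
import numpy as np

def alternating_anderson_jacobi(A, b, m=5, p=6, tol=1e-8, maxiter=5000):
    """Alternating Anderson-Jacobi sketch: run Jacobi sweeps, and every
    p-th iteration replace the update with an Anderson extrapolation
    over the last m stored iterates/residual steps."""
    D_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    X_hist, F_hist = [], []
    for k in range(maxiter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        dx = D_inv * r                         # Jacobi step
        X_hist.append(x.copy()); F_hist.append(dx.copy())
        X_hist, F_hist = X_hist[-m:], F_hist[-m:]
        if (k + 1) % p == 0 and len(F_hist) > 1:
            # Anderson extrapolation: least-squares mix of stored steps
            dF = np.diff(np.array(F_hist), axis=0).T
            dX = np.diff(np.array(X_hist), axis=0).T
            gamma, *_ = np.linalg.lstsq(dF, F_hist[-1], rcond=None)
            x = x + dx - (dX + dF) @ gamma
        else:
            x = x + dx
    return x, maxiter

# Diagonally dominant test system (dense here for brevity)
rng = np.random.default_rng(1)
n = 100
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.05
b = rng.standard_normal(n)
x, iters = alternating_anderson_jacobi(A, b)
print(iters, np.linalg.norm(A @ x - b))
```

    Note what makes this attractive for parallel computing: unlike Krylov methods, the inner Jacobi sweeps need no inner products, and the only global communication happens at the periodic extrapolation step.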

  8. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.

  9. Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems

    NASA Astrophysics Data System (ADS)

    Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott

    2016-04-01

    High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data, making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving large, sparse, ill-conditioned systems of linear equations in complex domains with multiple right-hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that previously were considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of the electromagnetic methods in geophysics.
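    The factor-once, solve-many pattern that makes direct solvers attractive for multiple right-hand sides can be sketched with SciPy's SuperLU interface (a small illustrative system, not a geophysical one, and not the parallel solvers evaluated in the paper).

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Direct sparse solve with multiple right-hand sides: factor once
# (sparse LU with a fill-reducing ordering), then back-substitute
# cheaply for each source term.
n = 300
A = diags([-1, 2.2, -1], [-1, 0, 1], shape=(n, n), format="csc")
lu = splu(A)                 # the expensive step, done once

B = np.eye(n)[:, :4]         # four right-hand sides ("sources")
X = lu.solve(B)              # one cheap triangular solve per source
print(np.linalg.norm(A @ X - B))
```

    For surveys with hundreds of sources, amortizing one factorization over all right-hand sides is exactly the advantage over iterative methods, which must re-converge for every source.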

  10. Extended Generalized Linear Latent and Mixed Model

    ERIC Educational Resources Information Center

    Segawa, Eisuke; Emery, Sherry; Curry, Susan J.

    2008-01-01

    The generalized linear latent and mixed modeling (GLLAMM framework) includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…

  11. Linear and nonlinear generalized Fourier transforms.

    PubMed

    Pelloni, Beatrice

    2006-12-15

    This article presents an overview of a transform method for solving linear and integrable nonlinear partial differential equations. This new transform method, proposed by Fokas, yields a generalization and unification of various fundamental mathematical techniques and, in particular, it yields an extension of the Fourier transform method.

  12. Optimizing the electrodiagnostic accuracy in Guillain-Barré syndrome subtypes: Criteria sets and sparse linear discriminant analysis.

    PubMed

    Uncini, Antonino; Ippoliti, Luigi; Shahrizaila, Nortina; Sekiguchi, Yukari; Kuwabara, Satoshi

    2017-07-01

    To optimize the electrodiagnosis of Guillain-Barré syndrome (GBS) subtypes at the first study. The reference electrodiagnosis was obtained in 53 demyelinating and 45 axonal GBS patients on the basis of two serial studies and the results of anti-ganglioside antibody assays. We retrospectively employed sparse linear discriminant analysis (LDA), two existing electrodiagnostic criteria sets (Hadden et al., 1998; Rajabally et al., 2015) and one we propose that additionally evaluates the duration of motor responses and the sural sparing pattern, and defines reversible conduction failure (RCF) in motor and sensory nerves at the second study. At the first study the misclassification error rates, compared to the reference diagnoses, were: 15.3% for sparse LDA, 30% for our criteria, 45% for Rajabally's and 48% for Hadden's. Sparse LDA identified the seven most powerful electrophysiological variables differentiating the demyelinating and axonal subtypes and assigned to each patient the diagnostic probability of belonging to either subtype. At the second study 46.6% of axonal GBS patients showed RCF in two motor nerves and 8.8% in two sensory nerves. Based on a single study, sparse LDA showed the highest diagnostic accuracy. RCF is present in a considerable percentage of axonal patients. Sparse LDA, a supervised statistical classification method, should be introduced into electrodiagnostic practice.

  13. PCG reference manual: A package for the iterative solution of large sparse linear systems on parallel computers. Version 1.0

    SciTech Connect

    Joubert, W.D.; Carey, G.F.; Kohli, H.; Lorber, A.; McLay, R.T.; Shen, Y.; Berner, N.A. |; Kalhan, A. |

    1995-01-01

    PCG (Preconditioned Conjugate Gradient package) is a system for solving linear equations of the form Au = b, for A a given matrix and b and u vectors. PCG, employing various gradient-type iterative methods coupled with preconditioners, is designed for general linear systems, with emphasis on sparse systems such as those arising from the discretization of partial differential equations in physical applications. It can be used to solve linear equations efficiently on parallel computer architectures. Much of the code is reusable across architectures and the package is portable across different systems; the machines currently supported are listed. This manual is intended to be the general-purpose reference describing all features of the package accessible to the user; suggestions are also given regarding which methods to use for a given problem.
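    Preconditioned CG in this spirit can be sketched with a simple Jacobi (diagonal) preconditioner. This is a generic SciPy illustration and is unrelated to the PCG package's own API.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Sparse SPD system from a 1-D discretization-like stencil.
n = 400
A = diags([-1, 2.5, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply D^{-1}, exposed as a LinearOperator
# so CG never needs the preconditioner as an explicit matrix.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: r / d)

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```

    Swapping in a stronger preconditioner (incomplete factorization, multigrid) changes only the `matvec` closure, which is the design point behind packages like PCG that decouple the gradient-type method from the preconditioner.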

  14. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
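    The compressed-sparse-row idea that sparse maps generalize can be made concrete with SciPy: CSR stores, for each row index, the set of connected column indices, and "chaining" two such maps corresponds to sparse matrix-matrix multiplication, which composes the index relations.

```python
import numpy as np
from scipy.sparse import csr_matrix

# CSR as a sparse map: each row index maps to its nonzero columns.
A = csr_matrix(np.array([[1, 0, 0],
                         [0, 0, 2],
                         [0, 3, 0]]))

# CSR triplet view: indptr delimits each row's slice of indices/data.
print(A.indptr.tolist())    # [0, 1, 2, 3]
print(A.indices.tolist())   # column indices per row: [0, 2, 1]
print(A.data.tolist())      # [1, 2, 3]

# Chaining: row i of (A @ A) connects i to the indices reachable in
# two hops through the intermediate index set.
B = A @ A
print(B.toarray())
```

    The sparse map infrastructure extends this two-index picture to tensor data with more than two index sets, which plain CSR cannot express.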

  15. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F.; Neese, Frank

    2015-07-01

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in

  16. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    SciTech Connect

    Pinski, Peter; Riplinger, Christoph; Neese, Frank; Valeev, Edward F.

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in

  17. Interpretable exemplar-based shape classification using constrained sparse linear models

    NASA Astrophysics Data System (ADS)

    Sigurdsson, Gunnar A.; Yang, Zhen; Tran, Trac D.; Prince, Jerry L.

    2015-03-01

    Many types of diseases manifest themselves as observable changes in the shape of the affected organs. Using shape classification, we can look for signs of disease and discover relationships between diseases. We formulate the problem of shape classification in a holistic framework that utilizes a lossless scalar field representation and a non-parametric classification based on sparse recovery. This framework generalizes over certain classes of unseen shapes while using the full information of the shape, bypassing feature extraction. The output of the method is the class whose combination of exemplars most closely approximates the shape, and furthermore, the algorithm returns the most similar exemplars along with their similarity to the shape, which makes the result simple to interpret. Our results show that the method offers accurate classification between three cerebellar diseases and controls in a database of cerebellar ataxia patients. For reproducible comparison, promising results are presented on publicly available 2D datasets, including the ETH-80 dataset where the method achieves 88.4% classification accuracy.

  18. Identifying Keystone Species in the Human Gut Microbiome from Metagenomic Timeseries Using Sparse Linear Regression

    PubMed Central

    Fisher, Charles K.; Mehta, Pankaj

    2014-01-01

    Human associated microbial communities exert tremendous influence over human health and disease. With modern metagenomic sequencing methods it is now possible to follow the relative abundance of microbes in a community over time. These microbial communities exhibit rich ecological dynamics and an important goal of microbial ecology is to infer the ecological interactions between species directly from sequence data. Any algorithm for inferring ecological interactions must overcome three major obstacles: 1) a correlation between the abundances of two species does not imply that those species are interacting, 2) the sum constraint on the relative abundances obtained from metagenomic studies makes it difficult to infer the parameters in timeseries models, and 3) errors due to experimental uncertainty, or mis-assignment of sequencing reads into operational taxonomic units, bias inferences of species interactions due to a statistical problem called “errors-in-variables”. Here we introduce an approach, Learning Interactions from MIcrobial Time Series (LIMITS), that overcomes these obstacles. LIMITS uses sparse linear regression with bootstrap aggregation to infer a discrete-time Lotka-Volterra model for microbial dynamics. We tested LIMITS on synthetic data and showed that it could reliably infer the topology of the inter-species ecological interactions. We then used LIMITS to characterize the species interactions in the gut microbiomes of two individuals and found that the interaction networks varied significantly between individuals. Furthermore, we found that the interaction networks of the two individuals are dominated by distinct “keystone species”, Bacteroides fragilis and Bacteroided stercosis, that have a disproportionate influence on the structure of the gut microbiome even though they are only found in moderate abundance. Based on our results, we hypothesize that the abundances of certain keystone species may be responsible for individuality in the human
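    The regression step of the LIMITS approach can be sketched on synthetic data: fit a discrete-time Lotka-Volterra model by linear regression on log-abundance differences, then threshold small coefficients to obtain a sparse interaction network. This is illustrative only; the real method adds stepwise variable selection and bootstrap aggregation, omitted here, and works with relative rather than absolute abundances.

```python
import numpy as np

# Simulate a 2-species discrete-time Lotka-Volterra model:
#   log x_i(t+1) - log x_i(t) = c_i + sum_j B_ij x_j(t) + noise
rng = np.random.default_rng(2)
B_true = np.array([[-0.4, -0.1],
                   [0.2, -0.5]])
c_true = np.array([0.4, 0.3])

T = 200
x = np.zeros((T, 2))
x[0] = [1.0, 1.0]
for t in range(T - 1):
    growth = c_true + x[t] @ B_true.T + rng.normal(0, 0.05, 2)
    x[t + 1] = x[t] * np.exp(growth)

# Regress log-differences on abundances (one regression per species),
# then sparsify the interaction matrix by thresholding.
Y = np.diff(np.log(x), axis=0)                  # (T-1, 2)
design = np.hstack([np.ones((T - 1, 1)), x[:-1]])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
B_est = coef[1:].T                              # estimated interactions
B_est[np.abs(B_est) < 0.05] = 0.0               # crude sparsity step
print(np.round(B_est, 2))
```

    The self-interaction (diagonal) terms come out negative, as they must for stable dynamics; recovering the weaker off-diagonal couplings reliably is what motivates the bootstrap aggregation in the full method.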

  19. Identifying keystone species in the human gut microbiome from metagenomic timeseries using sparse linear regression.

    PubMed

    Fisher, Charles K; Mehta, Pankaj

    2014-01-01

    Human-associated microbial communities exert tremendous influence over human health and disease. With modern metagenomic sequencing methods it is now possible to follow the relative abundance of microbes in a community over time. These microbial communities exhibit rich ecological dynamics, and an important goal of microbial ecology is to infer the ecological interactions between species directly from sequence data. Any algorithm for inferring ecological interactions must overcome three major obstacles: 1) a correlation between the abundances of two species does not imply that those species are interacting, 2) the sum constraint on the relative abundances obtained from metagenomic studies makes it difficult to infer the parameters in timeseries models, and 3) errors due to experimental uncertainty, or mis-assignment of sequencing reads into operational taxonomic units, bias inferences of species interactions due to a statistical problem called "errors-in-variables". Here we introduce an approach, Learning Interactions from MIcrobial Time Series (LIMITS), that overcomes these obstacles. LIMITS uses sparse linear regression with bootstrap aggregation to infer a discrete-time Lotka-Volterra model for microbial dynamics. We tested LIMITS on synthetic data and showed that it could reliably infer the topology of the inter-species ecological interactions. We then used LIMITS to characterize the species interactions in the gut microbiomes of two individuals and found that the interaction networks varied significantly between individuals. Furthermore, we found that the interaction networks of the two individuals are dominated by distinct "keystone species", Bacteroides fragilis and Bacteroides stercoris, that have a disproportionate influence on the structure of the gut microbiome even though they are only found in moderate abundance. Based on our results, we hypothesize that the abundances of certain keystone species may be responsible for individuality in the human gut microbiome.
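
The core of the approach described above is easy to sketch: fit the discrete-time Lotka-Volterra regression per species with a sparse solver, and aggregate over bootstrap resamples. The sketch below substitutes a plain coordinate-descent lasso for the paper's exact sparse-regression variant, and the synthetic community, parameter values, and function names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lasso_cd(X, y, lam, n_iter=100):
    # Plain coordinate-descent lasso; it stands in for the paper's
    # sparse-regression step (the exact LIMITS variant differs).
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ (y - X @ w + X[:, j] * w[j])
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

def infer_interactions(x, lam=0.5, n_boot=20):
    # x: (T, S) abundance time series.  Fit the discrete-time
    # Lotka-Volterra regression log(x[t+1]/x[t]) = r + C @ x[t] per
    # species, with bootstrap aggregation (median over resampled fits).
    T, S = x.shape
    Y = np.log(x[1:] / x[:-1])
    X = x[:-1]
    C, r = np.zeros((S, S)), np.zeros(S)
    for i in range(S):
        fits = []
        for _ in range(n_boot):
            idx = rng.integers(0, T - 1, T - 1)
            Xb, yb = X[idx], Y[idx, i]
            fits.append(lasso_cd(Xb - Xb.mean(0), yb - yb.mean(), lam))
        C[i] = np.median(fits, axis=0)
        r[i] = Y[:, i].mean() - C[i] @ X.mean(0)
    return r, C

# Synthetic three-species community with known sparse interactions.
C_true = np.array([[-0.5, -0.3,  0.0],
                   [ 0.0, -0.5,  0.2],
                   [ 0.0,  0.0, -0.5]])
r_true = np.array([0.5, 0.4, 0.3])
x = np.empty((2000, 3))
x[0] = 0.5
for t in range(1999):
    x[t + 1] = x[t] * np.exp(r_true + C_true @ x[t]
                             + 0.05 * rng.standard_normal(3))
r_hat, C_hat = infer_interactions(x)
```

On this synthetic run the signs of the strong true interactions (negative self-limitation, the negative and positive couplings) are recovered, which is the topology-recovery behavior the abstract describes.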

  20. Optimal Tests of Treatment Effects for the Overall Population and Two Subpopulations in Randomized Trials, using Sparse Linear Programming.

    PubMed

    Rosenblum, Michael; Liu, Han; Yen, En-Hsu

    2014-01-01

    We propose new, optimal methods for analyzing randomized trials, when it is suspected that treatment effects may differ in two predefined subpopulations. Such subpopulations could be defined by a biomarker or risk factor measured at baseline. The goal is to simultaneously learn which subpopulations benefit from an experimental treatment, while providing strong control of the familywise Type I error rate. We formalize this as a multiple testing problem and show it is computationally infeasible to solve using existing techniques. Our solution involves a novel approach, in which we first transform the original multiple testing problem into a large, sparse linear program. We then solve this problem using advanced optimization techniques. This general method can solve a variety of multiple testing problems and decision theory problems related to optimal trial design, for which no solution was previously available. In particular, we construct new multiple testing procedures that satisfy minimax and Bayes optimality criteria. For a given optimality criterion, our new approach yields the optimal tradeoff between power to detect an effect in the overall population versus power to detect effects in subpopulations. We demonstrate our approach in examples motivated by two randomized trials of new treatments for HIV.
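
The reduction of an optimal-testing problem to a linear program can be illustrated in a far simpler single-population setting than the paper's: choosing a rejection rule to maximize power subject to a size constraint is an LP in the discretized rejection probabilities (a toy Neyman-Pearson version, not the paper's multiple-testing formulation; grid and densities are illustrative):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Discretize the test statistic; the decision variables are the
# rejection probabilities phi(z_k) in [0, 1] at each grid point.
z = np.linspace(-6.0, 6.0, 601)
dz = z[1] - z[0]
f0 = norm.pdf(z)             # density under the null
f1 = norm.pdf(z, loc=2.0)    # density under the alternative
alpha = 0.05

# Maximize power sum(phi*f1*dz) subject to size sum(phi*f0*dz) <= alpha.
res = linprog(c=-f1 * dz,
              A_ub=[f0 * dz], b_ub=[alpha],
              bounds=[(0.0, 1.0)] * z.size,
              method="highs")
phi = res.x
size = float(f0 * dz @ phi)
power = float(f1 * dz @ phi)
```

The optimal phi concentrates where the likelihood ratio f1/f0 is largest, i.e. it rejects for large z, recovering the classical one-sided z-test; the paper's contribution is making this kind of LP tractable for much larger multiple-testing problems via sparsity.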

  1. Alternative approach to general coupled linear optics

    SciTech Connect

    Wolski, Andrzej

    2005-11-29

    The Twiss parameters provide a convenient description of beam optics in uncoupled linear beamlines. For coupled beamlines, a variety of approaches are possible for describing the linear optics; here, we propose an approach and notation that naturally generalizes the familiar Twiss parameters to the coupled case in three degrees of freedom. Our approach is based on an eigensystem analysis of the matrix of second-order beam moments, or alternatively (in the case of a storage ring) on an eigensystem analysis of the linear single-turn map. The lattice functions that emerge from this approach have an interpretation that is conceptually very simple: in particular, the lattice functions directly relate the beam distribution in phase space to the invariant emittances. To emphasize the physical significance of the coupled lattice functions, we develop the theory from first principles, using only the assumption of linear symplectic transport. We also give some examples of the application of this approach, demonstrating its advantages of conceptual and notational simplicity.
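
In one degree of freedom the eigensystem idea reduces to a familiar fact: the eigenvalues of Σ·J, where Σ is the second-moment matrix and J the unit symplectic matrix, are ±iε, so the invariant emittance can be read directly off an eigensystem analysis of the moments. A minimal sketch (not the paper's full three-degree-of-freedom machinery; the numbers are illustrative):

```python
import numpy as np

# Twiss parameters (beta * gamma - alpha^2 = 1) and emittance.
alpha, beta, eps = 1.2, 3.0, 2.5e-6
gamma = (1.0 + alpha ** 2) / beta

# Second-moment (sigma) matrix of a matched Gaussian beam, and the
# unit symplectic form J.
Sigma = eps * np.array([[beta, -alpha],
                        [-alpha, gamma]])
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# The eigenvalues of Sigma @ J come in the pair +/- i*eps: the
# invariant emittance emerges from the eigensystem of the moments.
eigs = np.linalg.eigvals(Sigma @ J)
emittance = float(np.abs(eigs.imag).mean())
```

For a matched beam the same ±iε pair appears for each degree of freedom, which is how the full coupled analysis assigns three invariant emittances.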

  2. [General practice--linear thinking and complexity].

    PubMed

    Stalder, H

    2006-09-27

    As physicians, we apply and teach linear thinking. This approach permits us to dissect the patient's problem down to the molecular level and has contributed enormously to medical knowledge and progress. The linear approach is particularly useful in medical education and in quantitative research, and it helps to resolve simple problems. However, it risks being rigid. Living beings (such as patients and physicians!) have to be considered as complex systems. A complex system cannot be dissected into its parts without losing its identity; it depends on its past, and its interactions with the outside are often followed by unpredictable reactions. The patient-centred approach in medicine permits the physician, himself a complex system, to engage with the patient's system and to adapt to the patient's reality. It is particularly useful in general medicine.

  3. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors while still achieving high performance. A variant of blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The idea presented is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace-correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox allows for achieving optimal performance by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
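
The accumulated-error idea can be sketched as a greedy rule: sort submatrix blocks by Frobenius norm and drop them smallest-first as long as the accumulated error in that norm stays below the requested tolerance, so each drop decision accounts for the blocks already dropped (an illustrative sketch; the paper's criteria and norms are more refined):

```python
import numpy as np

def truncate_blocks(blocks, tol):
    # blocks: dict mapping a block index to an ndarray.  Drop blocks
    # smallest first; the decision for each block depends on what was
    # already dropped, keeping the total Frobenius-norm error below tol.
    norms = {k: np.linalg.norm(b) for k, b in blocks.items()}
    dropped, err_sq = set(), 0.0
    for k in sorted(norms, key=norms.get):
        if err_sq + norms[k] ** 2 <= tol ** 2:
            err_sq += norms[k] ** 2
            dropped.add(k)
        else:
            break  # any further block would push the error past tol
    return {k: b for k, b in blocks.items() if k not in dropped}

# Blocked matrix with magnitudes decaying away from the diagonal.
rng = np.random.default_rng(1)
blocks = {(i, j): rng.normal(scale=10.0 ** -abs(i - j), size=(4, 4))
          for i in range(6) for j in range(6)}
kept = truncate_blocks(blocks, tol=1e-3)
```

By construction the far off-diagonal blocks are removed while every diagonal block survives, and the total error of the truncation stays below the tolerance regardless of how many blocks were dropped.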

  4. Gravitational Wave in Linear General Relativity

    NASA Astrophysics Data System (ADS)

    Cubillos, D. J.

    2017-07-01

    General relativity is the best theory currently available to describe the gravitational interaction. In Albert Einstein's field equations this interaction is described by means of the spacetime curvature generated by the matter-energy content of the universe. Weyl worked on the existence of perturbations of the curvature of space-time that propagate at the speed of light, known as gravitational waves, obtained to a first approximation through the linearization of Einstein's field equations. Weyl's solution consists of taking the field equations in a vacuum and perturbing the metric: the Minkowski metric is slightly disturbed by a factor ɛ greater than zero but much smaller than one. If the feedback effect of the field is neglected, this can be considered a weak-field solution. After introducing the perturbed metric and ignoring terms of order greater than one in ɛ, we can find the linearized field equations in terms of the perturbation, which can then be expressed as the d'Alembertian operator of the perturbation set equal to zero. This is analogous to the linear wave equation in classical mechanics, and it can be interpreted by saying that gravitational effects propagate as waves at the speed of light. In addition, studying the motion of a particle affected by this perturbation through the geodesic equation shows the transverse character of the gravitational wave and its two possible states of polarization. It can be shown that the energy carried by the wave is of the order of 1/c⁵, where c is the speed of light, which explains why its effects on matter are very small and very difficult to detect.
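
The linearization described above can be written out compactly in the standard textbook form. With the perturbed metric and the trace-reversed perturbation, the vacuum equations reduce, in the Lorenz gauge, to a wave equation:

```latex
g_{\mu\nu} = \eta_{\mu\nu} + \varepsilon\, h_{\mu\nu}, \qquad 0 < \varepsilon \ll 1,
\qquad \bar h_{\mu\nu} \equiv h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\, h^{\alpha}{}_{\alpha},
```

```latex
\partial^{\mu}\bar h_{\mu\nu} = 0 \;(\text{Lorenz gauge})
\quad\Longrightarrow\quad \Box\, \bar h_{\mu\nu} = 0,
\qquad
h_{\mu\nu} = \left( h_{+}\, e^{+}_{\mu\nu} + h_{\times}\, e^{\times}_{\mu\nu} \right)
\cos\!\left( k_{\lambda} x^{\lambda} \right),
```

where the transverse-traceless plane-wave solution in the last expression exhibits the two polarization states h₊ and h× discussed in the abstract.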

  5. Real-time cardiac surface tracking from sparse samples using subspace clustering and maximum-likelihood linear regressors

    NASA Astrophysics Data System (ADS)

    Singh, Vimal; Tewfik, Ahmed H.

    2011-03-01

    Cardiac minimally invasive surgeries such as catheter-based radio frequency ablation of atrial fibrillation require high-precision tracking of inner cardiac surfaces in order to ascertain constant electrode-surface contact. The majority of cardiac motion tracking systems are either limited to the outer surface or track only limited slices/sectors of the inner surface in echocardiography data, which is unrealizable in MIS due to the varying resolution of ultrasound with depth and the speckle effect. In this paper, a system for high-accuracy real-time 3D tracking of both cardiac surfaces using sparse samples of the outer surface only is presented. This paper presents a novel approach to model cardiac inner surface deformations as simple functions of outer surface deformations in the spherical harmonic domain using multiple maximum-likelihood linear regressors. The tracking system uses subspace clustering to identify potential deformation spaces for outer surfaces and trains ML linear regressors using a pre-operative MRI/CT scan based training set. During tracking, sparse samples from the outer surface are used to identify the active outer surface deformation space and reconstruct the outer surface in real-time under a least squares formulation. The inner surface is reconstructed from the tracked outer surface with the trained ML linear regressors. High-precision tracking and robustness of the proposed system are demonstrated through results obtained on a real patient dataset with tracking root mean square error <= (0.23 +/- 0.04)mm and <= (0.30 +/- 0.07)mm for the outer and inner surfaces, respectively.

  6. Blended General Linear Methods based on Generalized BDF

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Magherini, Cecilia

    2008-09-01

    General Linear Methods were introduced in order to encompass a large family of numerical methods for the solution of ODE-IVPs, ranging from LMF to RK formulae. In so doing, it is possible to obtain methods able to overcome typical drawbacks of the previous classes of methods, such as the stability limitations of LMF and the order reduction of RK methods. Nevertheless, these goals are usually achieved at the price of a higher computational cost. Consequently, much effort has been devoted to deriving GLMs with particular features, to be exploited for their efficient implementation. In recent years, the derivation of GLMs from particular Boundary Value Methods (BVMs), namely the family of Generalized BDF (GBDF), has been proposed for the numerical solution of stiff ODE-IVPs. Here, this approach is further developed in order to derive GLMs combining good stability and accuracy properties with the possibility of efficiently solving the generated discrete problems via the blended implementation of the methods.
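
As background for the BDF-based construction above, a fixed-step BDF2 integrator for the stiff linear test equation y' = λy shows the implicit update that GBDF-type methods generalize (a plain BDF2 sketch, not the blended GLM of the paper):

```python
import numpy as np

def bdf2_linear(lam, y0, h, n_steps):
    # BDF2: y[n+2] - (4/3) y[n+1] + (1/3) y[n] = (2/3) h f(y[n+2]);
    # for f(y) = lam * y the implicit step has a closed-form solution.
    ys = [y0, y0 * np.exp(lam * h)]  # start-up value from the exact solution
    for _ in range(n_steps - 1):
        ys.append(((4.0 / 3.0) * ys[-1] - (1.0 / 3.0) * ys[-2])
                  / (1.0 - (2.0 / 3.0) * h * lam))
    return np.array(ys)

# Stiff decay: step size far beyond the explicit stability limit 2/|lam|.
y_stiff = bdf2_linear(-1000.0, 1.0, h=0.01, n_steps=50)
# Accuracy check on a mild problem integrated to t = 1.
y_mild = bdf2_linear(-1.0, 1.0, h=0.01, n_steps=100)
```

The stiff run stays stable at a step size one explicit methods could not use, which is exactly the stability property (A-stability up to order 2, and the LMF order barrier beyond it) that motivates GBDF-based GLMs.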

  7. Sparse-view X-ray CT Reconstruction via Total Generalized Variation Regularization

    PubMed Central

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-01-01

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise constant assumption of TV minimization often leads to the appearance of noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme to retain the image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as “PWLS-TGV” for simplicity. Specifically, TGV regularization utilizes higher order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which fully contribute to improving the image quality with sparse-view projection measurement. Subsequently, an alternating optimization algorithm was adopted to minimize the associative objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present PWLS-TGV method can achieve images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties. PMID:24842150

  8. Generalized Hierarchical Sparse Model for Arbitrary-Order Interactive Antigenic Sites Identification in Flu Virus Data

    PubMed Central

    Han, Lei; Zhang, Yu; Wan, Xiu-Feng; Zhang, Tong

    2016-01-01

    Recent statistical evidence has shown that a regression model incorporating interactions among the original covariates/features can significantly improve interpretability for biological data. One major challenge is the exponentially expanded feature space when adding high-order feature interactions to the model. To tackle the huge dimensionality, hierarchical sparse models (HSM) are developed by enforcing sparsity under heredity structures in the interactions among the covariates. However, existing methods only consider pairwise interactions, making the discovery of important high-order interactions a non-trivial open problem. In this paper, we propose a generalized hierarchical sparse model (GHSM) as a generalization of the HSM models to tackle arbitrary-order interactions. The GHSM applies the ℓ1 penalty to all the model coefficients under a constraint that given any covariate, if none of its associated kth-order interactions contribute to the regression model, then neither do its associated higher-order interactions. The resulting objective function is non-convex, with a challenge lying in the coupled variables appearing in the arbitrary-order hierarchical constraints, and we devise an efficient optimization algorithm to directly solve it. Specifically, we decouple the variables in the constraints via both the general iterative shrinkage and thresholding (GIST) and the alternating direction method of multipliers (ADMM) methods into three subproblems, each of which is proved to admit an efficient analytical solution. We evaluate the GHSM method on both a synthetic problem and the antigenic site identification problem for the influenza virus data, where we expand the feature space up to the 5th-order interactions. Empirical results demonstrate the effectiveness and efficiency of the proposed methods, and the learned high-order interactions have meaningful synergistic covariate patterns in influenza virus antigenicity. PMID:28392970
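
The hierarchical constraint quoted above can be made concrete: if, for some covariate, every k-th-order interaction involving it has a zero coefficient, then all of its higher-order interactions are forced to zero as well. A small projection enforcing this rule on a coefficient table (illustrative only; the paper enforces the constraint inside a GIST/ADMM solver rather than as a post-hoc projection):

```python
def enforce_hierarchy(coef, n_cov, max_order):
    # coef maps a sorted tuple of covariate indices (an interaction term)
    # to its coefficient.  If every k-th-order term containing covariate j
    # is zero, all higher-order terms containing j are forced to zero.
    out = dict(coef)
    for j in range(n_cov):
        for k in range(1, max_order):
            kth = [t for t in out if len(t) == k and j in t]
            if kth and all(out[t] == 0 for t in kth):
                for t in out:
                    if len(t) > k and j in t:
                        out[t] = 0
                break
    return out

# Toy model: covariates 0..2 with interactions up to order 3.  Covariate 1
# has a zero main effect, so its higher-order terms must vanish.
coef = {(0,): 1.0, (1,): 0.0, (2,): 0.5,
        (0, 1): 0.0, (0, 2): 0.3, (1, 2): 0.0,
        (0, 1, 2): 0.7}
clean = enforce_hierarchy(coef, n_cov=3, max_order=3)
```

Here the third-order term (0, 1, 2) is zeroed because covariate 1 contributes nothing at first order, while the (0, 2) interaction survives.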

  9. A new method for spatial resolution enhancement of hyperspectral images using sparse coding and linear spectral unmixing

    NASA Astrophysics Data System (ADS)

    Hashemi, Nezhad Z.; Karami, A.

    2015-10-01

    Hyperspectral images (HSI) have high spectral and low spatial resolutions. However, multispectral images (MSI) usually have low spectral and high spatial resolutions. In various applications HSI with both high spectral and high spatial resolutions are required. In this paper, a new method for spatial resolution enhancement of HSI using high resolution MSI based on sparse coding and linear spectral unmixing (SCLSU) is introduced. In the proposed method (SCLSU), the high spectral resolution features of the HSI and the high spatial resolution features of the MSI are fused. The sparse representation of the high resolution MSI and a linear spectral unmixing (LSU) model of the HSI and MSI are used simultaneously in order to construct the high resolution HSI (HRHSI). The fusion process of HSI and MSI is formulated as an ill-posed inverse problem. It is solved by the Split Augmented Lagrangian Shrinkage Algorithm (SALSA) and an orthogonal matching pursuit (OMP) algorithm. Finally, the proposed algorithm is applied to the Hyperion and ALI datasets. Compared with other state-of-the-art algorithms such as Coupled Nonnegative Matrix Factorization (CNMF) and local spectral unmixing, SCLSU significantly increases the spatial resolution while the spectral content of the HSI is well maintained.
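
Of the pieces named above, the orthogonal matching pursuit step is the easiest to sketch: greedily pick the dictionary atom most correlated with the residual, then re-fit the active set by least squares (generic OMP on a synthetic dictionary, not the paper's full SALSA/LSU pipeline; sizes and sparsity level are illustrative):

```python
import numpy as np

def omp(D, y, n_nonzero):
    # Orthogonal matching pursuit: greedily pick the atom most correlated
    # with the residual, then re-fit the active set by least squares.
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 200))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
x_true = np.zeros(200)
x_true[[5, 33, 77]] = [2.0, -2.0, 2.0]       # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, n_nonzero=3)
```

With a well-conditioned random dictionary and noiseless data, the three active atoms and their coefficients are recovered exactly.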

  10. Generalization of spectral fidelity with flexible measures for the sparse representation classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhu, Yong; Huang, Xin; Li, Jiayi

    2016-10-01

    Sparse representation classification (SRC) is becoming a promising tool for hyperspectral image (HSI) classification, where the Euclidean spectral distance (ESD) is widely used to reflect the fidelity between the original and reconstructed signals. In this paper, a generalized model is proposed to extend SRC by characterizing the spectral fidelity with flexible similarity measures. To validate the flexibility, several typical similarity measures, namely the spectral angle similarity (SAS), the spectral information divergence (SID), the structural similarity index measure (SSIM), and the ESD, are included in the generalized model. Furthermore, a general solution based on a gradient descent technique is used to solve the nonlinear optimization problem formulated by the flexible similarity measures. To test the generalized model, two actual HSIs were used, and the experimental results confirm the ability of the proposed model to accommodate the various spectral similarity measures. Performance comparisons with the ESD, SAS, SID, and SSIM criteria were also conducted, and the results consistently show the advantages of the generalized model for HSI classification in terms of overall accuracy and kappa coefficient.
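
The flexible fidelity terms are easy to state side by side. A sketch of the ESD, SAS, and (symmetrized) SID between two spectra, using the standard definitions of these measures (the paper plugs such measures into the SRC objective; the example spectra are illustrative):

```python
import numpy as np

def esd(x, y):
    # Euclidean spectral distance.
    return float(np.linalg.norm(x - y))

def sas(x, y):
    # Spectral angle similarity: the angle between the two spectra.
    c = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def sid(x, y, eps=1e-12):
    # Spectral information divergence: symmetrized KL divergence of the
    # spectra normalized to probability vectors.
    p = x / x.sum() + eps
    q = y / y.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

a = np.array([0.2, 0.5, 0.9, 0.4])
b = np.array([0.4, 1.0, 1.8, 0.8])   # same shape, twice the magnitude
```

Note that SAS and SID are invariant to an overall scaling of a spectrum while the ESD is not; this is exactly the kind of difference the generalized model lets the classifier exploit.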

  11. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers to model count data.
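
Efron's double Poisson density is known only up to a normalizing constant; the simplest reliable way to evaluate it is direct summation over the support, which the sketch below uses (the paper's approximation method is a different, faster one; parameter values are illustrative):

```python
import math

def dp_unnormalized(y, mu, theta):
    # Efron's double Poisson kernel:
    #   theta^(1/2) e^(-theta mu) (e^(-y) y^y / y!) (e mu / y)^(theta y)
    if y == 0:
        return math.sqrt(theta) * math.exp(-theta * mu)
    log_term = (0.5 * math.log(theta) - theta * mu
                - y + y * math.log(y) - math.lgamma(y + 1)
                + theta * y * (1.0 + math.log(mu) - math.log(y)))
    return math.exp(log_term)

def dp_pmf(y, mu, theta, y_max=400):
    # Normalize by brute-force summation over a generous support.
    c = sum(dp_unnormalized(k, mu, theta) for k in range(y_max + 1))
    return dp_unnormalized(y, mu, theta) / c

# With theta = 1 the double Poisson collapses to the ordinary Poisson.
mu = 4.0
poisson = [math.exp(-mu) * mu ** k / math.factorial(k) for k in range(10)]
dp = [dp_pmf(k, mu, 1.0) for k in range(10)]
```

Setting theta below 1 produces over-dispersion (variance roughly mu/theta) and theta above 1 under-dispersion, which is what makes the DP GLM attractive for crash counts exhibiting either behavior.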

  12. Sparse generalized volterra model of human hippocampal spike train transformation for memory prostheses.

    PubMed

    Song, Dong; Robinson, Brian S; Hampson, Robert E; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W

    2015-01-01

    In order to build hippocampal prostheses for restoring memory functions, we build multi-input, multi-output (MIMO) nonlinear dynamical models of the human hippocampus. Spike trains are recorded from the hippocampal CA3 and CA1 regions of epileptic patients performing a memory-dependent delayed match-to-sample task. Using CA3 and CA1 spike trains as inputs and outputs respectively, second-order sparse generalized Laguerre-Volterra models are estimated with group lasso and local coordinate descent methods to capture the nonlinear dynamics underlying the spike train transformations. These models can accurately predict the CA1 spike trains based on the ongoing CA3 spike trains and thus will serve as the computational basis of the hippocampal memory prosthesis.
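
The Laguerre part of such models is standard: discrete Laguerre basis functions are generated by one low-pass stage followed by repeated all-pass stages, and the spike train is convolved with each function to build regression features. A generic construction (the paper's estimator adds the second-order Volterra structure and group lasso on top; the parameter values here are illustrative):

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(alpha, n_funcs, length):
    # Discrete Laguerre functions: stage 0 is sqrt(1-a^2)/(1 - a z^-1);
    # each later stage multiplies by the all-pass (z^-1 - a)/(1 - a z^-1).
    impulse = np.zeros(length)
    impulse[0] = 1.0
    basis = []
    x = lfilter([np.sqrt(1.0 - alpha ** 2)], [1.0, -alpha], impulse)
    basis.append(x)
    for _ in range(n_funcs - 1):
        x = lfilter([-alpha, 1.0], [1.0, -alpha], x)
        basis.append(x)
    return np.array(basis)

B = laguerre_basis(alpha=0.5, n_funcs=4, length=200)

# Convolving a spike train with each basis function yields the
# regression features of a Laguerre-Volterra model.
rng = np.random.default_rng(0)
spikes = (rng.random(500) < 0.05).astype(float)
features = np.array([np.convolve(spikes, b)[:500] for b in B])
```

The basis is orthonormal, which keeps the regression well conditioned; the decay parameter alpha sets the memory length of the model.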

  13. BlockSolve95 users manual: Scalable library software for the parallel solution of sparse linear systems

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1995-12-01

    BlockSolve95 is a software library for solving large, sparse systems of linear equations on massively parallel computers or networks of workstations. The matrices must be symmetric in structure; however, the matrix nonzero values may be either symmetric or nonsymmetric. The nonzeros must be real valued. BlockSolve95 uses a message-passing paradigm and achieves portability through the use of the MPI message-passing standard. Emphasis has been placed on achieving both good processor performance through the use of higher-level BLAS and scalability through the use of advanced algorithms. This report gives detailed instructions on the use of BlockSolve95 and descriptions of a number of program examples that can be used as templates for application programs.

  14. Sparse generalized pencil of function and its application to system identification and structural health monitoring

    NASA Astrophysics Data System (ADS)

    Mohammadi-Ghazi, Reza; Büyüköztürk, Oral

    2016-04-01

    Singularity expansion method (SEM) is a system identification approach with applications in solving inverse scattering problems, electromagnetic interaction problems, remote sensing, and radar. In this approach, the response of a system is represented in terms of its complex poles; therefore, this method not only extracts the fundamental frequencies of the system from the signal, but also provides sufficient information about the system's damping if its transient response is analyzed. There are various techniques in SEM, among which the generalized pencil-of-function (GPOF) method is the most computationally stable and the least sensitive to noise. However, SEM methods, including GPOF, suffer from the imposition of spurious poles on the expansion of signals due to the lack of a priori information about the number of true poles. In this study we address this problem by proposing the sparse generalized pencil-of-function (SGPOF) method. The proposed method excludes the spurious poles through sparsity-based regularization with the ℓ1-norm. This study is backed by numerical examples as well as an application example which employs the proposed technique for structural health monitoring (SHM) and compares the results with other signal processing methods.
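
A bare-bones matrix-pencil pole extraction shows the mechanics that SGPOF builds on: form two shifted Hankel matrices from the samples and read the signal poles off a pencil eigenproblem (plain pencil-of-function machinery without the sparsity step the paper adds; signal and pencil length are illustrative):

```python
import numpy as np

def pencil_poles(y, pencil_len, tol=1e-6):
    # Hankel matrices Y0[i, j] = y[i + j] and Y1[i, j] = y[i + j + 1].
    rows = len(y) - pencil_len
    Y0 = np.array([[y[i + j] for j in range(pencil_len)] for i in range(rows)])
    Y1 = np.array([[y[i + j + 1] for j in range(pencil_len)] for i in range(rows)])
    # Eigenvalues of pinv(Y0) @ Y1: the signal poles, padded with zeros.
    z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    return z[np.abs(z) > tol]

# Damped sinusoid: true poles are 0.95 * exp(+/- 0.3i).
n = np.arange(80)
y = 0.95 ** n * np.cos(0.3 * n)
poles = pencil_poles(y, pencil_len=20)
```

On noiseless data the pencil eigenvalues split cleanly into the two true poles plus numerical zeros; with noise, spurious poles of small but non-negligible magnitude appear, which is the problem the sparsity regularization of SGPOF is designed to suppress.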

  15. Approximate Orthogonal Sparse Embedding for Dimensionality Reduction.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Yang, Jian; Zhang, David

    2016-04-01

    Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity, or L1-norm, learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than the existing subspace learning algorithms, particularly in the case of small sample sizes.

  16. Off-Grid Direction of Arrival Estimation Based on Joint Spatial Sparsity for Distributed Sparse Linear Arrays

    PubMed Central

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-01-01

    In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain larger array aperture. We focus on the complex source distribution in the practical applications and classify the sources into common and innovation parts according to whether a signal of source can impinge on all the SLAs or a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create the relationship of random linear map between the signals respectively observed by these two arrays. The signal ensembles including the common/innovation sources for different SLAs are abstracted as a joint spatial sparsity model. And we use the minimization of concatenated atomic norm via semidefinite programming to solve the problem of joint DOA estimation. Joint calculation of the signals observed by all the SLAs exploits their redundancy caused by the common sources and decreases the requirement of array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150

  17. Off-grid direction of arrival estimation based on joint spatial sparsity for distributed sparse linear arrays.

    PubMed

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-11-20

    In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain larger array aperture. We focus on the complex source distribution in the practical applications and classify the sources into common and innovation parts according to whether a signal of source can impinge on all the SLAs or a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create the relationship of random linear map between the signals respectively observed by these two arrays. The signal ensembles including the common/innovation sources for different SLAs are abstracted as a joint spatial sparsity model. And we use the minimization of concatenated atomic norm via semidefinite programming to solve the problem of joint DOA estimation. Joint calculation of the signals observed by all the SLAs exploits their redundancy caused by the common sources and decreases the requirement of array size. The numerical results illustrate the advantages of the proposed approach.

  18. A Performance Comparison of the Parallel Preconditioners for Iterative Methods for Large Sparse Linear Systems Arising from Partial Differential Equations on Structured Grids

    NASA Astrophysics Data System (ADS)

    Ma, Sangback

    In this paper we compare various parallel preconditioners such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse) and pARMS (Parallel Algebraic Recursive Multilevel Solver) for solving large sparse linear systems arising from two-dimensional PDE (Partial Differential Equation)s on structured grids. Point-SSOR is well-known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism in the natural order, but the lengths of the wave-fronts are often nonuniform. ILU(0) in the Multi-color ordering is a simple way of achieving a parallelism of order N, where N is the order of the matrix, but its convergence rate often deteriorates as compared to that of the natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communications. SPAI computes the sparse approximate inverse directly by the least squares method. Finally, ARMS is a preconditioner recursively exploiting the concept of independent sets, and pARMS is the parallel version of ARMS. Experiments were conducted for the Finite Difference and Finite Element discretizations of five two-dimensional PDEs with large meshsizes up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We have used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communications were done using MPI (Message Passing Interface) primitives. The
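
The trade-off these experiments measure can be reproduced in miniature: a preconditioned conjugate-gradient loop on a 1D Poisson matrix, with the Point-SSOR preconditioner applied as one forward and one backward triangular sweep. This is a tiny dense sketch, nothing like the parallel setting of the paper, and CG stands in for GMRES(m) since the model problem is SPD:

```python
import numpy as np
from scipy.linalg import solve_triangular

def pcg(A, b, apply_Minv, tol=1e-10, max_iter=5000):
    # Preconditioned conjugate gradients; returns solution and iteration count.
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        step = rz / (p @ Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson matrix
b = np.ones(n)

d = np.diag(A)
L = np.tril(A, -1)
U = np.triu(A, 1)
omega = 1.5

def ssor_Minv(r):
    # Point-SSOR: M = (D + wL) D^-1 (D + wU) / (w (2 - w)); applying its
    # inverse is one forward and one backward triangular sweep.
    y = solve_triangular(np.diag(d) + omega * L, r, lower=True)
    y = solve_triangular(np.diag(d) + omega * U, d * y, lower=False)
    return omega * (2.0 - omega) * y

x_plain, it_plain = pcg(A, b, lambda r: r)    # unpreconditioned CG
x_ssor, it_ssor = pcg(A, b, ssor_Minv)        # SSOR-preconditioned CG
```

The SSOR-preconditioned solve needs markedly fewer iterations than the plain one; the paper's question is how well such gains survive once the triangular sweeps, which are inherently serial, must be parallelized.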

  19. An automatic multigrid method for the solution of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDEs is presented. This version is based solely on the structure of the algebraic system, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is preserved for other classes of problems: non-symmetric problems, hyperbolic problems (even with closed characteristics), and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems, and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found to be better than known strategies.
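
The classical structure the abstract refers to, in its smallest form: a V-cycle for the 1D Poisson equation with weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation (a generic geometric V-cycle for illustration, not the algebraic version of the paper):

```python
import numpy as np

def apply_A(u, h):
    # Standard (-1, 2, -1)/h^2 stencil on interior points, zero boundaries.
    return (2.0 * u - np.r_[u[1:], 0.0] - np.r_[0.0, u[:-1]]) / h ** 2

def v_cycle(u, f, h, n_smooth=3, omega=2.0 / 3.0):
    # Pre-smooth with weighted Jacobi (the diagonal of A is 2/h^2).
    for _ in range(n_smooth):
        u = u + omega * (h ** 2 / 2.0) * (f - apply_A(u, h))
    if len(u) == 1:
        return np.array([f[0] * h ** 2 / 2.0])  # exact solve on coarsest grid
    # Restrict the residual by full weighting, recurse, interpolate back.
    r = f - apply_A(u, h)
    rc = (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2]) / 4.0
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)
    e[1::2] = ec
    e[0::2] = (np.r_[0.0, ec] + np.r_[ec, 0.0]) / 2.0
    u = u + e
    # Post-smooth.
    for _ in range(n_smooth):
        u = u + omega * (h ** 2 / 2.0) * (f - apply_A(u, h))
    return u

# Model problem -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, solution sin(pi x).
m = 63
h = 1.0 / (m + 1)
x = np.arange(1, m + 1) * h
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(m)
for _ in range(8):
    u = v_cycle(u, f, h)
```

Each V-cycle reduces the algebraic residual by a grid-independent factor; the paper's contribution is obtaining this behavior from the algebraic system alone, without the geometric grid hierarchy hard-wired above.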

  20. Collective synchronization as a method of learning and generalization from sparse data

    NASA Astrophysics Data System (ADS)

    Miyano, Takaya; Tsutsui, Takako

    2008-02-01

    We propose a method for extracting general features from multivariate data using a network of phase oscillators subject to an analogue of the Kuramoto model for collective synchronization. In this method, the natural frequencies of the oscillators are extended to vector quantities to which multivariate data are assigned. The common frequency vectors of the groups of partially synchronized oscillators are interpreted to be the template vectors representing the general features of the data set. We show that the proposed method becomes equivalent to the self-organizing map algorithm devised by Kohonen when the governing equations are linearized about their solutions of partial synchronization. As a case study to test the utility of our method, we applied it to care-needs-certification data in the Japanese public long-term care insurance program, and found major general patterns in the health status of the elderly needing nursing care.
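
The underlying Kuramoto dynamics are easy to simulate: with scalar natural frequencies and strong coupling the phases lock, which is the collective synchronization the method exploits (plain scalar Kuramoto, without the paper's vector-frequency extension; all parameter values are illustrative):

```python
import numpy as np

def simulate_kuramoto(freqs, K, t_end=20.0, dt=0.01, seed=0):
    # Euler integration of d(theta_i)/dt = w_i + (K/N) sum_j sin(theta_j - theta_i),
    # written in mean-field form via the complex order parameter r e^{i psi}.
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    for _ in range(int(t_end / dt)):
        mean_field = np.exp(1j * theta).mean()
        theta = theta + dt * (freqs + K * np.abs(mean_field)
                              * np.sin(np.angle(mean_field) - theta))
    return np.abs(np.exp(1j * theta).mean())  # order parameter r in [0, 1]

rng = np.random.default_rng(1)
freqs = rng.normal(0.0, 1.0, 200)
r_strong = simulate_kuramoto(freqs, K=5.0)  # well above the critical coupling
r_zero = simulate_kuramoto(freqs, K=0.0)    # uncoupled oscillators
```

Above the critical coupling the order parameter approaches one (partial synchronization of groups of oscillators), while uncoupled oscillators stay incoherent; the paper replaces the scalar frequencies with data vectors so that the synchronized groups reveal common features.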

  1. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and is increasingly used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. Thus, instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and
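
    A minimal sketch of the conventional SRC scheme that mSRC generalizes (one sparse code per class dictionary, assignment by minimum residual energy) is given below; orthogonal matching pursuit stands in for the sparse coder, and all sizes and names are illustrative, not from the cited work.

```python
import numpy as np

def omp(D, y, k):
    # Orthogonal matching pursuit: greedy k-sparse code of y in dictionary D
    # (columns of D assumed normalized); returns support, coefficients, residual.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef, residual

def src_classify(dictionaries, y, k=3):
    # Assign y to the class whose dictionary yields the smallest
    # k-sparse residual energy (conventional SRC decision rule).
    energies = [np.sum(omp(D, y, k)[2] ** 2) for D in dictionaries]
    return int(np.argmin(energies))
```

A patch synthesized from one class's dictionary should be reconstructed almost exactly by that dictionary and poorly by the others, which drives the minimum-residual decision.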

  2. Centering, Scale Indeterminacy, and Differential Item Functioning Detection in Hierarchical Generalized Linear and Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Cheong, Yuk Fai; Kamata, Akihito

    2013-01-01

    In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…

  4. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra

    NASA Astrophysics Data System (ADS)

    Hine, N. D. M.; Haynes, P. D.; Mostofi, A. A.; Payne, M. C.

    2010-09-01

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  5. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  6. Analysis, tuning and comparison of two general sparse solvers for distributed memory computers

    SciTech Connect

    Amestoy, P.R.; Duff, I.S.; L'Excellent, J.-Y.; Li, X.S.

    2000-06-30

    We describe the work performed in the context of a Franco-Berkeley funded project between NERSC-LBNL located in Berkeley (USA) and CERFACS-ENSEEIHT located in Toulouse (France). We discuss both the tuning and performance analysis of two distributed memory sparse solvers (SuperLU from Berkeley and MUMPS from Toulouse) on the 512-processor Cray T3E from NERSC (Lawrence Berkeley National Laboratory). This project gave us the opportunity to improve the algorithms and add new features to the codes. We then analyze and compare the two approaches extensively on a set of large problems from real applications, and explain the main differences in their behavior on artificial regular grid problems. As a conclusion to this activity report, we mention a set of parallel sparse solvers to which this type of study should be extended.

  7. Linear stability of general magnetically insulated electron flow

    NASA Astrophysics Data System (ADS)

    Swegle, J. A.; Mendel, C. W., Jr.; Seidel, D. B.; Quintenz, J. P.

    1984-03-01

    A linear stability theory for magnetically insulated systems was formulated by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. It is found that, in the case of electron trajectories which are nearly laminar, with only small transverse motion, several suggestive simplifications occur in the eigenvalue equations.

  8. The General Linear Model and Direct Standardization: A Comparison.

    ERIC Educational Resources Information Center

    Little, Roderick J. A.; Pullum, Thomas W.

    1979-01-01

    Two methods of analyzing nonorthogonal (uneven cell sizes) cross-classified data sets are compared. The methods are direct standardization and the general linear model. The authors illustrate when direct standardization may be a desirable method of analysis. (JKS)

  9. From linear to generalized linear mixed models: A case study in repeated measures

    USDA-ARS?s Scientific Manuscript database

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  10. Linear Programming Solutions of Generalized Linear Impulsive Correction for Geostationary Stationkeeping

    NASA Astrophysics Data System (ADS)

    Park, Jae Woo

    1996-06-01

    The generalized linear impulsive correction problem is applied to make a linear programming problem for optimizing trajectory of an orbiting spacecraft. Numerical application for the stationkeeping maneuver problem of geostationary satellite shows that this problem can efficiently find the optimal solution of the stationkeeping parameters, such as velocity changes, and the points of impulse by using the revised simplex method.
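
    A hedged sketch of the LP formulation only (minimum total impulse magnitude under a linearized correction constraint) is shown below, using SciPy's `linprog` rather than a hand-coded revised simplex; the constraint matrix and target are toy values, not an orbit model from the cited work.

```python
import numpy as np
from scipy.optimize import linprog

def min_fuel_impulses(A, target):
    """Minimize sum_i |dv_i| subject to A @ dv = target (a linearized
    impulsive-correction constraint), via the standard LP split dv = p - q,
    with p, q >= 0 so that sum(p + q) equals the total impulse magnitude."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: total impulse magnitude
    A_eq = np.hstack([A, -A])          # enforces A @ (p - q) = target
    res = linprog(c, A_eq=A_eq, b_eq=target, bounds=[(0, None)] * (2 * n))
    p, q = res.x[:n], res.x[n:]
    return p - q, res.fun
```

For example, with A = [[1, 0, 1], [0, 1, 0]] and target [1, 1], any optimal plan spends a total impulse of 2 while satisfying the correction exactly.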

  11. Generalized perceptual linear prediction features for animal vocalization analysis.

    PubMed

    Clemins, Patrick J; Johnson, Michael T

    2006-07-01

    A new feature extraction model, generalized perceptual linear prediction (gPLP), is developed to calculate a set of perceptually relevant features for digital signal analysis of animal vocalizations. The gPLP model is a generalized adaptation of the perceptual linear prediction model, popular in human speech processing, which incorporates perceptual information such as frequency warping and equal loudness normalization into the feature extraction process. Since such perceptual information is available for a number of animal species, this new approach integrates that information into a generalized model to extract perceptually relevant features for a particular species. To illustrate, qualitative and quantitative comparisons are made between the species-specific model, generalized perceptual linear prediction (gPLP), and the original PLP model using a set of vocalizations collected from captive African elephants (Loxodonta africana) and wild beluga whales (Delphinapterus leucas). The models that incorporate perceptual information outperform the original human-based models in both visualization and classification tasks.

  12. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
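
    A rough numerical sketch of the sampling-plus-preconditioning idea is given below for the Legendre case on [-1, 1], whose equilibrium measure is the arcsine (Chebyshev) measure; an ISTA-solved lasso stands in for the exact ℓ1-minimization of the paper, and all parameter values are illustrative assumptions.

```python
import numpy as np

def christoffel_preconditioned_lasso(f, degree=30, n_samples=60, lam=1e-4,
                                     iters=5000, seed=0):
    """Recover Legendre coefficients of f from samples drawn from the arcsine
    equilibrium measure, with diagonal Christoffel-function preconditioning."""
    rng = np.random.default_rng(seed)
    # Sample from the equilibrium (arcsine) measure of [-1, 1]
    x = np.cos(np.pi * rng.uniform(size=n_samples))
    # Legendre Vandermonde, rescaled so columns are orthonormal w.r.t.
    # the uniform probability measure on [-1, 1]: p_k = sqrt(2k+1) P_k
    V = np.polynomial.legendre.legvander(x, degree)
    V *= np.sqrt(2 * np.arange(degree + 1) + 1)
    # Christoffel-function weights: w(x) = N / sum_k p_k(x)^2
    w = (degree + 1) / np.sum(V**2, axis=1)
    A = np.sqrt(w)[:, None] * V
    b = np.sqrt(w) * f(x)
    # ISTA (proximal gradient) for min 0.5*||Ac - b||^2 + lam*||c||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(degree + 1)
    for _ in range(iters):
        g = c - step * A.T @ (A @ c - b)
        c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return c
```

Feeding in a function that is a single Legendre polynomial should return a coefficient vector concentrated on that one index.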

  13. A Bayesian approach for inducing sparsity in generalized linear models with multi-category response

    PubMed Central

    2015-01-01

    Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other 3 methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345

  14. A general non-linear multilevel structural equation mixture model

    PubMed Central

    Kelava, Augustin; Brandt, Holger

    2014-01-01

    In the past 2 decades latent variable modeling has become a standard tool in the social sciences. In the same time period, traditional linear structural equation models have been extended to include non-linear interaction and quadratic effects (e.g., Klein and Moosbrugger, 2000), and multilevel modeling (Rabe-Hesketh et al., 2004). We present a general non-linear multilevel structural equation mixture model (GNM-SEMM) that combines recent semiparametric non-linear structural equation models (Kelava and Nagengast, 2012; Kelava et al., 2014) with multilevel structural equation mixture models (Muthén and Asparouhov, 2009) for clustered and non-normally distributed data. The proposed approach allows for semiparametric relationships at the within and at the between levels. We present examples from the educational science to illustrate different submodels from the general framework. PMID:25101022

  15. Linear equations in general purpose codes for stiff ODEs

    SciTech Connect

    Shampine, L. F.

    1980-02-01

    It is noted that it is possible to improve significantly the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations analytical evaluation of the Jacobian is a lot cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian in linear problems. (RWR)
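
    The point above can be made concrete: for a linear problem u' = Au + b the Jacobian is exactly the constant matrix A, so supplying it analytically is both cheaper and more accurate than numerical differencing. A minimal sketch with SciPy's stiff BDF integrator follows; the matrix values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff linear system u' = A u + b: eigenvalues -1000 and -0.01 differ
# by five orders of magnitude, so an implicit (BDF) method is appropriate.
A = np.array([[-1000.0, 1.0],
              [0.0, -0.01]])
b = np.array([0.0, 1.0])

# jac returns the exact, constant Jacobian -- no differencing needed.
sol = solve_ivp(lambda t, u: A @ u + b, (0.0, 100.0), [1.0, 0.0],
                method='BDF', jac=lambda t, u: A, rtol=1e-8, atol=1e-10)
```

The second component decouples as u2' = -0.01 u2 + 1, so u2(100) = 100(1 - e^{-1}), which checks the integration.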

  16. Generalized Linear Multi-Frequency Imaging in VLBI

    NASA Astrophysics Data System (ADS)

    Likhachev, S.; Ladygin, V.; Guirin, I.

    2004-07-01

    In VLBI, generalized Linear Multi-Frequency Imaging (MFI) consists of multi-frequency synthesis (MFS) and multi-frequency analysis (MFA) of the VLBI data obtained from observations on various frequencies. A set of linear deconvolution MFI algorithms is described. The algorithms make it possible to obtain high quality images interpolated on any given frequency inside any given bandwidth, and to derive reliable estimates of spectral indexes for radio sources with continuum spectrum.

  17. A Matrix Approach for General Higher Order Linear Recurrences

    DTIC Science & Technology

    2011-01-01

    properties of linear recurrences (such as the well-known Fibonacci and Pell sequences). In [2], Er defined k linear recurring sequences of order at ... the nth term of the ith generalized order-k Fibonacci sequence. Communicated by Lee See Keong. Received: March 26, 2009; Revised: August 28, 2009. ... In [6], the author gave the generalized order-k Fibonacci and Pell (F-P) sequence as follows: for m ≥ 0, n > 0 and 1 ≤ i ≤ k, u^i_n = 2^m u^i_{n-1} + u^i_{n-2}

  18. Optimal explicit strong-stability-preserving general linear methods.

    SciTech Connect

    Constantinescu, E.; Sandu, A.

    2010-07-01

    This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
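
    As a hedged illustration of the strong-stability-preserving idea (using the classical three-stage Shu-Osher SSP Runge-Kutta method, not the general linear methods constructed in the paper), the sketch below advances a method-of-lines upwind discretization of linear advection; grid and CFL values are illustrative.

```python
import numpy as np

def ssprk3_advection(n=200, c=1.0, t_end=0.5):
    # Method of lines for u_t + c u_x = 0 (periodic), first-order upwind
    # in space, advanced with the three-stage Shu-Osher SSP Runge-Kutta method.
    dx = 1.0 / n
    x = np.arange(n) * dx
    u = np.exp(-100 * (x - 0.5) ** 2)
    dt = 0.5 * dx / c                       # CFL number 0.5, within the SSP limit
    rhs = lambda v: -c * (v - np.roll(v, 1)) / dx
    t = 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        u1 = u + h * rhs(u)                             # stage 1: forward Euler
        u2 = 0.75 * u + 0.25 * (u1 + h * rhs(u1))       # stage 2: convex combination
        u = u / 3 + (2 / 3) * (u2 + h * rhs(u2))        # stage 3: convex combination
        t += h
    return x, u
```

Because each stage is a convex combination of forward-Euler steps, the scheme inherits the upwind method's maximum principle and exact conservation, which the assertions below check.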

  19. Solution of generalized shifted linear systems with complex symmetric matrices

    NASA Astrophysics Data System (ADS)

    Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo

    2012-07-01

    We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green's function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1-9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126-140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.
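
    The problem structure can be sketched as follows: one right-hand side, many shifted complex symmetric systems. The naive loop below solves each shift independently (one factorization per shift); the shifted Krylov methods of the paper instead build a single subspace that serves all shifts. Sizes and shift values are illustrative.

```python
import numpy as np

# Generalized shifted systems (A + sigma_k * B) x_k = b for many shifts sigma_k.
rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M + M.T                  # complex symmetric (A = A^T, but not Hermitian)
B = np.eye(n)
b = rng.standard_normal(n)
shifts = [0.5 + 0.1j, 1.0 + 0.1j, 2.0 + 0.1j]

# Direct solve per shift: correct but repeats the O(n^3) work for every sigma.
solutions = [np.linalg.solve(A + s * B, b) for s in shifts]
```
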

  20. Beam envelope calculations in general linear coupled lattices

    SciTech Connect

    Chung, Moses; Qin, Hong; Groening, Lars; Xiao, Chen; Davidson, Ronald C.

    2015-01-15

    The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.

  1. Hierarchical Generalized Linear Models for the Analysis of Judge Ratings

    ERIC Educational Resources Information Center

    Muckle, Timothy J.; Karabatsos, George

    2009-01-01

    It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…

  2. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  4. Generalized TV and sparse decomposition of the ultrasound image deconvolution model based on fusion technology.

    PubMed

    Wen, Qiaonong; Wan, Suiren

    2013-01-01

    Ultrasound image deconvolution involves both noise reduction and image feature enhancement: denoising is essentially low-pass filtering, while feature enhancement strengthens the high-frequency components, so the two requirements pull in opposite directions and must be balanced carefully. Deconvolution based on partial differential equation models rests on diffusion theory, whereas sparse-decomposition deconvolution is an image-representation-based method. The two mechanisms differ, and each has its own characteristic strengths. In the contourlet transform domain, we combine the strengths of the two deconvolution methods by image fusion, introducing the entropy of the local orientation energy ratio into the fusion decision so that the low-frequency and high-frequency coefficients are treated differently according to the actual situation. Because deconvolution inevitably blurs edge information, we fuse edge gray-scale information into the deconvolution result to compensate for the missing edges. Experiments show that our method performs better than either deconvolution method used separately and restores part of the image edge information.

  5. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    The most common and frequently occurring neurological disorder is epilepsy, and the main method used for its diagnosis is electroencephalogram (EEG) signal analysis. Because of the length of EEG recordings, EEG analysis is quite time-consuming when performed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic encephalographic signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to analyze the classification of epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.

  6. A Generalized Linear Model for Estimating Spectrotemporal Receptive Fields from Responses to Natural Sounds

    PubMed Central

    Calabrese, Ana; Schumacher, Joseph W.; Schneider, David M.; Paninski, Liam; Woolley, Sarah M. N.

    2011-01-01

    In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons. PMID:21264310
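
    A minimal, hedged sketch of the Poisson-GLM core of this approach (an exponential nonlinearity fit by gradient ascent, without the STRF structure, sparse prior, or real spike data of the study) is given below; the design matrix's columns can hold both stimulus and spike-history covariates, and all values are illustrative.

```python
import numpy as np

def fit_poisson_glm(X, y, lr=0.1, iters=2000):
    """Fit a Poisson GLM with exponential nonlinearity, rate = exp(X @ w),
    by gradient ascent on the Poisson log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        rate = np.exp(X @ w)
        # gradient of the Poisson log-likelihood: X^T (y - rate), averaged
        w += lr * X.T @ (y - rate) / len(y)
    return w
```

On spike counts simulated from known weights, the fitted filter should recover those weights up to sampling noise.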

  7. A general theory of linear cosmological perturbations: bimetric theories

    NASA Astrophysics Data System (ADS)

    Lagos, Macarena; Ferreira, Pedro G.

    2017-01-01

    We implement the method developed in [1] to construct the most general parametrised action for linear cosmological perturbations of bimetric theories of gravity. Specifically, we consider perturbations around a homogeneous and isotropic background, and identify the complete form of the action invariant under diffeomorphism transformations, as well as the number of free parameters characterising this cosmological class of theories. We discuss, in detail, the case without derivative interactions, and compare our results with those found in massive bigravity.

  8. Electromagnetic axial anomaly in a generalized linear sigma model

    NASA Astrophysics Data System (ADS)

    Fariborz, Amir H.; Jora, Renata

    2017-06-01

    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η′(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  9. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the resulting credible risk classes are interpreted.
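
    The limited-fluctuation ("full credibility") standard mentioned above has a compact closed form for Poisson claim frequency: the expected claim count must reach (z/k)^2, where z is the normal quantile for coverage probability p and k the tolerated relative error. A hedged sketch (standard actuarial formula, not code from the cited study) follows.

```python
from math import sqrt
from statistics import NormalDist

def full_credibility_standard(k=0.05, p=0.90):
    # Expected number of claims needed for full credibility of a Poisson
    # claim frequency: observed frequency within +/-k of its mean w.p. p.
    z = NormalDist().inv_cdf((1 + p) / 2)
    return (z / k) ** 2

def credibility_factor(n, k=0.05, p=0.90):
    # Square-root rule for partial (limited-fluctuation) credibility,
    # capped at full credibility Z = 1.
    return min(1.0, sqrt(n / full_credibility_standard(k, p)))
```

With the textbook choices k = 5% and p = 90%, the standard evaluates to the classic figure of about 1082 expected claims.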

  10. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    SciTech Connect

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank E-mail: evaleev@vt.edu; Valeev, Edward F. E-mail: evaleev@vt.edu

    2016-01-14

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  11. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous

  12. Sparse maps--A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory.

    PubMed

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F; Neese, Frank

    2016-01-14

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  13. Residuals analysis of the generalized linear models for longitudinal data.

    PubMed

    Chang, Y C

    2000-05-30

The generalized estimating equation (GEE) method, one of the generalized linear models for longitudinal data, has been used widely in medical research. However, the related sensitivity analysis problem has not been explored intensively, partly because of the correlated structure within the same subject. We showed that the conventional residual plots for model diagnosis in longitudinal data could mislead a researcher into trusting the fitted model. A non-parametric method, the Wald-Wolfowitz run test, was proposed to check the residual plots both quantitatively and graphically. The rationale proposed in this paper is well illustrated with two real clinical studies in Taiwan.
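As a rough illustration of the diagnostic the abstract describes (a sketch of the classical runs test, not the authors' exact implementation), the Wald-Wolfowitz test can be applied to the signs of fitted residuals:

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of a residual sequence.

    Returns the observed number of runs and an approximate two-sided
    normal z-statistic; too few runs suggests positively correlated
    residuals rather than a random scatter around zero.
    """
    signs = [r >= 0 for r in residuals if r != 0]
    n1 = sum(signs)                  # positive residuals
    n2 = len(signs) - n1             # negative residuals
    runs = 1 + sum(signs[i] != signs[i - 1] for i in range(1, len(signs)))
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / math.sqrt(var)
    return runs, z

# Perfectly alternating residuals give the maximum possible number of runs.
runs, z = runs_test([1, -1, 1, -1, 1, -1, 1, -1])
print(runs, round(z, 2))  # 8 runs, z ≈ 2.29
```

A large positive z (too many runs) or large negative z (too few) both flag non-random residual patterns that a visual inspection of the plot might miss.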

  14. Linear spin-2 fields in most general backgrounds

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael

    2016-04-01

    We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.

  15. Comparative Study of Algorithms for Automated Generalization of Linear Objects

    NASA Astrophysics Data System (ADS)

    Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.

    2014-11-01

Automated generalization, rooted in conventional cartography, has become an increasing concern in both the geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for human beings to observe the Earth and the processes in it without decreasing its scale. To get optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules remain a topic of ongoing research. Reducing map generation time and adding objectivity are possible by developing automated map generalization algorithms (McMaster and Shea, 1988). Modification of the scale is traditionally a manual process that requires the knowledge of an expert cartographer and depends on the experience of the user, which makes the process very subjective, as different users may generate different maps from the same requirements. Automating generalization based on cartographic rules and constraints, in contrast, can give consistent results. Developing an automated system for map generation is also a demand of this rapidly changing world. The research we have conducted considers only generalization of roads, as the road network is one of the indispensable parts of a map. Dehradun city in Uttarakhand state, India, was selected as the study area. The study compares the generalization software, operations and algorithms currently available, and considers the advantages and drawbacks of existing software used worldwide. The research concludes with the development of a road network generalization tool and a final generalized road map of the study area, which explores the use of the open source Python programming language and attempts to compare different road network generalization algorithms. Thus, the paper discusses alternative solutions for the automated generalization of linear objects, in particular road networks, using GIS technologies.
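One classic line-generalization algorithm of the kind such comparisons typically include is Douglas-Peucker; a minimal sketch (illustrative coordinates, not tied to the study's tool) is:

```python
def douglas_peucker(points, tol):
    """Recursively simplify a polyline, keeping points whose perpendicular
    distance from the chord between the endpoints exceeds `tol` (a classic
    road-generalization algorithm)."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance of each interior point from the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__)
    if dists[imax] <= tol:
        return [points[0], points[-1]]     # everything within tolerance
    split = imax + 1                       # keep the farthest point, recurse
    left = douglas_peucker(points[: split + 1], tol)
    right = douglas_peucker(points[split:], tol)
    return left[:-1] + right

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))  # endpoints plus the two sharpest vertices
```

The tolerance plays the role of the scale-dependent constraint: a larger `tol` yields a coarser, more generalized road.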

  16. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver

    2015-06-01

Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  17. Extracting Embedded Generalized Networks from Linear Programming Problems.

    DTIC Science & Technology

    1984-09-01

Extracting Embedded Generalized Networks from Linear Programming Problems, by Gerald G. Brown, Richard D. McBride, and R. Kevin Wood. Naval Postgraduate School, Monterey, California 93943; University of Southern California, Los Angeles.

  18. Generalization of continuous-variable quantum cloning with linear optics

    SciTech Connect

    Zhai Zehui; Guo Juan; Gao Jiangrui

    2006-05-15

    We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.

  19. Generalized space and linear momentum operators in quantum mechanics

    SciTech Connect

    Costa, Bruno G. da

    2014-06-15

We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which leads to the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation leads the Hamiltonian of a position-dependent mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is presented as an example. Uncertainty and correspondence principles are analyzed.

  20. General quantum constraints on detector noise in continuous linear measurements

    NASA Astrophysics Data System (ADS)

    Miao, Haixing

    2017-01-01

In quantum sensing and metrology, an important class of measurement is the continuous linear measurement, in which the detector is coupled to the system of interest linearly and continuously in time. One key aspect involved is the quantum noise of the detector, arising from quantum fluctuations in the detector input and output. It determines how fast we acquire information about the system and also influences the system evolution in terms of measurement backaction. We therefore often categorize it as the so-called imprecision noise and quantum backaction noise. There is a general Heisenberg-like uncertainty relation that constrains the magnitude of and the correlation between these two types of quantum noise. The main result of this paper is to show that, when the detector becomes ideal, i.e., at the quantum limit with minimum uncertainty, not only does the uncertainty relation take the equal sign as expected, but there are also two new equalities. This general result is illustrated by using the typical cavity QED setup with the system being either a qubit or a mechanical oscillator. In particular, the dispersive readout of a qubit state, and the measurement of mechanical motional sideband asymmetry, are considered.

  1. Generalized linear mixed model for segregation distortion analysis.

    PubMed

    Zhan, Haimao; Xu, Shizhong

    2011-11-11

Segregation distortion is a phenomenon in which the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F(2) mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Not only can the method be used to detect segregation distortion loci, but it can also be used for mapping quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals.
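The underlying phenomenon is easy to screen for per locus; a minimal sketch (a plain chi-square test against the expected 1:2:1 F2 ratio, not the paper's joint GLMM) is:

```python
# Screen a single F2 locus for segregation distortion: compare observed
# genotype counts against the Mendelian 1:2:1 expectation with a
# chi-square statistic (2 degrees of freedom).
def segregation_chi2(n_AA, n_Aa, n_aa):
    n = n_AA + n_Aa + n_aa
    expected = (n / 4, n / 2, n / 4)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(segregation_chi2(25, 50, 25))             # perfectly Mendelian -> 0.0
print(segregation_chi2(10, 50, 40))             # distorted locus -> 18.0
```

Such a per-locus test ignores linkage between markers; the GLMM in the paper exists precisely to map all viability selection loci jointly.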

  2. Generalized linear mixed model for segregation distortion analysis

    PubMed Central

    2011-01-01

Background Segregation distortion is a phenomenon in which the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. Results We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F2 mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Conclusions Not only can the method be used to detect segregation distortion loci, but it can also be used for mapping quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals. PMID:22078575

  3. A new family of gauges in linearized general relativity

    NASA Astrophysics Data System (ADS)

    Esposito, Giampiero; Stornaiolo, Cosimo

    2000-05-01

    For vacuum Maxwell theory in four dimensions, a supplementary condition exists (due to Eastwood and Singer) which is invariant under conformal rescalings of the metric, in agreement with the conformal symmetry of the Maxwell equations. Thus, starting from the de Donder gauge, which is not conformally invariant but is the gravitational counterpart of the Lorenz gauge, one can consider, led by formal analogy, a new family of gauges in general relativity, which involve fifth-order covariant derivatives of metric perturbations. The admissibility of such gauges in the classical theory is first proven in the cases of linearized theory about flat Euclidean space or flat Minkowski spacetime. In the former, the general solution of the equation for the fulfillment of the gauge condition after infinitesimal diffeomorphisms involves a 3-harmonic 1-form and an inverse Fourier transform. In the latter, one needs instead the kernel of powers of the wave operator, and a contour integral. The analysis is also used to put restrictions on the dimensionless parameter occurring in the DeWitt supermetric, while the proof of admissibility is generalized to a suitable class of curved Riemannian backgrounds. Eventually, a non-local construction of the tensor field is obtained which makes it possible to achieve conformal invariance of the above gauges.

  4. On homogeneous second order linear general quantum difference equations.

    PubMed

    Faried, Nashat; Shehata, Enas M; El Zafarani, Rasha M

    2017-01-01

In this paper, we prove the existence and uniqueness of solutions of the β-Cauchy problem of second order β-difference equations [Formula: see text] [Formula: see text], in a neighborhood of the unique fixed point [Formula: see text] of the strictly increasing continuous function β, defined on an interval [Formula: see text]. These equations are based on the general quantum difference operator [Formula: see text], which is defined by [Formula: see text], [Formula: see text]. We also construct a fundamental set of solutions for the second order linear homogeneous β-difference equations when the coefficients are constants and study the different cases of the roots of their characteristic equations. Finally, we derive the Euler-Cauchy β-difference equation.
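The general quantum difference operator reduces to the Jackson q-derivative for β(t) = qt; a minimal numerical sketch (hypothetical helper names, sketching the operator definition from the literature) is:

```python
# General quantum difference operator for a strictly increasing beta:
#   D_beta f(t) = (f(beta(t)) - f(t)) / (beta(t) - t),   t != fixed point.
def D_beta(f, beta, t):
    return (f(beta(t)) - f(t)) / (beta(t) - t)

q = 0.5
beta = lambda t: q * t        # beta(t) = q*t gives the Jackson q-derivative
f = lambda t: t ** 2

# The q-derivative of t^2 is (1 + q) * t; at t = 2 this is 1.5 * 2 = 3.
print(D_beta(f, beta, 2.0))   # 3.0
```

As q → 1 the difference quotient tends to the ordinary derivative 2t, which is the correspondence the quantum calculus generalizes.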

  5. Optimization in generalized linear models: A case study

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina

    2016-06-01

The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness of fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular of the quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global particle swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates of the parameters of a GLM.
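A minimal sketch of the BFGS alternative, assuming NumPy and SciPy are available and using a synthetic Poisson GLM rather than the reservoir dataset:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic Poisson GLM with log link: y ~ Poisson(exp(X @ beta)).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
beta_true = np.array([0.5, 1.0])
y = rng.poisson(np.exp(X @ beta_true))

def negloglik(beta):
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta)   # Poisson NLL up to an additive constant

def grad(beta):
    return X.T @ (np.exp(X @ beta) - y)    # analytic gradient helps BFGS

fit = minimize(negloglik, x0=np.zeros(2), jac=grad, method="BFGS")
print(fit.x)  # close to beta_true
```

Fisher scoring would iterate on the same likelihood with the expected information matrix; BFGS instead builds up a quasi-Newton approximation from gradients, which is what makes it a drop-in alternative.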

  6. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
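The basic scheme can be sketched with dense stand-ins for the banded solvers (assuming NumPy and SciPy; the shift and the Rayleigh-Ritz step follow the standard subspace iteration recipe, not the paper's parallel implementation):

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(K, M, sigma, p, iters=60):
    """Shifted subspace iteration for the p eigenpairs of K x = lam M x
    nearest the shift sigma.  In the paper's setting K - sigma*M is banded
    and its solves are distributed; a dense solve stands in here."""
    n = K.shape[0]
    X = np.random.default_rng(1).normal(size=(n, p))
    for _ in range(iters):
        X = np.linalg.solve(K - sigma * M, M @ X)   # inverse iteration step
        X, _ = np.linalg.qr(X)                      # keep the basis well conditioned
    # Rayleigh-Ritz: small projected generalized eigenproblem.
    evals, V = eigh(X.T @ K @ X, X.T @ M @ X)
    return evals, X @ V

# Test problem: 1-D Laplacian stiffness matrix with identity mass matrix.
n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
evals, _ = subspace_iteration(K, M, sigma=0.0, p=3)
exact = [2 - 2 * np.cos(np.pi * k / (n + 1)) for k in (1, 2, 3)]
print(evals, exact)  # the three smallest eigenvalues agree
```

Choosing sigma near the wanted eigenvalues shrinks the convergence ratio of the iteration, which is the acceleration effect the abstract describes; the communication/computation balance of the banded solves then determines the optimal shift in the parallel setting.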

  7. Modeling local item dependence with the hierarchical generalized linear model.

    PubMed

    Jiao, Hong; Wang, Shudong; Kamata, Akihito

    2005-01-01

    Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.

  8. Finding Nonoverlapping Substructures of a Sparse Matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2005-08-11

Many applications of scientific computing rely on computations on sparse matrices. The design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the low compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far from the peak performance of a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d}, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping dense blocks in a sparse matrix, which has not previously been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm that runs in linear time in the number of nonzeros in the matrix. This extended abstract focuses on our results for 2x2 dense blocks. However, we show that our results can be generalized to arbitrarily sized dense blocks, and to many other oriented substructures, which can be exploited to improve the memory performance of sparse matrix operations.
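A simple greedy baseline for the problem (weaker than the paper's 2/3-approximation, which requires a more careful matching step) can be sketched as:

```python
def greedy_2x2_blocks(entries):
    """Greedily pick nonoverlapping, fully dense 2x2 blocks from a set of
    nonzero coordinates, scanning in row-major order.  A simple baseline,
    not the paper's 2/3-approximation algorithm."""
    nz = set(entries)
    used = set()
    blocks = []
    for (i, j) in sorted(nz):
        cells = {(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)}
        if cells <= nz and not (cells & used):   # dense and nonoverlapping
            blocks.append((i, j))
            used |= cells
    return blocks

# 4x4 matrix whose top-left and bottom-right 2x2 blocks are dense.
nonzeros = [(0, 0), (0, 1), (1, 0), (1, 1),
            (2, 2), (2, 3), (3, 2), (3, 3), (0, 3)]
print(greedy_2x2_blocks(nonzeros))  # [(0, 0), (2, 2)]
```

The greedy scan is linear in the number of nonzeros, like the paper's algorithm, but a first-come choice can block a better pairing, which is where the approximation analysis comes in.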

  9. Local structure preserving sparse coding for infrared target recognition

    PubMed Central

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2017-01-01

Sparse coding performs well in image classification. However, robust target recognition requires many comprehensive template images, and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of their anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which needs just several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in target detection under scene, shape and occlusion variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824

  10. Local structure preserving sparse coding for infrared target recognition.

    PubMed

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2017-01-01

Sparse coding performs well in image classification. However, robust target recognition requires many comprehensive template images, and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of their anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which needs just several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in target detection under scene, shape and occlusion variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions.

  11. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  13. The increase in animal mortality risk following exposure to sparsely ionizing radiation is not linear quadratic with dose

    DOE PAGES

    Haley, Benjamin M.; Paunesku, Tatjana; Grdina, David J.; ...

    2015-12-09

The US government regulates allowable radiation exposures relying, in large part, on the seventh report from the committee to estimate the Biological Effects of Ionizing Radiation (BEIR VII), which estimated that most contemporary exposures, protracted or low-dose, carry 1.5-fold less risk of carcinogenesis and mortality per Gy than the acute exposures of atomic bomb survivors. This correction is known as the dose and dose rate effectiveness factor for the life span study of atomic bomb survivors (DDREFLSS). It was calculated by applying a linear-quadratic dose response model to data from Japanese atomic bomb survivors and a limited number of animal studies.
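Under the linear-quadratic model the DDREF follows directly from the fitted coefficients; a one-line sketch with purely illustrative parameters (not BEIR VII's fitted values):

```python
# Linear-quadratic excess risk: R(D) = alpha*D + beta*D^2.
# The DDREF compares risk per Gy of an acute dose against the linear
# (low-dose / low-dose-rate) component alone.
def ddref(alpha, beta, acute_dose):
    return (alpha * acute_dose + beta * acute_dose ** 2) / (alpha * acute_dose)

# Illustrative coefficients chosen so an acute 1 Gy dose yields DDREF = 1.5,
# matching the factor quoted in the abstract.
print(ddref(alpha=0.5, beta=0.25, acute_dose=1.0))  # 1.5
```

The abstract's point is precisely that the mortality data do not follow this linear-quadratic form, so a DDREF derived from it is suspect.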

  14. The increase in animal mortality risk following exposure to sparsely ionizing radiation is not linear quadratic with dose

    SciTech Connect

    Haley, Benjamin M.; Paunesku, Tatjana; Grdina, David J.; Woloschak, Gayle E.; Aravindan, Natarajan

    2015-12-09

The US government regulates allowable radiation exposures relying, in large part, on the seventh report from the committee to estimate the Biological Effects of Ionizing Radiation (BEIR VII), which estimated that most contemporary exposures, protracted or low-dose, carry 1.5-fold less risk of carcinogenesis and mortality per Gy than the acute exposures of atomic bomb survivors. This correction is known as the dose and dose rate effectiveness factor for the life span study of atomic bomb survivors (DDREFLSS). It was calculated by applying a linear-quadratic dose response model to data from Japanese atomic bomb survivors and a limited number of animal studies.

  15. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
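The core of the frequency-domain trick, reduced to a single dictionary filter where the linear system becomes elementwise division (a sketch of the general idea, not the patented algorithm):

```python
import numpy as np

# For one filter d, the ridge-regularized deconvolution subproblem
#   argmin_x 0.5*||d (*) x - s||^2 + 0.5*rho*||x||^2,  (*) = circular convolution,
# diagonalizes under the DFT, so the "linear solve" is elementwise division.
# With M filters the system couples across filters but stays cheap per frequency.
rng = np.random.default_rng(0)
N = 256
d = np.zeros(N); d[:8] = rng.normal(size=8)                # short filter, zero-padded
x_true = np.zeros(N); x_true[rng.choice(N, 5, replace=False)] = 3.0
s = np.fft.ifft(np.fft.fft(d) * np.fft.fft(x_true)).real   # observed signal

rho = 1e-6
D, S = np.fft.fft(d), np.fft.fft(s)
x_hat = np.fft.ifft(np.conj(D) * S / (np.abs(D) ** 2 + rho)).real
print(np.max(np.abs(x_hat - x_true)))                      # small recovery error
```

Each FFT costs O(N log N) and the per-frequency solve is O(1) here, which is the source of the O(MN log N) scaling quoted in the abstract.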

  16. Process Setting through General Linear Model and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Senjuntichai, Angsumalin

    2010-10-01

The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, with regression analysis, the sealing temperature and the temperatures of the upper and lower crimpers are found to be the significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. With the general linear model (GLM), the suggested values for the sealing temperature and the temperatures of the upper and lower crimpers are 185, 85 and 85°C, respectively, while the response surface method (RSM) provides the optimal process conditions at 186, 89 and 88°C. Because the two methods make different assumptions about the relationship between the percentage of defectives and the three temperature parameters, their suggested conditions are slightly different. Fortunately, the estimated percentage of defectives at 5.51% under the GLM process condition and the predicted percentage of defectives at 4.62% under the RSM process condition are not significantly different. At the 95% confidence level, however, the percentage of defectives under the RSM condition can be much lower, approximately 2.16%, than under the GLM condition, in accordance with its wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.
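The RSM step of fitting a quadratic and locating its stationary point can be sketched in one dimension with made-up defect data (the study's actual design has three factors and twenty-seven runs):

```python
import numpy as np

# Hypothetical single-factor illustration: percentage of defectives measured
# at three sealing temperatures, fitted with a quadratic response surface.
temps = np.array([180.0, 185.0, 190.0])
defects = np.array([7.2, 5.5, 6.1])

coeffs = np.polyfit(temps, defects, 2)        # c2*t^2 + c1*t + c0
t_opt = -coeffs[1] / (2 * coeffs[0])          # vertex of the fitted parabola
print(round(t_opt, 1))                        # ≈ 186.2
```

With three factors the same idea applies to a full second-order polynomial, and the stationary point is found by solving the gradient system instead of a single vertex formula.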

  17. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
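As a point of reference for the non-spherical GLM discussed above, the ML estimator of the coefficients given a known error covariance V is the generalized least-squares solution. A minimal numpy sketch (illustrative only, not the paper's iterative VB/ReML algorithms, which also estimate V):

```python
import numpy as np

def gls_beta(X, y, V):
    """ML / generalized least-squares estimate for y = X beta + e, e ~ N(0, V):
    beta_hat = (X^T V^{-1} X)^{-1} X^T V^{-1} y."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
```

With V proportional to the identity (spherical errors) this reduces to ordinary least squares, which is the usual sanity check for an implementation.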

  18. Generalized linear model for estimation of missing daily rainfall data

    NASA Astrophysics Data System (ADS)

    Rahman, Nurul Aishah; Deni, Sayang Mohd; Ramli, Norazan Mohamed

    2017-04-01

The analysis of rainfall data with no missingness is vital in various applications, including climatological, hydrological and meteorological studies. The issue of missing data is a serious concern since it could introduce bias and lead to misleading conclusions. In this study, five imputation methods, namely simple arithmetic averaging, the normal ratio method, inverse distance weighting, correlation coefficient weighting and the geographical coordinate method, were used to estimate the missing data. However, these imputation methods ignore the seasonality in the rainfall dataset, which, if accounted for, could give more reliable estimates. This study therefore aims to estimate the missing values in daily rainfall data by using a generalized linear model with gamma and Fourier series as the link function and smoothing technique, respectively. Forty years of daily rainfall data, for the period from 1975 until 2014, from seven stations in the Kelantan region were selected for the analysis. The findings indicated that the imputation methods could provide more accurate estimates, based on the least mean absolute error, root mean squared error and coefficient of variation of the root mean squared error, when seasonality in the dataset is considered.
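Of the comparison methods, inverse distance weighting is the simplest to state: the missing value is a distance-weighted mean of neighbouring station values. A small illustrative sketch (station coordinates and the power parameter are placeholders):

```python
import numpy as np

def idw_estimate(target_xy, station_xy, station_vals, power=2.0):
    """Estimate a missing value at target_xy as an inverse-distance-weighted
    mean of neighbouring station values (weights proportional to 1/d^power)."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0):                  # target coincides with a station
        return float(station_vals[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * station_vals) / np.sum(w))
```

The other weighting schemes in the comparison differ only in how the weights are formed (e.g., from inter-station correlations rather than distances).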

  19. Elastic-net regularization versus ℓ 1-regularization for linear inverse problems with quasi-sparse solutions

    NASA Astrophysics Data System (ADS)

    Chen, De-Han; Hofmann, Bernd; Zou, Jun

    2017-01-01

We consider the ill-posed operator equation Ax = y with an injective and bounded linear operator A mapping between ℓ² and a Hilbert space Y, possessing the unique solution x† = {x†_k}_{k=1}^∞. For the cases where sparsity x† ∈ ℓ⁰ is expected but often slightly violated in practice, we investigate, in comparison with ℓ¹-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the ℓ¹-norm and the squared ℓ²-norm, under the assumption that x† ∈ ℓ¹. Two positive parameters occur in this approach: the weight parameter η and the regularization parameter as the multiplier of the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in ℓ¹-regularization. Based on the variational inequality approach for the description of the solution smoothness with respect to the forward operator A, and exploiting the method of approximate source conditions, we present results estimating the rate of convergence for the elastic-net regularization. The resulting rate function contains the rate of decay x†_k → 0 as k → ∞ and the classical smoothness properties of x† as an element of ℓ².
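To make the two-parameter penalty concrete, a finite-dimensional elastic-net problem can be minimised by proximal gradient descent (ISTA): a gradient step on the smooth part followed by soft-thresholding for the ℓ¹ part. This is an illustrative stand-in, not the paper's infinite-dimensional analysis, and the parameterization alpha*(eta*||x||_1 + (1-eta)*||x||_2^2) is one common convention assumed here:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_ista(A, y, alpha, eta, n_iter=3000):
    """Minimise 0.5*||Ax - y||^2 + alpha*(eta*||x||_1 + (1 - eta)*||x||_2^2)
    by ISTA: the l2 penalty joins the smooth term, the l1 penalty is handled
    by its proximal operator (soft-thresholding)."""
    L = np.linalg.norm(A, 2) ** 2 + 2 * alpha * (1 - eta)   # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + 2 * alpha * (1 - eta) * x
        x = soft_threshold(x - grad / L, alpha * eta / L)
    return x
```

Setting eta = 1 recovers plain ℓ¹ (Lasso-type) regularization, and eta = 0 recovers ridge regression, which makes the two limiting cases easy to check numerically.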

  20. Sparse linear modeling of next-generation mRNA sequencing (RNA-Seq) data for isoform discovery and abundance estimation.

    PubMed

    Li, Jingyi Jessica; Jiang, Ci-Ren; Brown, James B; Huang, Haiyan; Bickel, Peter J

    2011-12-13

Since the inception of next-generation mRNA sequencing (RNA-Seq) technology, various attempts have been made to utilize RNA-Seq data in assembling full-length mRNA isoforms de novo and estimating abundance of isoforms. However, for genes with more than a few exons, the problem tends to be challenging and often involves identifiability issues in statistical modeling. We have developed a statistical method called "sparse linear modeling of RNA-Seq data for isoform discovery and abundance estimation" (SLIDE) that takes exon boundaries and RNA-Seq data as input to discern the set of mRNA isoforms that are most likely to be present in an RNA-Seq sample. SLIDE is based on a linear model with a design matrix that models the sampling probability of RNA-Seq reads from different mRNA isoforms. To tackle the model unidentifiability issue, SLIDE uses a modified Lasso procedure for parameter estimation. Compared with deterministic isoform assembly algorithms (e.g., Cufflinks), SLIDE considers the stochastic aspects of RNA-Seq reads in exons from different isoforms and thus has increased power in detecting more novel isoforms. Another advantage of SLIDE is its flexibility of incorporating other transcriptomic data such as RACE, CAGE, and EST into its model to further increase isoform discovery accuracy. SLIDE can also work downstream of other RNA-Seq assembly algorithms to integrate newly discovered genes and exons. Besides isoform discovery, SLIDE sequentially uses the same linear model to estimate the abundance of discovered isoforms. Simulation and real data studies show that SLIDE performs as well as or better than major competitors in both isoform discovery and abundance estimation. The SLIDE software package is available at https://sites.google.com/site/jingyijli/SLIDE.zip.

  1. Sparse Matrix Software Catalog, Sparse Matrix Symposium 1982, Fairfield Glade, Tennessee, October 24-27, 1982,

    DTIC Science & Technology

    1982-10-27

sparse matrices as well as other areas. Contents (excerpt): 1. Operations on Sparse Matrices; 1.1 Multi...; 2.1.1 Nonsymmetric systems; 2.1.1.1 General sparse matrices; 2.1.2.1 General sparse matrices; 2.1.2.2 Band or profile forms.

  2. A general protocol to afford enantioenriched linear homoprenylic amines.

    PubMed

    Bosque, Irene; Foubelo, Francisco; Gonzalez-Gomez, Jose C

    2013-11-21

The reaction of a readily obtained chiral branched homoprenylammonium salt with a range of aldehydes, including aliphatic substrates, affords the corresponding linear isomers in good yields and enantioselectivities.

  3. Graphical tools for model selection in generalized linear models.

    PubMed

    Murray, K; Heritier, S; Müller, S

    2013-11-10

    Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and comparison of many different selection criteria. Specifically, we describe for logistic regression, how to visualize measures of description loss and of model complexity to facilitate the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable plots for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how these tools can assist the understanding of the model building process.
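The bootstrap stability check behind variable inclusion plots can be sketched generically; `fit_select` below is a hypothetical user-supplied selection routine (e.g., stepwise logistic regression under some criterion) that returns the indices of the selected variables:

```python
import numpy as np

def bootstrap_inclusion(X, y, fit_select, n_boot=200, seed=0):
    """Proportion of bootstrap resamples in which each variable is selected;
    these frequencies are what a variable inclusion plot displays."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)    # resample rows with replacement
        counts[np.asarray(fit_select(X[idx], y[idx]), dtype=int)] += 1
    return counts / n_boot
```

Variables whose inclusion frequency stays high across resamples are the stable ones the article advocates focusing on; plotting the frequencies against a penalty parameter gives the inclusion plot itself.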

  4. A general approach to mixed effects modeling of residual variances in generalized linear mixed models

    PubMed Central

    Kizilkaya, Kadir; Tempelman, Robert J

    2005-01-01

    We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data was generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567

  5. Connections between Generalizing and Justifying: Students' Reasoning with Linear Relationships

    ERIC Educational Resources Information Center

    Ellis, Amy B.

    2007-01-01

    Research investigating algebra students' abilities to generalize and justify suggests that they experience difficulty in creating and using appropriate generalizations and proofs. Although the field has documented students' errors, less is known about what students do understand to be general and convincing. This study examines the ways in which…

  6. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  7. On the Feasibility of a Generalized Linear Program

    DTIC Science & Technology

    1989-03-01

generalized linear program by applying the same algorithm to a "phase-one" problem without requiring that the initial basic feasible solution to the latter be non-degenerate.

  8. Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs

    ERIC Educational Resources Information Center

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…

  11. A Heuristic Ceiling Point Algorithm for General Integer Linear Programming

    DTIC Science & Technology

    1988-11-01

narrowly satisfies the i-th constraint: taking a unit step from x toward the i-th constraining hyperplane in a direction parallel to some coordinate...Business, Stanford University, Stanford, Calif., December 1964. Hillier, F., "Efficient Heuristic Procedures for Integer Linear Programming with an Interior

  12. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C.; Hastie, T.

    2002-01-01

An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.

  13. Development of a predictive model for lead, cadmium and fluorine soil-water partition coefficients using sparse multiple linear regression analysis.

    PubMed

    Nakamura, Kengo; Yasutaka, Tetsuo; Kuwatani, Tatsu; Komai, Takeshi

    2017-11-01

In this study, we applied sparse multiple linear regression (SMLR) analysis to clarify the relationships between soil properties and adsorption characteristics for a range of soils across Japan and to identify easily obtained physical and chemical soil properties that could be used to predict the K and n values of cadmium, lead and fluorine. A model was first constructed that can easily predict the K and n values from nine soil parameters (pH, cation exchange capacity, specific surface area, total carbon, soil organic matter from loss on ignition, water holding capacity, and the ratios of sand, silt and clay). The K and n values of cadmium, lead and fluorine of 17 soil samples were used to verify the SMLR models by the root mean square error values obtained from 512 combinations of soil parameters. The SMLR analysis indicated that fluorine adsorption to soil may be associated with organic matter, whereas cadmium or lead adsorption to soil is more likely to be influenced by soil pH and loss on ignition (IL). We found that an accurate K value can be predicted from more than three soil parameters for most soils. Approximately 65% of the predicted values were between 33 and 300% of their measured values for the K value; 76% of the predicted values were within ±30% of their measured values for the n value. Our findings suggest that the adsorption properties of lead, cadmium and fluorine to soil can be predicted from soil physical and chemical properties using the presented models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. IV. Linear-scaling second-order explicitly correlated energy with pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Pavošević, Fabijan; Pinski, Peter; Riplinger, Christoph; Neese, Frank; Valeev, Edward F.

    2016-04-01

We present a formulation of the explicitly correlated second-order Møller-Plesset (MP2-F12) energy in which all nontrivial post-mean-field steps are formulated with linear computational complexity in system size. The two key ideas are the use of pair-natural orbitals for compact representation of wave function amplitudes and the use of domain approximation to impose the block sparsity. This development utilizes the concepts for sparse representation of tensors described in the context of the domain based local pair-natural orbital-MP2 (DLPNO-MP2) method by us recently [Pinski et al., J. Chem. Phys. 143, 034108 (2015)]. Novel developments reported here include the use of domains not only for the projected atomic orbitals, but also for the complementary auxiliary basis set (CABS) used to approximate the three- and four-electron integrals of the F12 theory, and a simplification of the standard B intermediate of the F12 theory that avoids computation of four-index two-electron integrals that involve two CABS indices. For quasi-1-dimensional systems (n-alkanes), the O(N) DLPNO-MP2-F12 method becomes less expensive than the conventional O(N^5) MP2-F12 for n between 10 and 15, for double- and triple-zeta basis sets; for the largest alkane, C200H402, in the def2-TZVP basis, the observed computational complexity is ~N^1.6, largely due to the cubic cost of computing the mean-field operators. The method reproduces the canonical MP2-F12 energy with high precision: 99.9% of the canonical correlation energy is recovered with the default truncation parameters. Although its cost is significantly higher than that of the DLPNO-MP2 method, the cost increase is compensated by the great reduction of the basis set error due to explicit correlation.

  15. User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    2000-01-01

PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real, sparse direct matrix solvers to factor and solve the real matrices. The solution vector is then converted back to complex numbers. Though this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are given to aid users in integrating PCSMS routines into their own codes.
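The complex-to-real conversion PCSMS performs can be illustrated with a dense stand-in for the real solver (the actual code uses SGI's sparse direct routines; `np.linalg.solve` here is only a placeholder, and the 2N-by-2N block form is the standard equivalent-real formulation):

```python
import numpy as np

def solve_complex_via_real(A, b):
    """Solve the complex system A z = b using only a real solver.

    (Ar + i*Ai)(x + i*y) = br + i*bi  becomes the 2N x 2N real system
        [ Ar  -Ai ] [x]   [br]
        [ Ai   Ar ] [y] = [bi]
    after which z = x + i*y is reassembled.
    """
    Ar, Ai = A.real, A.imag
    M = np.block([[Ar, -Ai], [Ai, Ar]])
    rhs = np.concatenate([b.real, b.imag])
    xy = np.linalg.solve(M, rhs)        # stand-in for a real sparse direct solver
    n = A.shape[0]
    return xy[:n] + 1j * xy[n:]
```

The trade-off this formulation makes is that the real system is four times larger in nonzero count, which is exactly why a utility layer like PCSMS is convenient: the conversion and reassembly are mechanical and solver-independent.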

  16. Analysis and Regulation of Nonlinear and Generalized Linear Systems.

    DTIC Science & Technology

    1985-09-06

But this intuition is based on a linearized analysis, and may well be too conservative - or even totally inappropriate - for a particular (global...in the field of stochastic estimation. Given a time series, it is often possible to compute sufficient statistics of the associated process...and dynamically updating sufficient statistics with finite resources had received almost no attention in the literature, and turns out to be

  17. Generalizing a categorization of students' interpretations of linear kinematics graphs

    NASA Astrophysics Data System (ADS)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-06-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  18. Local Sparse Structure Denoising for Low-Light-Level Image.

    PubMed

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2015-12-01

Sparse and redundant representations perform well in image denoising. However, sparsity-based methods fail to denoise low-light-level (LLL) images because of heavy and complex noise. They consider sparsity on image patches independently and tend to lose the texture structures. To suppress noise and maintain textures simultaneously, it is necessary to embed noise-invariant features into the sparse decomposition process. We therefore used a local structure preserving sparse coding (LSPSc) formulation to explore the local sparse structures (both the sparsity and the local structure) in an image. It was found that, with the introduction of a spatial local structure constraint into the general sparse coding algorithm, LSPSc could improve the robustness of sparse representation for patches in serious noise. We further used a kernel LSPSc (K-LSPSc) formulation, which extends LSPSc into the kernel space to weaken the influence of the linear structure constraint in nonlinear data. Based on the robust LSPSc and K-LSPSc algorithms, we constructed a local sparse structure denoising (LSSD) model for LLL images, which was demonstrated to give high performance in natural LLL image denoising, indicating that both the LSPSc- and K-LSPSc-based LSSD models have the stable property of noise inhibition and texture detail preservation.

  19. Generalized linear IgA dermatosis with palmar involvement.

    PubMed

    Norris, Ivy N; Haeberle, M Tye; Callen, Jeffrey P; Malone, Janine C

    2015-09-17

    Linear IgA bullous dermatosis (LABD) is a sub-epidermal blistering disorder characterized by deposition of IgA along the basement membrane zone (BMZ) as detected by immunofluorescence microscopy. The diagnosis is made by clinicopathologic correlation with immunofluorescence confirmation. Differentiation from other bullous dermatoses is important because therapeutic measures differ. Prompt initiation of the appropriate therapies can have a major impact on outcomes. We present three cases with prominent palmar involvement to alert the clinician of this potential physical exam finding and to consider LABD in the right context.

  20. Commensurate Priors for Incorporating Historical Information in Clinical Trials Using General and Generalized Linear Models.

    PubMed

    Hobbs, Brian P; Sargent, Daniel J; Carlin, Bradley P

    2012-08-28

    Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs than those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model.

  2. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  3. A study of the linear free energy model for DNA structures using the generalized Hamiltonian formalism

    SciTech Connect

    Yavari, M.

    2016-06-15

    We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.

  4. Using Parallel Banded Linear System Solvers in Generalized Eigenvalue Problems

    DTIC Science & Technology

    1993-09-01

systems. The PPT algorithm is similar to an algorithm introduced by Lawrie and Sameh in [18]. The PDD algorithm is a variant of PPT which uses the fa-t...AND L. JOHNSSON, Solving banded systems on a parallel processor, Parallel Comput., 5 (1987), pp. 219-246. [10] J. J. DONGARRA AND A. SAMEH, On some...symmetric generalized matrix eigenvalue problem, SIAM J. Matrix Anal. Appl., 14 (1993). [18] D. H. LAWRIE AND A. H. SAMEH, The computation and

  5. Linear Transformations, Projection Operators and Generalized Inverses; A Geometric Approach

    DTIC Science & Technology

    1988-03-01

all direct complements of a and k respectively. Proof. From the representation (2.6) ... using (1.13) ... closed range on Hilbert spaces. 6. REFERENCES 1. Langenhop, C. E. (1967). On generalized inverse of matrices. SIAM J. Appl. Math.

  6. Large deformation image classification using generalized locality-constrained linear coding.

    PubMed

    Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of the initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.

  7. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with the area under the receiver operating curves (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROC's of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were

  8. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  10. Computer analysis of general linear networks using digraphs.

    NASA Technical Reports Server (NTRS)

    Mcclenahan, J. O.; Chan, S.-P.

    1972-01-01

    Investigation of the application of digraphs in analyzing general electronic networks, and development of a computer program based on a particular digraph method developed by Chen. The Chen digraph method is a topological method for solution of networks and serves as a shortcut when hand calculations are required. The advantage offered by this method of analysis is that the results are in symbolic form. It is limited, however, by the size of network that may be handled. Usually hand calculations become too tedious for networks larger than about five nodes, depending on how many elements the network contains. Direct determinant expansion for a five-node network is a very tedious process also.

  12. Threaded Operations on Sparse Matrices

    SciTech Connect

    Sneed, Brett

    2015-09-01

    We investigate the use of sparse matrices and OpenMP multi-threading on linear algebra operations involving them. Several sparse matrix data structures are presented. Implementation of the multi-threading primarily occurs in the level one and two BLAS functions used within the four algorithms investigated: the Power Method, Conjugate Gradient, Biconjugate Gradient, and Jacobi's Method. The benefits of launching threads once per high-level algorithm are explored.
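
    The row loop of a compressed-sparse-row (CSR) mat-vec is the natural target for an OpenMP parallel-for; the sketch below shows that kernel and the Power Method built on it, in plain Python as a hypothetical stand-in for the report's C/OpenMP code, not its actual implementation.

```python
# Illustrative sketch: CSR mat-vec (the level-2 BLAS kernel one would
# thread with OpenMP) and the Power Method built on top of it.
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a CSR-stored matrix A; the outer row loop is where a
    C implementation would place an OpenMP 'parallel for'."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def power_method(data, indices, indptr, n, iters=200):
    """Estimate the dominant eigenvalue via repeated normalized mat-vecs."""
    x = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(iters):
        y = csr_matvec(data, indices, indptr, x)
        lam = x @ y                     # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)
    return lam

# 3x3 sparse matrix diag(2, 3, 5) in CSR form; dominant eigenvalue is 5.
data = np.array([2.0, 3.0, 5.0])
indices = np.array([0, 1, 2])
indptr = np.array([0, 1, 2, 3])
lam = power_method(data, indices, indptr, 3)
```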

  13. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.

    PubMed

    Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although many pieces of research have been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach to statistics is gaining popularity in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up the priors for the 2014 model.
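
    As a toy illustration of the informative-prior idea only (not the paper's GLMM, and with made-up numbers), a Beta prior built from earlier survey years can be updated with a new year's binomial counts via conjugacy:

```python
# Toy sketch: encoding earlier survey years as an informative Beta prior
# for a prevalence proportion, then updating with new counts.
# All numbers are hypothetical; the paper fits GLMMs, not this model.
prior_a, prior_b = 30.0, 970.0       # pseudo-counts standing in for 2011-2013
k, n = 12, 400                       # hypothetical new-year data: 12 of 400

post_a, post_b = prior_a + k, prior_b + (n - k)
posterior_mean = post_a / (post_a + post_b)

# Non-informative comparison: a flat Beta(1, 1) prior.
flat_mean = (1 + k) / (2 + n)
```

    The informative posterior mean is pulled toward the prior's rate, while the flat-prior mean tracks the new data alone; with strong prior pseudo-counts the two can differ noticeably.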

  14. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors

    PubMed Central

    Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world’s deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although many pieces of research have been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach to statistics is gaining popularity in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up the priors for the 2014 model. PMID:28257437

  15. Impact of the implementation of MPI point-to-point communications on the performance of two general sparse solvers

    SciTech Connect

    Amestoy, Patrick R.; Duff, Iain S.; L'Excellent, Jean-Yves; Li, Xiaoye S.

    2001-10-10

    We examine the mechanics of the send and receive mechanism of MPI and in particular how we can implement message passing in a robust way so that our performance is not significantly affected by changes to the MPI system. This leads us to using the Isend/Irecv protocol which will entail sometimes significant algorithmic changes. We discuss this within the context of two different algorithms for sparse Gaussian elimination that we have parallelized. One is a multifrontal solver called MUMPS, the other is a supernodal solver called SuperLU. Both algorithms are difficult to parallelize on distributed memory machines. Our initial strategies were based on simple MPI point-to-point communication primitives. With such approaches, the parallel performance of both codes are very sensitive to the MPI implementation, the way MPI internal buffers are used in particular. We then modified our codes to use more sophisticated nonblocking versions of MPI communication. This significantly improved the performance robustness (independent of the MPI buffering mechanism) and scalability, but at the cost of increased code complexity.

  16. Computational techniques and data structures of the sparse underdetermined systems with using graph theory

    NASA Astrophysics Data System (ADS)

    Pilipchuk, L. A.; Pilipchuk, A. S.

    2016-12-01

    To construct solutions of sparse linear systems we propose effective methods, technologies and their implementation in Wolfram Mathematica. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions, and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure. In addition, such systems arise in estimating traffic in a generalized graph or multigraph on its unobservable part. For computing each vector of the basis solution space with a linear estimate in the worst case, we propose effective algorithms and data structures for the case when the support of the graph or multigraph for the sparse system contains cycles.
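
    As a minimal dense stand-in for the abstract's graph-based machinery, a basis of the solution space of an underdetermined system can be computed from the SVD (the incidence-like matrix below is hypothetical):

```python
# Sketch: basis of the null space of an underdetermined system A x = 0.
# A dense SVD stand-in, not the paper's graph/multigraph data structures.
import numpy as np

def nullspace_basis(A, tol=1e-10):
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T           # columns span the null space of A

# Incidence-like matrix of a small network: 3 equations, 5 unknowns,
# so the solution space is 2-dimensional.
A = np.array([[1.0, -1.0,  0.0,  0.0, 0.0],
              [0.0,  1.0, -1.0,  1.0, 0.0],
              [0.0,  0.0,  0.0, -1.0, 1.0]])
N = nullspace_basis(A)
```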

  17. A note on rank reduction in sparse multivariate regression.

    PubMed

    Chen, Kun; Chan, Kung-Sik

    A reduced-rank regression with sparse singular value decomposition (RSSVD) approach was proposed by Chen et al. for conducting variable selection in a reduced-rank model. To jointly model the multivariate response, the method efficiently constructs a prespecified number of latent variables as some sparse linear combinations of the predictors. Here, we generalize the method to also perform rank reduction, and enable its usage in reduced-rank vector autoregressive (VAR) modeling to perform automatic rank determination and order selection. We show that in the context of stationary time-series data, the generalized approach correctly identifies both the model rank and the sparse dependence structure between the multivariate response and the predictors, with probability one asymptotically. We demonstrate the efficacy of the proposed method by simulations and analyzing a macro-economical multivariate time series using a reduced-rank VAR model.
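
    The non-sparse core that RSSVD builds on can be sketched as classical reduced-rank regression: fit OLS, then project the coefficient matrix onto the top singular directions of the fitted values. All data below are synthetic, and this omits the sparsity penalty entirely.

```python
# Sketch of plain reduced-rank regression (no sparsity): OLS followed by
# projection onto the top-r right singular vectors of the fitted values.
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 6, 4, 2
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))  # rank 2
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
V_r = Vt[:r].T                       # top-r right singular vectors (q x r)
B_rrr = B_ols @ V_r @ V_r.T          # rank-r coefficient estimate
```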

  18. Comparison of Real-Time and Linear-Response Time-Dependent Density Functional Theories for Molecular Chromophores Ranging from Sparse to High Densities of States.

    PubMed

    Tussupbayev, Samat; Govind, Niranjan; Lopata, Kenneth; Cramer, Christopher J

    2015-03-10

    We assess the performance of real-time time-dependent density functional theory (RT-TDDFT) for the calculation of absorption spectra of 12 organic dye molecules relevant to photovoltaics and dye-sensitized solar cells with 8 exchange-correlation functionals (3 traditional, 3 global hybrids, and 2 range-separated hybrids). We compare the calculations with traditional linear-response (LR) TDDFT and experimental spectra. In addition, we demonstrate the efficacy of the RT-TDDFT approach to calculate wide absorption spectra of two large chromophores relevant to photovoltaics and molecular switches. RT-TDDFT generally requires longer simulation times, compared to LR-TDDFT, for absorption spectra of small systems. However, it becomes more effective for the calculation of wide absorption spectra of large molecular complexes and systems with very high densities of states.

  19. Quasi-periodic solutions for quasi-linear generalized KdV equations

    NASA Astrophysics Data System (ADS)

    Giuliani, Filippo

    2017-05-01

    We prove the existence of Cantor families of small amplitude, linearly stable, quasi-periodic solutions of quasi-linear autonomous Hamiltonian generalized KdV equations. We consider the most general quasi-linear quadratic nonlinearity. The proof is based on an iterative Nash-Moser algorithm. To initialize this scheme, we need to perform a bifurcation analysis taking into account the strongly perturbative effects of the nonlinearity near the origin. In particular, we implement a weak version of the Birkhoff normal form method. The inversion of the linearized operators at each step of the iteration is achieved by pseudo-differential techniques, linear Birkhoff normal form algorithms and a linear KAM reducibility scheme.

  20. The Generalized Logit-Linear Item Response Model for Binary-Designed Items

    ERIC Educational Resources Information Center

    Revuelta, Javier

    2008-01-01

    This paper introduces the generalized logit-linear item response model (GLLIRM), which represents the item-solving process as a series of dichotomous operations or steps. The GLLIRM assumes that the probability function of the item response is a logistic function of a linear composite of basic parameters which describe the operations, and the…
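
    The model's core assumption can be sketched directly: the response probability is a logistic function of a linear composite of basic step parameters. The weights and parameters below are hypothetical, not the GLLIRM's exact parameterization.

```python
# Sketch: response probability as a logistic function of a linear
# composite of step parameters (hypothetical parameterization).
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def item_prob(theta, weights, step_params):
    """P(correct) for ability theta; the composite linearly combines the
    basic parameters describing the dichotomous operations."""
    composite = sum(w * (theta - b) for w, b in zip(weights, step_params))
    return logistic(composite)

p = item_prob(theta=0.0, weights=[1.0, 0.5], step_params=[0.0, 0.0])
```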

  1. Generalized linear porokeratosis: a rare entity with excellent response to acitretin.

    PubMed

    Garg, Taru; Ramchander; Varghese, Bincy; Barara, Meenu; Nangia, Anita

    2011-05-15

    Linear porokeratosis is a rare disorder of keratinization that usually presents at birth. We report a 17-year-old male with generalized linear porokeratosis, a very rare variant of porokeratosis, with extensive involvement of the trunk and extremities along with nail and genital involvement. The patient was treated with oral acitretin with excellent clinical response.

  2. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-04-03

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study.

  3. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
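
    Post-reconstruction linear Patlak estimation, the baseline the paper improves on, reduces to a straight-line fit of C_T/C_p against the "Patlak time" ∫C_p/C_p, with slope Ki and intercept V. A self-consistent synthetic sketch (toy input function, not a real tracer):

```python
# Sketch of post-reconstruction linear Patlak estimation on noise-free
# synthetic data generated from the standard Patlak model itself.
import numpy as np

Ki_true, V_true = 0.05, 0.3
t = np.linspace(1.0, 60.0, 30)             # minutes
Cp = np.exp(-0.1 * t) + 0.2                # toy plasma input function
int_Cp = np.cumsum(Cp) * (t[1] - t[0])     # crude running integral
CT = Ki_true * int_Cp + V_true * Cp        # tissue curve from the model

x = int_Cp / Cp                            # "Patlak time"
y = CT / Cp
Ki_est, V_est = np.polyfit(x, y, 1)        # slope = Ki, intercept = V
```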

  4. Generalized linear mixed models can detect unimodal species-environment relationships.

    PubMed

    Jamil, Tahira; Ter Braak, Cajo J F

    2013-01-01

    Niche theory predicts that species occurrence and abundance show non-linear, unimodal relationships with respect to environmental gradients. Unimodal models, such as the Gaussian (logistic) model, are however more difficult to fit to data than linear ones, particularly in a multi-species context in ordination, with trait modulated response and when species phylogeny and species traits must be taken into account. Adding squared terms to a linear model is a possibility but gives uninterpretable parameters. This paper explains why and when generalized linear mixed models, even without squared terms, can effectively analyse unimodal data and also presents a graphical tool and statistical test to test for unimodal response while fitting just the generalized linear mixed model. The R-code for this is supplied in Supplemental Information 1.

  5. Maladaptive behavioral consequences of conditioned fear-generalization: a pronounced, yet sparsely studied, feature of anxiety pathology.

    PubMed

    van Meurs, Brian; Wiggert, Nicole; Wicker, Isaac; Lissek, Shmuel

    2014-06-01

    Fear-conditioning experiments in the anxiety disorders focus almost exclusively on passive-emotional, Pavlovian conditioning, rather than active-behavioral, instrumental conditioning. Paradigms eliciting both types of conditioning are needed to study maladaptive, instrumental behaviors resulting from Pavlovian abnormalities found in clinical anxiety. One such Pavlovian abnormality is generalization of fear from a conditioned danger-cue (CS+) to resembling stimuli. Though lab-based findings repeatedly link overgeneralized Pavlovian-fear to clinical anxiety, no study assesses the degree to which Pavlovian overgeneralization corresponds with maladaptive, overgeneralized instrumental-avoidance. The current effort fills this gap by validating a novel fear-potentiated startle paradigm including Pavlovian and instrumental components. The paradigm is embedded in a computer game during which shapes appear on the screen. One shape paired with electric-shock serves as CS+, and other resembling shapes, presented in the absence of shock, serve as generalization stimuli (GSs). During the game, participants choose whether to behaviorally avoid shock at the cost of poorer performance. Avoidance during CS+ is considered adaptive because shock is a real possibility. By contrast, avoidance during GSs is considered maladaptive because shock is not a realistic prospect and thus unnecessarily compromises performance. Results indicate significant Pavlovian-instrumental relations, with greater generalization of Pavlovian fear associated with overgeneralization of maladaptive instrumental-avoidance.

  6. P-SPARSLIB: A parallel sparse iterative solution package

    SciTech Connect

    Saad, Y.

    1994-12-31

    Iterative methods are gaining popularity in engineering and sciences at a time when the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable on a variety of platforms, including SIMD environments and shared memory environments.

  7. Group Sparse Additive Models

    PubMed Central

    Yin, Junming; Chen, Xi; Xing, Eric P.

    2016-01-01

    We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the ℓ1/ℓ2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.
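
    The group-level functional sparsity comes from an ℓ1/ℓ2-type penalty whose proximal step shrinks or zeroes a whole group of coefficients at once. A minimal sketch of that group soft-thresholding operator (the scalar-coefficient case, not GroupSpAM's Hilbert-space version):

```python
# Sketch: proximal operator of lam * ||v||_2 for one coefficient group.
# Either the whole group is shrunk jointly, or it is zeroed out together.
import numpy as np

def group_soft_threshold(v, lam):
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

kept = group_soft_threshold(np.array([3.0, 4.0]), lam=1.0)    # norm 5 > 1
zeroed = group_soft_threshold(np.array([0.3, 0.4]), lam=1.0)  # norm 0.5 <= 1
```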

  8. On the Bohl and general exponents of the discrete time-varying linear system

    NASA Astrophysics Data System (ADS)

    Niezabitowski, Michał

    2014-12-01

    Many properties of dynamical systems may be characterized by certain numbers called characteristic exponents, the most important being the Lyapunov, Bohl and general exponents. In this paper we investigate relations between certain subtypes of the general exponents of discrete time-varying linear systems, namely the senior lower and the junior upper ones. The main contribution of the paper is to construct an example of a system whose senior lower exponent is strictly smaller than its junior upper general exponent.

  9. Flexible Approaches to Computing Mediated Effects in Generalized Linear Models: Generalized Estimating Equations and Bootstrapping

    ERIC Educational Resources Information Center

    Schluchter, Mark D.

    2008-01-01

    In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
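
    The bootstrap side of the article can be sketched in the simplest linear case, where the indirect effect is the product of the X→M slope and the M→Y slope. Plain OLS is used here rather than GEE, and all data are synthetic.

```python
# Sketch: percentile-bootstrap interval for the indirect effect a*b in a
# simple linear mediation model (OLS stand-in for the article's GEE).
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.standard_normal(n)
M = 0.6 * X + 0.5 * rng.standard_normal(n)             # a = 0.6
Y = 0.7 * M + 0.2 * X + 0.5 * rng.standard_normal(n)   # b = 0.7

def indirect(X, M, Y):
    a = np.polyfit(X, M, 1)[0]                 # X -> M slope
    Z = np.column_stack([M, X, np.ones(len(X))])
    b = np.linalg.lstsq(Z, Y, rcond=None)[0][0]  # M -> Y slope given X
    return a * b

boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)                # resample with replacement
    boot.append(indirect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])      # percentile 95% interval
```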

  11. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a series of linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  12. Optimal explicit strong-stability-preserving general linear methods : complete results.

    SciTech Connect

    Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.

    2009-03-03

    This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
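
    The best-known member of the SSP family the paper generalizes is the third-order Shu-Osher Runge-Kutta scheme, which advances the solution through convex combinations of forward-Euler steps:

```python
# Sketch: the classical third-order SSP Runge-Kutta scheme (Shu-Osher)
# for u' = f(u), written as convex combinations of Euler steps.
import math

def ssprk3_step(f, u, dt):
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

f = lambda u: -u            # test problem u' = -u, u(0) = 1
u, dt = 1.0, 0.01
for _ in range(100):        # integrate to t = 1; exact answer is e^{-1}
    u = ssprk3_step(f, u, dt)
```

    The convex-combination structure is exactly what makes the scheme strong-stability-preserving: any stability bound satisfied by forward Euler is inherited under a CFL-like time-step restriction.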

  13. Efficient, sparse biological network determination

    PubMed Central

    August, Elias; Papachristodoulou, Antonis

    2009-01-01

    Background: Determining the interaction topology of biological systems is a topic that currently attracts significant research interest. Typical models for such systems take the form of differential equations that involve polynomial and rational functions. Such nonlinear models make the problem of determining the connectivity of biochemical networks from time-series experimental data much harder. The use of linear dynamics and linearization techniques that have been proposed in the past can circumvent this, but the general problem of developing efficient algorithms for models that provide more accurate system descriptions remains open. Results: We present a network determination algorithm that can treat model descriptions with polynomial and rational functions and which does not make use of linearization. For this purpose, we make use of the observation that biochemical networks are in general 'sparse' and minimize the 1-norm of the decision variables (sum of weighted network connections) while constraints keep the error between data and the network dynamics small. The emphasis of our methodology is on determining the interconnection topology rather than the specific reaction constants, and it takes into account the necessary properties that a chemical reaction network should have, something that techniques based on linearization cannot. The problem can be formulated as a Linear Program, a convex optimization problem, for which efficient algorithms are available that can treat large data sets efficiently and account for uncertainties in data or model parameters. Conclusion: The presented methodology is able to predict with accuracy and efficiency the connectivity structure of a chemical reaction network with mass action kinetics and of a gene regulatory network from simulation data even if the dynamics of these systems are non-polynomial (rational) and uncertainties in the data are taken into account. It also produces a network structure that can explain the real experimental
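
    The abstract's 1-norm minimization would be posed as a Linear Program; as a dependency-light sketch, here is proximal-gradient descent (ISTA) on the closely related ℓ1-penalized least-squares surrogate, recovering a sparse connection vector from synthetic data:

```python
# Sketch: 1-norm-driven sparse recovery via ISTA on
#   min_w 0.5*||A w - b||^2 + lam*||w||_1
# (a numpy-only stand-in for the LP formulation in the abstract).
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.05, iters=5000):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = soft(w - (A.T @ (A @ w - b)) / L, lam / L)
    return w

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 40))          # 20 measurements, 40 candidate links
w_true = np.zeros(40)
w_true[[3, 17]] = [1.0, -2.0]              # only two true connections
b = A @ w_true
w = ista(A, b)
```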

  14. A generalized concordance correlation coefficient based on the variance components generalized linear mixed models for overdispersed count data.

    PubMed

    Carrasco, Josep L

    2010-09-01

    The classical concordance correlation coefficient (CCC) to measure agreement among a set of observers assumes data to be distributed as normal and a linear relationship between the mean and the subject and observer effects. Here, the CCC is generalized to afford any distribution from the exponential family by means of the generalized linear mixed models (GLMMs) theory and applied to the case of overdispersed count data. An example of CD34+ cell count data is provided to show the applicability of the procedure. In the latter case, different CCCs are defined and applied to the data by changing the GLMM that fits the data. A simulation study is carried out to explore the behavior of the procedure with a small and moderate sample size. © 2009, The International Biometric Society.
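
    The classical CCC that the paper generalizes has a closed form in the sample moments; a minimal sketch showing perfect agreement and the effect of a constant rater bias:

```python
# Sketch: classical concordance correlation coefficient between two raters.
import numpy as np

def ccc(x, y):
    mx, my = x.mean(), y.mean()
    sx, sy = x.var(), y.var()             # population (ddof=0) variances
    sxy = ((x - mx) * (y - my)).mean()    # covariance
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

x = np.array([10.0, 12.0, 14.0, 16.0])
perfect = ccc(x, x)          # identical raters: CCC = 1
shifted = ccc(x, x + 2.0)    # constant bias lowers agreement below 1
```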

  15. Linear and nonlinear associations between general intelligence and personality in Project TALENT.

    PubMed

    Major, Jason T; Johnson, Wendy; Deary, Ian J

    2014-04-01

    Research on the relations of personality traits to intelligence has primarily been concerned with linear associations. Yet, there are no a priori reasons why linear relations should be expected over nonlinear ones, which represent a much larger set of all possible associations. Using 2 techniques, quadratic and generalized additive models, we tested for linear and nonlinear associations of general intelligence (g) with 10 personality scales from Project TALENT (PT), a nationally representative sample of approximately 400,000 American high school students from 1960, divided into 4 grade samples (Flanagan et al., 1962). We departed from previous studies, including one with PT (Reeve, Meyer, & Bonaccio, 2006), by modeling latent quadratic effects directly, controlling the influence of the common factor in the personality scales, and assuming a direction of effect from g to personality. On the basis of the literature, we made 17 directional hypotheses for the linear and quadratic associations. Of these, 53% were supported in all 4 male grades and 58% in all 4 female grades. Quadratic associations explained substantive variance above and beyond linear effects (mean R² between 1.8% and 3.6%) for Sociability, Maturity, Vigor, and Leadership in males and Sociability, Maturity, and Tidiness in females; linear associations were predominant for other traits. We discuss how suited current theories of the personality-intelligence interface are to explain these associations, and how research on intellectually gifted samples may provide a unique way of understanding them. We conclude that nonlinear models can provide incremental detail regarding personality and intelligence associations.
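
    The incremental-variance comparison can be sketched on synthetic data with a genuine curvilinear component (variable names hypothetical): fit linear and linear-plus-quadratic models and compare R².

```python
# Sketch: incremental R^2 from adding a squared term, on synthetic data
# where the trait really does depend quadratically on g.
import numpy as np

rng = np.random.default_rng(3)
g = rng.standard_normal(1000)                       # stand-in for general intelligence
trait = 0.4 * g - 0.3 * g**2 + 0.5 * rng.standard_normal(1000)

def r2(X, y):
    X1 = np.column_stack([X, np.ones(len(y))])      # add intercept
    resid = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2_lin = r2(g[:, None], trait)
r2_quad = r2(np.column_stack([g, g**2]), trait)
incremental = r2_quad - r2_lin                      # variance beyond linear
```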

  16. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.

  17. Flexible sparse regularization

    NASA Astrophysics Data System (ADS)

    Lorenz, Dirk A.; Resmerita, Elena

    2017-01-01

    The seminal paper of Daubechies, Defrise, DeMol made clear that ℓp spaces with p ∈ [1, 2) and p-powers of the corresponding norms are appropriate settings for dealing with reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case p = 1 provides the best results in most situations compared to the cases p ∈ (1, 2). An extensive literature also gives great credit to using ℓp spaces with p ∈ (0, 1) together with the corresponding quasi-norms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, whether superlinear, linear or sublinear, the question of how to choose the exponent p has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization by varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the ℓ1 space, but are strictly convex, while the ℓ1-norm is just convex.
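
    Why the exponent p matters can be seen numerically: the p-th powers of the ℓp (quasi-)norms separate sparse from dense vectors more sharply as p shrinks, which is what drives sparsity-promoting regularization.

```python
# Sketch: p-th powers of lp (quasi-)norms for a sparse versus a dense
# vector with identical l2 energy; smaller p penalizes density more.
import numpy as np

def lp_power(x, p):
    return np.sum(np.abs(x) ** p)

sparse = np.array([1.0, 0.0, 0.0, 0.0])
dense = np.full(4, 0.5)                  # same l2 norm as `sparse`

# Ratio of the dense vector's penalty to the sparse vector's penalty:
ratios = {p: lp_power(dense, p) / lp_power(sparse, p) for p in (0.5, 1.0, 2.0)}
```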

  18. Prediction of formability for non-linear deformation history using generalized forming limit concept (GFLC)

    NASA Astrophysics Data System (ADS)

    Volk, Wolfram; Suh, Joungsik

    2013-12-01

    The prediction of formability is one of the most important tasks in sheet metal process simulation. The common criterion in industrial applications is the Forming Limit Curve (FLC). The big advantage of FLCs is the easy interpretation of simulation or measurement data in combination with an ISO standard for the experimental determination. However, conventional FLCs are limited to almost linear and unbroken strain paths, i.e. deformation histories with non-linear strain increments often lead to big differences in comparison to the prediction of the FLC. In this paper a phenomenological approach, the so-called Generalized Forming Limit Concept (GFLC), is introduced to predict localized necking on arbitrary deformation histories with an unlimited number of non-linear strain increments. The GFLC consists of the conventional FLC and an acceptable number of experiments with bi-linear deformation history. With the newly defined "Principle of Equivalent Pre-Forming", every deformation state built up of two linear strain increments can be transformed to a purely linear strain path that uses up the same amount of the material's formability. This procedure can be repeated as often as necessary. It therefore allows a robust and cost-effective analysis of incipient instability in Finite Element Analysis (FEA) for arbitrary deformation histories. In addition, the GFLC is fully backward compatible with the established FLC for purely linear strain paths.

  19. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  20. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…

  1. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  2. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  3. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  4. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  5. Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero

    2011-01-01

    This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…

  6. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.
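    The measurement-operator formalism behind this construction can be sketched compactly. The parametrization below is our illustration, not the paper's optical implementation: a one-parameter pair of diagonal Kraus operators interpolates between a weak measurement (eps near 0) and a projective one (eps = 1).

    ```python
    import math

    def measurement_pair(eps):
        """Diagonal entries of Kraus operators M+ and M- for a qubit
        measurement of tunable strength eps in [0, 1] (illustrative form).
        Completeness M+^dag M+ + M-^dag M- = I holds for every eps."""
        p = math.sqrt((1 + eps) / 2)
        q = math.sqrt((1 - eps) / 2)
        return (p, q), (q, p)

    def outcome_probabilities(state, eps):
        """Born-rule probabilities p(+/-) for the qubit state (a, b)
        written in the computational basis."""
        a, b = state
        (m0, m1), (n0, n1) = measurement_pair(eps)
        p_plus = (m0 * abs(a)) ** 2 + (m1 * abs(b)) ** 2
        p_minus = (n0 * abs(a)) ** 2 + (n1 * abs(b)) ** 2
        return p_plus, p_minus
    ```

    Because the pair is complete for every eps, the two outcome probabilities always sum to one, and eps = 1 recovers the projective measurement in the computational basis.
    
    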

  7. The linear stability of plane stagnation-point flow against general disturbances

    NASA Astrophysics Data System (ADS)

    Brattkus, K.; Davis, S. H.

    1991-02-01

    The linear-stability theory of plane stagnation-point flow against an infinite flat plate is re-examined. Disturbances are generalized from those of Goertler type to include other types of variations along the plate. It is shown that Hiemenz flow is linearly stable and that the Goertler-type modes are those that decay slowest. This work then rationalizes the use of such self-similar disturbances on Hiemenz flow and shows how questions of disturbance structure can be approached on other self-similar flows.

  8. The linear stability of plane stagnation-point flow against general disturbances

    NASA Technical Reports Server (NTRS)

    Brattkus, K.; Davis, S. H.

    1991-01-01

    The linear-stability theory of plane stagnation-point flow against an infinite flat plate is re-examined. Disturbances are generalized from those of Goertler type to include other types of variations along the plate. It is shown that Hiemenz flow is linearly stable and that the Goertler-type modes are those that decay slowest. This work then rationalizes the use of such self-similar disturbances on Hiemenz flow and shows how questions of disturbance structure can be approached on other self-similar flows.

  9. The linear stability of plane stagnation-point flow against general disturbances

    NASA Technical Reports Server (NTRS)

    Brattkus, K.; Davis, S. H.

    1991-01-01

    The linear-stability theory of plane stagnation-point flow against an infinite flat plate is re-examined. Disturbances are generalized from those of Goertler type to include other types of variations along the plate. It is shown that Hiemenz flow is linearly stable and that the Goertler-type modes are those that decay slowest. This work then rationalizes the use of such self-similar disturbances on Hiemenz flow and shows how questions of disturbance structure can be approached on other self-similar flows.

  10. Preliminary results in implementing a model of the world economy on the CYBER 205: A case of large sparse nonsymmetric linear equations

    NASA Technical Reports Server (NTRS)

    Szyld, D. B.

    1984-01-01

    A brief description of the Model of the World Economy implemented at the Institute for Economic Analysis is presented, together with our experience in converting the software to vector code. For each time period, the model is reduced to a linear system of over 2000 variables. The matrix of coefficients has a bordered block diagonal structure, and we show how some of the matrix operations can be carried out on all diagonal blocks at once.
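    The bordered block diagonal structure is what makes block-parallel elimination possible: each diagonal block can be factored independently, and the border couples them only through a small Schur complement. A minimal sketch with tiny dense blocks and border width 1 (function names are ours, not the paper's code):

    ```python
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def solve_dense(a, b):
        """Gaussian elimination with partial pivoting for a small dense system."""
        n = len(a)
        m = [row[:] + [b[i]] for i, row in enumerate(a)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
        return x

    def solve_bordered(blocks, cols, rows, d, fs, g):
        """Solve [diag(A_i)  B; C  d][x; y] = [f; g] with scalar border:
        factor each A_i independently, then eliminate the border through
        the Schur complement S = d - sum_i C_i A_i^{-1} B_i."""
        us = [solve_dense(a, f) for a, f in zip(blocks, fs)]   # A_i u_i = f_i
        ws = [solve_dense(a, b) for a, b in zip(blocks, cols)]  # A_i w_i = B_i
        s = d - sum(dot(c, w) for c, w in zip(rows, ws))
        y = (g - sum(dot(c, u) for c, u in zip(rows, us))) / s
        xs = [[ui - wi * y for ui, wi in zip(u, w)] for u, w in zip(us, ws)]
        return xs, y
    ```

    The per-block solves are mutually independent, which is exactly what allows the matrix operations to be carried out "on all diagonal blocks at once" on a vector machine.
    
    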

  11. Estimate of influenza cases using generalized linear, additive and mixed models.

    PubMed

    Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M

    2015-01-01

    We investigated the relationship between reported cases of influenza in Catalonia (Spain) and several covariates: population, age, date of report of influenza, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can capture data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models adapted better to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
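    The GLM component of such models can be illustrated with a bare-bones Poisson regression fitted by iteratively reweighted least squares. This is a generic one-covariate sketch, not the authors' SISAP analysis:

    ```python
    import math

    def fit_poisson_glm(x, y, iters=50):
        """Fit log(E[y]) = b0 + b1 * x by iteratively reweighted least
        squares (Newton's method for the canonical log link)."""
        b0, b1 = 0.0, 0.0
        for _ in range(iters):
            mu = [math.exp(b0 + b1 * xi) for xi in x]
            # Working response z = eta + (y - mu)/mu, with weights w = mu.
            z = [(b0 + b1 * xi) + (yi - mi) / mi
                 for xi, yi, mi in zip(x, y, mu)]
            w = mu
            # Solve the 2x2 weighted normal equations for (b0, b1).
            s0 = sum(w)
            s1 = sum(wi * xi for wi, xi in zip(w, x))
            s2 = sum(wi * xi * xi for wi, xi in zip(w, x))
            t0 = sum(wi * zi for wi, zi in zip(w, z))
            t1 = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
            det = s0 * s2 - s1 * s1
            b0, b1 = (s2 * t0 - s1 * t1) / det, (s0 * t1 - s1 * t0) / det
        return b0, b1
    ```

    On exactly log-linear counts such as x = [0, 1, 2, 3], y = [1, 2, 4, 8], the iteration recovers intercept 0 and slope ln 2.
    
    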

  12. Estimate of influenza cases using generalized linear, additive and mixed models

    PubMed Central

    Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M

    2014-01-01

    We investigated the relationship between reported cases of influenza in Catalonia (Spain) and several covariates: population, age, date of report of influenza, and health region during 2010–2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can capture data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0–27.5) in the winter months (December, January, February) and 3.38 (range 0–12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models adapted better to the temporal evolution of influenza (serial correlation 0.59) than classical linear models. PMID:25483550

  13. Hierarchical Shrinkage Priors and Model Fitting for High-dimensional Generalized Linear Models

    PubMed Central

    Yi, Nengjun; Ma, Shuangge

    2013-01-01

    Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:23192052

  14. Conditional Akaike information under generalized linear and proportional hazards mixed models

    PubMed Central

    Donohue, M. C.; Overholser, R.; Xu, R.; Vaida, F.

    2011-01-01

    We study model selection for clustered data, when the focus is on cluster specific inference. Such data are often modelled using random effects, and conditional Akaike information was proposed in Vaida & Blanchard (2005) and used to derive an information criterion under linear mixed models. Here we extend the approach to generalized linear and proportional hazards mixed models. Outside the normal linear mixed models, exact calculations are not available and we resort to asymptotic approximations. In the presence of nuisance parameters, a profile conditional Akaike information is proposed. Bootstrap methods are considered for their potential advantage in finite samples. Simulations show that the performance of the bootstrap and the analytic criteria are comparable, with bootstrap demonstrating some advantages for larger cluster sizes. The proposed criteria are applied to two cancer datasets to select models when the cluster-specific inference is of interest. PMID:22822261

  15. Empirical Bayes Estimation of Coefficients in the General Linear Model from Data of Deficient Rank.

    ERIC Educational Resources Information Center

    Braun, Henry I.; And Others

    1983-01-01

    Empirical Bayes methods are shown to provide a practical alternative to standard least squares methods in fitting high dimensional models to sparse data. An example concerning prediction bias in educational testing is presented as an illustration. (Author)

  16. Semiparametric Analysis of Heterogeneous Data Using Varying-Scale Generalized Linear Models.

    PubMed

    Xie, Minge; Simpson, Douglas G; Carroll, Raymond J

    2008-01-01

    This article describes a class of heteroscedastic generalized linear regression models in which a subset of the regression parameters are rescaled nonparametrically, and develops efficient semiparametric inferences for the parametric components of the models. Such models provide a means to adapt for heterogeneity in the data due to varying exposures, varying levels of aggregation, and so on. The class of models considered includes generalized partially linear models and nonparametrically scaled link function models as special cases. We present an algorithm to estimate the scale function nonparametrically, and obtain asymptotic distribution theory for regression parameter estimates. In particular, we establish that the asymptotic covariance of the semiparametric estimator for the parametric part of the model achieves the semiparametric lower bound. We also describe a bootstrap-based goodness-of-scale test. We illustrate the methodology with simulations, published data, and data from collaborative research on ultrasound safety.

  17. Comparison of real-time and linear-response time-dependent density functional theories for molecular chromophores ranging from sparse to high densities of states

    SciTech Connect

    Tussupbayev, Samat; Govind, Niranjan; Lopata, Kenneth A.; Cramer, Christopher J.

    2015-03-10

    We assess the performance of real-time time-dependent density functional theory (RT-TDDFT) for the calculation of absorption spectra of 12 organic dye molecules relevant to photovoltaics and dye sensitized solar cells with 8 exchange-correlation functionals (3 traditional, 3 global hybrids, and 2 range-separated hybrids). We compare the calculations with traditional linear-response (LR) TDDFT. In addition, we demonstrate the efficacy of the RT-TDDFT approach to calculate wide absorption spectra of two large chromophores relevant to photovoltaics and molecular switches.

  18. A review of linear response theory for general differentiable dynamical systems

    NASA Astrophysics Data System (ADS)

    Ruelle, David

    2009-04-01

    The classical theory of linear response applies to statistical mechanics close to equilibrium. Away from equilibrium, one may describe the microscopic time evolution by a general differentiable dynamical system, identify nonequilibrium steady states (NESS) and study how these vary under perturbations of the dynamics. Remarkably, it turns out that for uniformly hyperbolic dynamical systems (those satisfying the 'chaotic hypothesis'), the linear response away from equilibrium is very similar to the linear response close to equilibrium: the Kramers-Kronig dispersion relations hold, and the fluctuation-dissipation theorem survives in a modified form (which takes into account the oscillations around the 'attractor' corresponding to the NESS). If the chaotic hypothesis does not hold, two new phenomena may arise. The first is a violation of linear response in the sense that the NESS does not depend differentiably on parameters (but this nondifferentiability may be hard to see experimentally). The second phenomenon is a violation of the dispersion relations: the susceptibility has singularities in the upper half complex plane. These 'acausal' singularities are actually due to 'energy nonconservation': for a small periodic perturbation of the system, the amplitude of the linear response is arbitrarily large. This means that the NESS of the dynamical system under study is not 'inert' but can give energy to the outside world. An 'active' NESS of this sort is very different from an equilibrium state, and it would be interesting to see what happens for active states to the Gallavotti-Cohen fluctuation theorem.

  19. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
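    The "explosion" step can be sketched as follows (our illustration of the standard piecewise-exponential data expansion, not the %PCFrailty macro itself): each survival record is split into one row per baseline-hazard piece, and the log of the exposure time in each piece enters the Poisson model as the offset.

    ```python
    def explode(time, event, cuts):
        """Split one survival record (time, event indicator) into rows
        (piece index, exposure in piece, event-in-piece indicator), one
        row per baseline-hazard piece defined by the cutpoints."""
        rows, lo = [], 0.0
        bounds = list(cuts) + [float("inf")]
        for j, hi in enumerate(bounds):
            if time <= lo:
                break  # the subject left observation before this piece
            exposure = min(time, hi) - lo
            rows.append((j, exposure, int(bool(event) and time <= hi)))
            lo = hi
        return rows
    ```

    The per-piece exposures always sum back to the observed survival time, and the event indicator is 1 only in the piece where the event occurred.
    
    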

  20. Invariance of the generalized oscillator under a linear transformation of the related system of orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Borzov, V. V.; Damaskinsky, E. V.

    2017-02-01

    We consider the families of polynomials P = {P_n(x)}_{n=0}^∞ and Q = {Q_n(x)}_{n=0}^∞ orthogonal on the real line with respect to the respective probability measures μ and ν. We assume that {Q_n(x)}_{n=0}^∞ and {P_n(x)}_{n=0}^∞ are connected by linear relations. In the case k = 2, we describe all pairs (P, Q) for which the algebras A_P and A_Q of generalized oscillators generated by {Q_n(x)}_{n=0}^∞ and {P_n(x)}_{n=0}^∞ coincide. We construct generalized oscillators corresponding to pairs (P, Q) for arbitrary k ≥ 1.
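    Each such orthogonal family satisfies a three-term recurrence, whose Jacobi matrix is what generates the associated oscillator algebra. As a generic, hedged illustration we use the Chebyshev family T_n rather than the paper's pairs (P, Q):

    ```python
    def cheb_T(n, x):
        """Evaluate the Chebyshev polynomials T_0..T_n at x via the
        three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
        vals = [1.0, x]
        for _ in range(2, n + 1):
            vals.append(2 * x * vals[-1] - vals[-2])
        return vals[: n + 1]
    ```

    The family is orthogonal with respect to its measure; for Chebyshev polynomials this can be checked discretely at the Chebyshev nodes.
    
    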

  1. Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.

    PubMed

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.

  2. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    PubMed Central

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic. PMID:22461786

  3. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which constraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
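    The eigenvector analysis described here can be sketched with a power iteration on a small, hypothetical normal-equations matrix: the dominant eigenvector is the best-constrained linear combination of model parameters.

    ```python
    import math

    def power_iteration(a, iters=100):
        """Dominant eigenvalue and eigenvector of a small symmetric matrix
        with a positive dominant eigenvalue, by repeated multiplication
        and normalization."""
        n = len(a)
        v = [1.0] * n
        lam = 0.0
        for _ in range(iters):
            w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = math.sqrt(sum(wi * wi for wi in w))
            v = [wi / lam for wi in w]
        return lam, v
    ```

    For the matrix [[2, 1], [1, 2]] the iteration converges to eigenvalue 3 with eigenvector (1, 1)/sqrt(2): the "sum" of the two parameters is well determined while their difference (eigenvalue 1) is less so.
    
    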

  4. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
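    The dispersion behaviour can be checked numerically. A hedged sketch (our truncated-series implementation, not the authors' model code): the pmf is P(k) ∝ λ^k / (γ)_k with (γ)_k the rising factorial, so γ = 1 recovers the Poisson distribution, γ > 1 gives overdispersion, and γ < 1 underdispersion.

    ```python
    def hyper_poisson_moments(lam, gamma, kmax=400):
        """Mean and variance of the hyper-Poisson distribution with pmf
        P(k) proportional to lam**k / (gamma)_k, where (gamma)_k is the
        rising factorial gamma*(gamma+1)*...*(gamma+k-1).  For gamma = 1,
        (1)_k = k! and the ordinary Poisson distribution is recovered."""
        weights = []
        w = 1.0
        for k in range(kmax):
            weights.append(w)
            w *= lam / (gamma + k)  # lam^{k+1}/(g)_{k+1} from lam^k/(g)_k
        total = sum(weights)
        probs = [wi / total for wi in weights]
        mean = sum(k * p for k, p in enumerate(probs))
        var = sum((k - mean) ** 2 * p for k, p in enumerate(probs))
        return mean, var
    ```
    
    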

  5. Linear and non-linear heart rate metrics for the assessment of anaesthetists' workload during general anaesthesia.

    PubMed

    Martin, J; Schneider, F; Kowalewskij, A; Jordan, D; Hapfelmeier, A; Kochs, E F; Wagner, K J; Schulz, C M

    2016-12-01

    Excessive workload may impact the anaesthetists' ability to adequately process information during clinical practice in the operating room and may result in inaccurate situational awareness and performance. This exploratory study investigated heart rate (HR), linear and non-linear heart rate variability (HRV) metrics and subjective ratings scales for the assessment of workload associated with the anaesthesia stages induction, maintenance and emergence. HR and HRV metrics were calculated based on five min segments from each of the three anaesthesia stages. The area under the receiver operating characteristics curve (AUC) of the investigated metrics was calculated to assess their ability to discriminate between the stages of anaesthesia. Additionally, a multiparametric approach based on logistic regression models was performed to further evaluate whether linear or non-linear heart rate metrics are suitable for the assessment of workload. Mean HR and several linear and non-linear HRV metrics including subjective workload ratings differed significantly between stages of anaesthesia. Permutation Entropy (PeEn, AUC=0.828) and mean HR (AUC=0.826) discriminated best between the anaesthesia stages induction and maintenance. In the multiparametric approach using logistic regression models, the model based on non-linear heart rate metrics provided a higher AUC compared with the models based on linear metrics. In this exploratory study based on short ECG segment analysis, PeEn and HR seem to be promising to separate workload levels between different stages of anaesthesia. The multiparametric analysis of the regression models favours non-linear heart rate metrics over linear metrics. © The Author 2016. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
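    Permutation entropy, the best-discriminating metric above, has a compact definition: the Shannon entropy of the distribution of ordinal patterns in the signal. A minimal sketch (order m = 3, natural logarithm; the study's RR-interval preprocessing is omitted):

    ```python
    import math

    def permutation_entropy(x, m=3):
        """Bandt-Pompe permutation entropy (in nats): Shannon entropy of
        the distribution of length-m ordinal patterns in the series x.
        Ties are broken by position, as is conventional."""
        counts = {}
        for i in range(len(x) - m + 1):
            window = x[i:i + m]
            pattern = tuple(sorted(range(m), key=lambda j: (window[j], j)))
            counts[pattern] = counts.get(pattern, 0) + 1
        total = sum(counts.values())
        return -sum(c / total * math.log(c / total) for c in counts.values())
    ```

    A monotone series contains a single pattern and has entropy 0; a strictly alternating series contains two equally frequent patterns and has entropy ln 2.
    
    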

  6. Unified Einstein-Virasoro Master Equation in the General Non-Linear Sigma Model

    SciTech Connect

    Boer, J. de; Halpern, M.B.

    1996-06-05

    The Virasoro master equation (VME) describes the general affine-Virasoro construction T = L^{ab} J_a J_b + i D^a ∂J_a in the operator algebra of the WZW model, where L^{ab} is the inverse inertia tensor and D^a is the improvement vector. In this paper, we generalize this construction to find the general (one-loop) Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field L^{ab} to the background fields of the sigma model. For a particular solution L_G^{ab}, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model with its canonical stress tensors. We also discuss a number of algebraic and geometrical properties of the system, including its relation to an unsolved problem in the theory of G-structures on manifolds with torsion.

  7. Fitting host-parasitoid models with CV² > 1 using hierarchical generalized linear models.

    PubMed Central

    Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K

    2000-01-01

    The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907

  8. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
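    The extended Higher Criticism test proposed here builds on the classical Higher Criticism statistic of Donoho and Jin. As a hedged illustration we sketch the standard statistic, not the paper's extension:

    ```python
    import math

    def higher_criticism(pvalues):
        """Classical Higher Criticism statistic:
        max over sorted p-values p_(i) of
        sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
        n = len(pvalues)
        best = float("-inf")
        for i, p in enumerate(sorted(pvalues), start=1):
            if 0.0 < p < 1.0:  # guard against degenerate p-values
                z = math.sqrt(n) * (i / n - p) / math.sqrt(p * (1 - p))
                best = max(best, z)
        return best
    ```

    Large values indicate that the empirical distribution of p-values pulls away from uniform near zero, the signature of many weak sparse effects.
    
    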

  9. Superpixel sparse representation for target detection in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Dong, Chunhua; Naghedolfeizi, Masoud; Aberra, Dawit; Qiu, Hao; Zeng, Xiangyan

    2017-05-01

Sparse Representation (SR) is an effective classification method. Given a set of data vectors, SR aims at finding the sparsest representation of each data vector among the linear combinations of the bases in a given dictionary. In order to further improve the classification performance, the joint SR that incorporates interpixel correlation information of neighborhoods has been proposed for image pixel classification. However, SR and joint SR demand a significant amount of computational time and memory, especially when classifying a large number of pixels. To address this issue, we propose a superpixel sparse representation (SSR) algorithm for target detection in hyperspectral imagery. We first cluster hyperspectral pixels into nearly uniform hyperspectral superpixels using our proposed patch-based SLIC approach based on their spectral and spatial information. The sparse representations of these superpixels are then obtained by simultaneously decomposing superpixels over a given dictionary consisting of both target and background pixels. The class of a hyperspectral pixel is determined by a competition between its projections on target and background subdictionaries. One key advantage of the proposed superpixel representation algorithm with respect to pixelwise and joint sparse representation algorithms is that it reduces computational cost while still maintaining competitive classification performance. We demonstrate the effectiveness of the proposed SSR algorithm through experiments on target detection in indoor and outdoor scene data under daylight illumination, as well as in remote sensing data. Experimental results show that SSR generally outperforms state-of-the-art algorithms both quantitatively and qualitatively.
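The residual-competition idea can be sketched compactly. The toy below uses greedy orthogonal matching pursuit as a stand-in for the paper's simultaneous sparse solver, with small random dictionaries in place of real spectra (everything here is hypothetical):

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding (orthogonal matching pursuit)."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        cols = D[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        resid = y - cols @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
bands = 50
D_target = rng.normal(size=(bands, 8))   # hypothetical target spectra
D_back = rng.normal(size=(bands, 8))     # hypothetical background spectra
D = np.column_stack([D_target, D_back])
D /= np.linalg.norm(D, axis=0)

pixel = D[:, 2] + 0.05 * rng.normal(size=bands)   # noisy target-like pixel
x = omp(D, pixel, k=3)

# Competition between reconstruction residuals on the two subdictionaries.
r_target = np.linalg.norm(pixel - D[:, :8] @ x[:8])
r_back = np.linalg.norm(pixel - D[:, 8:] @ x[8:])
label = "target" if r_target < r_back else "background"
print(label)
```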

  10. Block sparse Cholesky algorithms on advanced uniprocessor computers

    SciTech Connect

    Ng, E.G.; Peyton, B.W.

    1991-12-01

    As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
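The left-looking blocking idea itself is easy to show in dense form. The sketch below is only the dense skeleton (a real sparse code would skip zero blocks using the elimination tree and supernode structure, which is the whole point of the paper):

```python
import numpy as np

def blocked_cholesky(A, bs=2):
    """Left-looking blocked Cholesky: each block column is first updated by
    all block columns to its left, then factored."""
    L = np.tril(A.copy())
    n = A.shape[0]
    for j in range(0, n, bs):
        e = min(j + bs, n)
        # left-looking update from previously factored block columns
        L[j:, j:e] -= L[j:, :j] @ L[j:e, :j].T
        # factor the diagonal block, then solve for the sub-diagonal panel
        L[j:e, j:e] = np.linalg.cholesky(L[j:e, j:e])
        L[e:, j:e] = L[e:, j:e] @ np.linalg.inv(L[j:e, j:e]).T
    return L

A = np.array([[4., 1, 0, 0], [1, 5, 2, 0], [0, 2, 6, 1], [0, 0, 1, 3]])
L = blocked_cholesky(A)
print(np.allclose(L @ L.T, A))   # True
```

The per-block update is a dense matrix-matrix multiply, which is exactly what makes blocking attractive on cached uniprocessors.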

  11. Bayesian Variable Selection and Computation for Generalized Linear Models with Conjugate Priors.

    PubMed

    Chen, Ming-Hui; Huang, Lan; Ibrahim, Joseph G; Kim, Sungduk

    2008-07-01

In this paper, we consider theoretical and computational connections between six popular methods for variable subset selection in generalized linear models (GLMs). Under the conjugate priors developed by Chen and Ibrahim (2003) for the generalized linear model, we obtain closed form analytic relationships between the Bayes factor (posterior model probability), the Conditional Predictive Ordinate (CPO), the L measure, the Deviance Information Criterion (DIC), the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC) in the case of the linear model. Moreover, we examine computational relationships in the model space for these Bayesian methods for an arbitrary GLM under conjugate priors as well as examine the performance of the conjugate priors of Chen and Ibrahim (2003) in Bayesian variable selection. Specifically, we show that once Markov chain Monte Carlo (MCMC) samples are obtained from the full model, the four Bayesian criteria can be simultaneously computed for all possible subset models in the model space. We illustrate our new methodology with a simulation study and a real dataset.
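As a hedged aside, not the paper's MCMC-based computation: in the Gaussian linear-model special case the abstract mentions, AIC and BIC have closed forms, and the all-subsets comparison the paper streamlines can be written out directly for a small synthetic example:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - X[:, 2] + rng.normal(size=n)   # only x0 and x2 matter

def aic_bic(cols):
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    k = Xs.shape[1] + 1                      # coefficients + error variance
    ll = -n / 2 * (np.log(2 * np.pi * rss / n) + 1)
    return -2 * ll + 2 * k, -2 * ll + np.log(n) * k

models = [c for r in range(p + 1) for c in combinations(range(p), r)]
best_aic = min(models, key=lambda c: aic_bic(c)[0])
best_bic = min(models, key=lambda c: aic_bic(c)[1])
print(best_aic, best_bic)   # both should contain features 0 and 2
```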

  12. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
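The paper's central point is easy to reproduce numerically. In this editorial sketch (synthetic data, not the authors' examples), a two-group design gives a strongly bimodal, non-normal response, yet the residuals from the group-means model are normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Two groups with well-separated means: the pooled response is bimodal,
# but the residuals from the group-means (one-way ANOVA) model are normal.
g1 = rng.normal(0, 1, 500)
g2 = rng.normal(8, 1, 500)
y = np.concatenate([g1, g2])
resid = np.concatenate([g1 - g1.mean(), g2 - g2.mean()])

p_raw = stats.shapiro(y).pvalue
p_resid = stats.shapiro(resid).pvalue
print(p_raw, p_resid)   # raw data fail the normality test; residuals typically do not
```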

  13. Stochastic convex sparse principal component analysis.

    PubMed

    Baytas, Inci M; Lin, Kaixiang; Wang, Fei; Jain, Anil K; Zhou, Jiayu

    2016-12-01

Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, there is an obvious disadvantage of traditional PCA when it is applied to analyze data where interpretability is important. In applications where the features have some physical meaning, we lose the ability to interpret the principal components extracted by conventional PCA because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of principal components. The sparse PCA can be formulated as an ℓ1 regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. The stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to the high variance of the stochastic gradients. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition which allows a broader class of objectives to be applied. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
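The soft-thresholding proximal operator at the heart of ℓ1 schemes can be illustrated with a much simpler relative of Cvx-SPCA: a thresholded power iteration for one sparse loading vector (this is a generic sketch on synthetic data, not the paper's variance-reduced algorithm):

```python
import numpy as np

def soft(z, lam):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_pc1(X, rel_lam=0.3, iters=100):
    """One sparse loading via soft-thresholded power iteration; the
    threshold is set relative to the largest entry at each step."""
    v = np.linalg.svd(X, full_matrices=False)[2][0]
    for _ in range(iters):
        w = X.T @ (X @ v)
        v = soft(w, rel_lam * np.max(np.abs(w)))
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(5)
# 200 samples, 20 features; only the first 4 features carry the component.
z = rng.normal(size=(200, 1))
X = np.zeros((200, 20))
X[:, :4] = z @ np.ones((1, 4)) + 0.1 * rng.normal(size=(200, 4))
X[:, 4:] = rng.normal(size=(200, 16))

v = sparse_pc1(X)
idx = np.nonzero(np.abs(v) > 1e-8)[0]
print(idx)   # loadings should concentrate on features 0-3
```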

  14. Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.

    PubMed

    Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L

    2012-03-01

    Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.
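Generalized degrees of freedom can be understood as the average sensitivity of fitted values to perturbations of the response. A minimal sketch for plain OLS (where the answer is known to be the trace of the hat matrix), in the spirit of Ye-style perturbation estimates rather than the paper's mixed-model construction:

```python
import numpy as np

rng = np.random.default_rng(11)
n, p = 60, 4
X = rng.normal(size=(n, p))
H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix of OLS
y = rng.normal(size=n)

# Monte Carlo perturbation estimate of degrees of freedom: average
# sensitivity of fitted values to small noise added to the response.
tau, B = 0.1, 200
gdf = 0.0
for _ in range(B):
    d = rng.normal(scale=tau, size=n)
    gdf += d @ (H @ (y + d) - H @ y) / tau ** 2
gdf /= B
print(gdf)   # close to trace(H) = p = 4
```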

  15. The general linear model and fMRI: does love last forever?

    PubMed

    Poline, Jean-Baptiste; Brett, Matthew

    2012-08-15

In this review, we first set out the general linear model (GLM) for the non-technical reader, as a tool able to do both linear regression and ANOVA within the same flexible framework. We present a short history of its development in the fMRI community, and describe some interesting examples of its early use. We offer a few warnings, as the GLM relies on assumptions that may not hold in all situations. We conclude with a few wishes for the future of fMRI analyses, with or without the GLM. The appendix develops, for the more technical reader, some aspects of the use of contrasts for testing. Copyright © 2012 Elsevier Inc. All rights reserved.
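The GLM-plus-contrast machinery the review discusses reduces, for a single voxel, to ordinary least squares and a t-statistic on a contrast of the betas. A minimal synthetic sketch (hypothetical block design, not from the review):

```python
import numpy as np

rng = np.random.default_rng(10)
T = 120
box = (np.arange(T) % 20 < 10).astype(float)   # hypothetical on/off task regressor
X = np.column_stack([np.ones(T), box])         # GLM design: intercept + task
y = 1.0 * box + rng.normal(size=T)             # one voxel's noisy time series

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (T - 2)

c = np.array([0.0, 1.0])                       # contrast: task effect
se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
t = (c @ beta) / se
print(t)   # large positive t for an active voxel
```

In practice the task regressor would be convolved with a hemodynamic response function; that step is omitted here.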

  16. Use of generalized linear mixed models for network meta-analysis.

    PubMed

    Tu, Yu-Kang

    2014-10-01

In the past decade, a new statistical method, network meta-analysis, has been developed to address limitations in traditional pairwise meta-analysis. Network meta-analysis incorporates all available evidence into a general statistical framework for comparisons of multiple treatments. Bayesian network meta-analysis, as proposed by Lu and Ades, also known as "mixed treatments comparisons," provides a flexible modeling framework to take into account complexity in the data structure. This article shows how to implement the Lu and Ades model in the frequentist generalized linear mixed model. Two examples are provided to demonstrate how centering the covariates for random effects estimation within each trial can yield correct estimation of random effects. Moreover, under the correct specification for random effects estimation, the dummy coding and contrast basic parameter coding schemes will yield the same results. It is straightforward to incorporate covariates, such as moderators and confounders, into the generalized linear mixed model to conduct meta-regression for multiple treatment comparisons. Furthermore, this approach may be extended easily to other types of outcome variables, such as continuous, count, and multinomial outcomes. © The Author(s) 2014.
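The "basic parameter" coding the article discusses can be shown in a much simpler fixed-effect toy (hypothetical trial estimates, weighted least squares in place of the full GLMM): every comparison in the network is expressed in terms of basic parameters d_AB and d_AC, with the BC comparison constrained by consistency to d_AC - d_AB.

```python
import numpy as np

# Each row: an observed trial log-odds ratio y with variance v, and its
# design row in the basic parameters (d_AB, d_AC).  The last row is a BC
# trial, coded [-1, 1] because d_BC = d_AC - d_AB under consistency.
y = np.array([0.50, 0.45, 1.00, 0.55])   # two AB trials, one AC, one BC
v = np.array([0.04, 0.05, 0.06, 0.05])
X = np.array([[1, 0], [1, 0], [0, 1], [-1, 1]], dtype=float)

W = np.diag(1 / v)
d = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(d, d[1] - d[0])   # pooled d_AB, d_AC, and the implied indirect d_BC
```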

  17. Assessing correlation of clustered mixed outcomes from a multivariate generalized linear mixed model.

    PubMed

    Chen, Hsiang-Chun; Wehrly, Thomas E

    2015-02-20

The classic concordance correlation coefficient measures the agreement between two variables. In recent studies, concordance correlation coefficients have been generalized to deal with responses from a distribution from the exponential family using the univariate generalized linear mixed model. Multivariate data arise when responses on the same unit are measured repeatedly by several methods. The relationship among these responses is often of interest. In clustered mixed data, the correlation could be present between repeated measurements either within the same observer or between different methods on the same subjects. Indices for measuring such association are needed. This study proposes a series of indices, namely, intra-correlation, inter-correlation, and total correlation coefficients to measure the correlation under various circumstances in a multivariate generalized linear model, especially for joint modeling of clustered count and continuous outcomes. The proposed indices are natural extensions of the concordance correlation coefficient. We demonstrate the methodology with simulation studies. A case example from an osteoarthritis study is provided to illustrate the use of these proposed indices. Copyright © 2014 John Wiley & Sons, Ltd.
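For orientation, the classic (Lin) concordance correlation coefficient that these indices extend is a one-liner; note how it penalizes systematic bias that ordinary correlation ignores (synthetic data, purely illustrative):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(6)
truth = rng.normal(size=300)
method_a = truth + 0.1 * rng.normal(size=300)              # agrees well
method_b = 1.5 * truth + 1.0 + 0.1 * rng.normal(size=300)  # correlated but biased

c_good = ccc(truth, method_a)
c_biased = ccc(truth, method_b)
print(c_good, c_biased)   # high agreement vs. high correlation with poor agreement
```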

  18. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
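A bare-bones flavor of the idea, bagging plus random feature subspaces around a GLM base learner, can be sketched as follows. This omits the paper's forward selection and interaction terms and is not the randomGLM implementation; all data are synthetic:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Tiny Newton/IRLS logistic regression (no intercept, for brevity)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p) + 1e-6
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

def rglm_predict(X, y, Xnew, n_bags=30, n_feat=3, seed=7):
    """Bagged GLMs on bootstrap samples and random feature subspaces;
    predictions are averaged probabilities."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(Xnew))
    for _ in range(n_bags):
        rows = rng.integers(0, len(X), len(X))               # bootstrap sample
        cols = rng.choice(X.shape[1], n_feat, replace=False) # random subspace
        b = fit_logistic(X[np.ix_(rows, cols)], y[rows])
        votes += 1 / (1 + np.exp(-Xnew[:, cols] @ b))
    return (votes / n_bags > 0.5).astype(int)

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=300) > 0).astype(int)
pred = rglm_predict(X[:200], y[:200], X[200:])
acc = (pred == y[200:]).mean()
print(acc)   # should beat chance comfortably
```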

  19. Robust root clustering for linear uncertain systems using generalized Lyapunov theory

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1993-01-01

    Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Liapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.
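The Lyapunov-based membership test behind this theory is easy to show in its simplest instance, the shifted half-plane Re(s) < -alpha (a toy nominal check only; the paper's contribution is the perturbation bounds, which are not reproduced here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def roots_left_of(A, alpha):
    """Lyapunov test for the region Re(s) < -alpha: shift A by alpha*I and
    check that As^T P + P As = -I has a positive definite solution P."""
    As = A + alpha * np.eye(A.shape[0])
    P = solve_continuous_lyapunov(As.T, -np.eye(A.shape[0]))
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))

A = np.array([[-3.0, 1.0], [0.0, -2.0]])      # eigenvalues -3 and -2
print(roots_left_of(A, 1.0), roots_left_of(A, 2.2))   # True, False
```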

  1. Capelli bitableaux and Z-forms of general linear Lie superalgebras.

    PubMed Central

    Brini, A; Teolis, A G

    1990-01-01

The combinatorics of the enveloping algebra UQ(pl(L)) of the general linear Lie superalgebra of a finite-dimensional Z2-graded Q-vector space is studied. Three non-equivalent Z-forms of UQ(pl(L)) are introduced: one of these Z-forms is a version of the Kostant Z-form and the others are Lie algebra analogs of Rota and Stein's straightening formulae for the supersymmetric algebra Super[L P] and for its dual Super[L* P*]. The method is based on an extension of Capelli's technique of variabili ausiliarie (auxiliary variables) to algebras containing positively and negatively signed elements. PMID:11607048

  2. Linear and nonlinear quantification of respiratory sinus arrhythmia during propofol general anesthesia.

    PubMed

    Chen, Zhe; Purdon, Patrick L; Pierce, Eric T; Harrell, Grace; Walsh, John; Salazar, Andres F; Tavares, Casie L; Brown, Emery N; Barbieri, Riccardo

    2009-01-01

    Quantitative evaluation of respiratory sinus arrhythmia (RSA) may provide important information in clinical practice of anesthesia and postoperative care. In this paper, we apply a point process method to assess dynamic RSA during propofol general anesthesia. Specifically, an inverse Gaussian probability distribution is used to model the heartbeat interval, whereas the instantaneous mean is identified by a linear or bilinear bivariate regression on the previous R-R intervals and respiratory measures. The estimated second-order bilinear interaction allows us to evaluate the nonlinear component of the RSA. The instantaneous RSA gain and phase can be estimated with an adaptive point process filter. The algorithm's ability to track non-stationary dynamics is demonstrated using one clinical recording. Our proposed statistical indices provide a valuable quantitative assessment of instantaneous cardiorespiratory control and heart rate variability (HRV) during general anesthesia.
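As a loose, discrete-time stand-in for the linear regression component of the model (not the point-process filter or the inverse Gaussian likelihood), the RSA gain can be read off a least-squares fit of each R-R interval on the previous interval and the respiration signal, all with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(12)
T = 1000
resp = np.sin(2 * np.pi * 0.25 * np.arange(T))   # toy respiration signal
rr = np.empty(T)
rr[0] = 0.8
for t in range(1, T):
    # Hypothetical generating model: AR(1) heartbeat dynamics plus a
    # respiratory coupling term (the RSA gain, here 0.05).
    rr[t] = 0.1 + 0.85 * rr[t - 1] + 0.05 * resp[t] + 0.005 * rng.normal()

# Linear regression RR_t ~ a0 + a1*RR_{t-1} + b1*resp_t; b1 estimates the gain.
X = np.column_stack([np.ones(T - 1), rr[:-1], resp[1:]])
coef, *_ = np.linalg.lstsq(X, rr[1:], rcond=None)
print(coef)   # near [0.1, 0.85, 0.05]
```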

  3. Generalized linear sampling method for elastic-wave sensing of heterogeneous fractures

    NASA Astrophysics Data System (ADS)

    Pourahmadian, Fatemeh; Guzina, Bojan B.; Haddar, Houssem

    2017-05-01

A theoretical foundation is developed for the active seismic reconstruction of fractures endowed with spatially varying interfacial conditions (e.g. partially closed fractures, hydraulic fractures). The proposed indicator functional carries a superior localization property with no significant sensitivity to the fracture’s contact condition, measurement errors, or illumination frequency. This is accomplished through the paradigm of the F♯-factorization technique and the recently developed generalized linear sampling method (GLSM) applied to elastodynamics. The direct scattering problem is formulated in the frequency domain where the fracture surface is illuminated by a set of incident plane waves, while monitoring the induced scattered field in the form of (elastic) far-field patterns. The analysis of the well-posedness of the forward problem leads to an admissibility condition on the fracture’s (linearized) contact parameters. This in turn contributes to the establishment of the applicability of the F♯-factorization method, and consequently aids the formulation of a convex GLSM cost functional whose minimizer can be computed without iterations. Such a minimizer is then used to construct a robust fracture indicator function, whose performance is illustrated through a set of numerical experiments. For completeness, the results of the GLSM reconstruction are compared to those obtained by the classical linear sampling method (LSM).

  4. Neutron source strength measurements for Varian, Siemens, Elekta, and General Electric linear accelerators.

    PubMed

    Followill, David S; Stovall, Marilyn S; Kry, Stephen F; Ibbott, Geoffrey S

    2003-01-01

The shielding calculations for high energy (>10 MV) linear accelerators must include the photoneutron production within the head of the accelerator. Procedures have been described to calculate the treatment room door shielding based on the neutron source strength (Q value) for a specific accelerator and energy combination. Unfortunately, there is currently little data in the literature stating the neutron source strengths for the most widely used linear accelerators. In this study, the neutron fluence for 36 linear accelerators, including models from Varian, Siemens, Elekta/Philips, and General Electric, was measured using gold-foil activation. Several of the models and energy combinations had multiple measurements. The neutron fluence measured in the patient plane was independent of the surface area of the room, suggesting that neutron fluence is more dependent on the direct neutron fluence from the head of the accelerator than from room scatter. Neutron source strength, Q, was determined from the measured neutron fluences. As expected, Q increased with increasing photon energy. The Q values ranged from 0.02 x 10^12 neutrons per photon Gy for a 10 MV beam to 1.44 x 10^12 neutrons per photon Gy for a 25 MV beam. The most comprehensive set of neutron source strength values, Q, for the current accelerators in clinical use is presented for use in calculating room shielding.
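To give a feel for how a Q value enters a shielding estimate, here is only the direct (unscattered) component, treating the head as an isotropic point source; room-scatter and thermal-neutron terms used in NCRP-style door calculations are deliberately omitted, and the distance is a generic assumption:

```python
import math

def direct_neutron_fluence(Q, d_cm):
    """Direct neutron fluence per photon Gy at distance d from the target,
    isotropic point-source approximation: Q / (4 pi d^2).  Scatter terms
    from a full door calculation are not included in this sketch."""
    return Q / (4 * math.pi * d_cm ** 2)

# Q spans roughly 0.02e12 (10 MV) to 1.44e12 (25 MV) neutrons per photon Gy.
f10 = direct_neutron_fluence(0.02e12, 100)   # 100 cm: nominal isocenter distance
f25 = direct_neutron_fluence(1.44e12, 100)
print(f10, f25)   # the 25 MV beam gives 72x the direct fluence
```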

  5. Wave packet dynamics in one-dimensional linear and nonlinear generalized Fibonacci lattices.

    PubMed

    Zhang, Zhenjun; Tong, Peiqing; Gong, Jiangbin; Li, Baowen

    2011-05-01

    The spreading of an initially localized wave packet in one-dimensional linear and nonlinear generalized Fibonacci (GF) lattices is studied numerically. The GF lattices can be classified into two classes depending on whether or not the lattice possesses the Pisot-Vijayaraghavan property. For linear GF lattices of the first class, both the second moment and the participation number grow with time. For linear GF lattices of the second class, in the regime of a weak on-site potential, wave packet spreading is close to ballistic diffusion, whereas in the regime of a strong on-site potential, it displays stairlike growth in both the second moment and the participation number. Nonlinear GF lattices are then investigated in parallel. For the first class of nonlinear GF lattices, the second moment of the wave packet still grows with time, but the corresponding participation number does not grow simultaneously. For the second class of nonlinear GF lattices, an analogous phenomenon is observed for the weak on-site potential only. For a strong on-site potential that leads to an enhanced nonlinear self-trapping effect, neither the second moment nor the participation number grows with time. The results can be useful in guiding experiments on the expansion of noninteracting or interacting cold atoms in quasiperiodic optical lattices.
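The two spreading diagnostics used throughout this study, the second moment and the participation number, have simple definitions worth writing out (generic formulas on toy wave packets, not the paper's Fibonacci-lattice dynamics):

```python
import numpy as np

def second_moment(psi, x):
    """Variance of position weighted by |psi|^2, the spreading measure."""
    w = np.abs(psi) ** 2
    w /= w.sum()
    xc = np.sum(w * x)
    return np.sum(w * (x - xc) ** 2)

def participation_number(psi):
    """P = 1 / sum |psi_n|^4 for a normalized state: roughly how many
    lattice sites the packet occupies."""
    w = np.abs(psi) ** 2
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

n = np.arange(-50, 51)
delta = (n == 0).astype(float)            # fully localized packet
flat = np.ones_like(n, dtype=float)       # fully spread packet
print(participation_number(delta), participation_number(flat))   # 1.0 101.0
```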

  6. Sparse matrix techniques applied to modal analysis of multi-section duct liners

    NASA Technical Reports Server (NTRS)

    Arnold, W. R.

    1975-01-01

    A simplified procedure is presented for analysis of ducts with discretely nonuniform properties. The analysis uses basis functions as the generalized coordinates. The duct eigenfunctions are approximated by finite series of these functions. The emphasis is on solution of the resulting large sparse set of linear equations. Characteristics of sparse matrix algorithms are outlined and some criteria for application are established. Analogies with structural methods are used to illustrate variations which can increase efficiency in generating values for design optimization routines. The effects of basis function selection, number of eigenfunctions and identification and ordering of equations on the sparsity and solution stability are included.
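The payoff of exploiting sparsity in such systems can be shown with a generic banded stand-in (a 1-D tridiagonal system, not the paper's duct matrices): the sparse solver touches only the nonzero diagonals.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Tridiagonal system of the kind a sectioned one-dimensional model produces.
n = 2000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
x = spsolve(A, b)   # sparse direct solve; a dense solve would store n^2 entries
print(np.max(np.abs(A @ x - b)))   # residual near machine precision
```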

  7. Vectorized Sparse Elimination.

    DTIC Science & Technology

    1984-03-01

Grids," Proc. 6th Symposium on Reservoir Simulation, New Orleans, Feb. 1-2, 1982, pp. 489-506. [5] Arya, S., and D. A. Calahan, "Optimal Scheduling of ... of Computer Architecture on Direct Sparse Matrix Routines in Petroleum Reservoir Simulation," Sparse Matrix Symposium, Fairfield Glade, TN, October

  8. Thermodynamic bounds and general properties of optimal efficiency and power in linear responses.

    PubMed

    Jiang, Jian-Hua

    2014-10-01

We study the optimal exergy efficiency and power for thermodynamic systems with an Onsager-type "current-force" relationship describing the linear response to external influences. We derive, in analytic forms, the maximum efficiency and optimal efficiency for maximum power for a thermodynamic machine described by an N×N symmetric Onsager matrix with arbitrary integer N. The figure of merit is expressed in terms of the largest eigenvalue of the "coupling matrix" which is solely determined by the Onsager matrix. Some simple but general relationships between the power and efficiency at the conditions for (i) maximum efficiency and (ii) optimal efficiency for maximum power are obtained. We show how the second law of thermodynamics bounds the optimal efficiency and the Onsager matrix and relate those bounds together. The maximum power theorem (Jacobi's Law) is generalized to all thermodynamic machines with a symmetric Onsager matrix in the linear-response regime. We also discuss systems with an asymmetric Onsager matrix (such as systems under magnetic field) for a particular situation and we show that the reversible limit of efficiency can be reached at finite output power. Cooperative effects are found to improve the figure of merit significantly in systems with multiply cross-correlated responses. Application to example systems demonstrates that the theory is helpful in guiding the search for high performance materials and structures in energy research.
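The textbook two-variable (2x2 Onsager matrix) special case of these results is compact enough to compute directly; the formulas below are the standard linear-response expressions in terms of the degree of coupling q, given as a hedged illustration rather than the paper's general N×N eigenvalue formula:

```python
import numpy as np

def coupling_q(L):
    """Degree of coupling for a 2x2 symmetric Onsager matrix."""
    return L[0, 1] / np.sqrt(L[0, 0] * L[1, 1])

def max_efficiency(L):
    """eta_max / eta_C = q^2 / (1 + sqrt(1 - q^2))^2 (two-variable case)."""
    q = coupling_q(L)
    return q ** 2 / (1 + np.sqrt(1 - q ** 2)) ** 2

def efficiency_at_max_power(L):
    """eta at maximum power, in units of eta_C: q^2 / (2 * (2 - q^2))."""
    q = coupling_q(L)
    return 0.5 * q ** 2 / (2 - q ** 2)

L = np.array([[1.0, 0.8], [0.8, 1.0]])
L_tight = np.array([[1.0, 1.0], [1.0, 1.0]])   # tight coupling, q = 1
print(max_efficiency(L), efficiency_at_max_power(L))
print(max_efficiency(L_tight), efficiency_at_max_power(L_tight))  # 1.0, 0.5
```

Tight coupling recovers the reversible limit eta_C at maximum efficiency and eta_C/2 at maximum power, consistent with the generalized maximum power theorem mentioned in the abstract.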

  10. Sparse Representation of Smooth Linear Operators

    DTIC Science & Technology

    1990-08-01

received study by many authors, resulting in constructions with a variety of properties. Meyer [13] constructed orthonormal wavelets for which h ∈ C¹(R) ... Lemmas 2.3 and 2.4; in fact, substitution of the finite sums which determine the elements of UTUT for the integrals in those lemmas yields the ... some k the orthogonal matrices U1, ..., Ul defined in Section 4.1 have been computed (l = log2(n/k)). We now present a procedure for computation of UTUT

  11. Direct Solutions of Large, Sparse Linear Systems

    DTIC Science & Technology

    1977-12-01

... The performance comparison involves a wide range of ...

  12. The heritability of general cognitive ability increases linearly from childhood to young adulthood.

    PubMed

    Haworth, C M A; Wright, M J; Luciano, M; Martin, N G; de Geus, E J C; van Beijsterveldt, C E M; Bartels, M; Posthuma, D; Boomsma, D I; Davis, O S P; Kovas, Y; Corley, R P; Defries, J C; Hewitt, J K; Olson, R K; Rhea, S-A; Wadsworth, S J; Iacono, W G; McGue, M; Thompson, L A; Hart, S A; Petrill, S A; Lubinski, D; Plomin, R

    2010-11-01

    Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined. In addition to its far-reaching implications for neuroscience and molecular genetics, this finding suggests new ways of thinking about the interface between nature and nurture during the school years. Why, despite life's 'slings and arrows of outrageous fortune', do genetically driven differences increasingly account for differences in general cognitive ability? We suggest that the answer lies with genotype-environment correlation: as children grow up, they increasingly select, modify and even create their own experiences in part based on their genetic propensities.
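The heritability estimates behind such twin studies come from comparing monozygotic and dizygotic correlations. Falconer's classic back-of-envelope decomposition (a didactic shortcut, not the full ACE model fitting such a study would use; the correlations below are illustrative, loosely echoing the reported 41% to 66% trend) is:

```python
# Falconer's estimates from twin correlations:
#   h2 (additive genetic A), c2 (shared environment C), e2 (unique E).
def falconer(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

# Hypothetical correlations chosen to echo the reported age trend.
for age, r_mz, r_dz in [(9, 0.75, 0.55), (17, 0.80, 0.47)]:
    print(age, falconer(r_mz, r_dz))
```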

  13. On relating the generalized equivalent uniform dose formalism to the linear-quadratic model.

    PubMed

    Djajaputra, David; Wu, Qiuwen

    2006-12-01

Two main approaches are commonly used in the literature for computing the equivalent uniform dose (EUD) in radiotherapy. The first approach is based on the cell-survival curve as defined in the linear-quadratic model. The second approach assumes that EUD can be computed as the generalized mean of the dose distribution with an appropriate fitting parameter. We have analyzed the connection between these two formalisms by deriving explicit formulas for the EUD which are applicable to normal distributions. From these formulas we have established an explicit connection between the two formalisms. We found that the EUD parameter has strong dependence on the parameters that characterize the distribution, namely the mean dose and the standard deviation around the mean. By computing the corresponding parameters for clinical dose distributions, which in general do not follow the normal distribution, we have shown that our results are also applicable to actual dose distributions. Our analysis suggests that caution should be exercised when using the generalized EUD approach for reporting and analyzing dose distributions.
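The generalized-mean formalism referred to here is the standard gEUD formula, gEUD = (sum_i v_i * d_i^a)^(1/a) over dose-volume bins. A minimal sketch with hypothetical DVH bins:

```python
import numpy as np

def geud(dose, volume, a):
    """Generalized EUD: volume-weighted power mean of dose.
    a = 1 gives the mean dose; large a emphasizes hot spots."""
    v = np.asarray(volume) / np.sum(volume)
    return (np.sum(v * np.asarray(dose) ** a)) ** (1.0 / a)

dose = [60.0, 62.0, 58.0, 40.0]   # Gy, hypothetical DVH bins
vol = [0.4, 0.3, 0.2, 0.1]        # fractional volumes
print(geud(dose, vol, 1))         # equals the mean dose, 58.2 Gy
print(geud(dose, vol, 10))        # pulled toward the hottest bins
```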

  14. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
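    The ridge-regression estimator that the BLUP is shown to match penalizes the squared norm of the SNP coefficients. A single-predictor toy sketch of that shrinkage (illustrative only, not the authors' GLMM machinery); for one centered predictor the closed form is beta = sum(x*y) / (sum(x^2) + lambda):

```python
def ridge_1d(x, y, lam):
    """Closed-form ridge estimate for one centered predictor.

    lam = 0 recovers ordinary least squares; larger lam shrinks the
    coefficient toward zero, mirroring the random-effect (BLUP)
    treatment of coefficients described above.
    """
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)
```

    The penalty parameter plays the role of the ratio of residual to random-effect variance, which is why it can be estimated from the mixed-model fit rather than by cross-validation.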

  15. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation polices with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and the plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.

  16. Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models.

    PubMed

    Elliott, Michael R

    2009-03-01

    In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create "data driven" weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical.
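    Basic weight trimming, as described above, simply caps the inverse-probability weights at a maximum value before computing the weighted estimate. A minimal sketch of that baseline (the Bayesian model-averaging estimator proposed in the paper is considerably more involved):

```python
def trim_weights(weights, cap):
    """Cap survey weights at `cap`: reduces variance, may introduce bias."""
    return [min(w, cap) for w in weights]

def weighted_mean(y, weights):
    """Inverse-probability-weighted (Hajek) estimate of the population mean."""
    return sum(wi * yi for yi, wi in zip(y, weights)) / sum(weights)
```

    The bias-variance trade-off is visible directly: a lower cap pulls the estimate toward the unweighted mean, while no cap leaves the fully weighted, high-variance estimator.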

  17. Two-stage method of estimation for general linear growth curve models.

    PubMed

    Stukel, T A; Demidenko, E

    1997-06-01

    We extend the linear random-effects growth curve model (REGCM) (Laird and Ware, 1982, Biometrics 38, 963-974) to study the effects of population covariates on one or more characteristics of the growth curve when the characteristics are expressed as linear combinations of the growth curve parameters. This definition includes the actual growth curve parameters (the usual model) or any subset of these parameters. Such an analysis would be cumbersome using standard growth curve methods because it would require reparameterization of the original growth curve. We implement a two-stage method of estimation based on the two-stage growth curve model used to describe the response. The resulting generalized least squares (GLS) estimator for the population parameters is consistent, asymptotically efficient, and multivariate normal when the number of individuals is large. It is also robust to model misspecification in terms of bias and efficiency of the parameter estimates compared to maximum likelihood with the usual REGCM. We apply the method to a study of factors affecting the growth rate of salmonellae in a cubic growth model, a characteristic that cannot be analyzed easily using standard techniques.

  18. Towards downscaling precipitation for Senegal - An approach based on generalized linear models and weather types

    NASA Astrophysics Data System (ADS)

    Rust, H. W.; Vrac, M.; Lengaigne, M.; Sultan, B.

    2012-04-01

    Changes in precipitation patterns with potentially less precipitation and an increasing risk for droughts pose a threat to water resources and agricultural yields in Senegal. Precipitation in this region is dominated by the West African Monsoon, active from May to October, a seasonal pattern that showed inter-annual to decadal variability in the 20th century and is likely to be affected by climate change. We built a generalized linear model for a full spatial description of rainfall in Senegal. The model uses season, location, and a discrete set of weather types as predictors and yields a spatially continuous description of precipitation occurrences and intensities. Weather types have been defined from NCEP/NCAR reanalysis data using zonal and meridional winds, as well as relative humidity. This model is suitable for downscaling precipitation, particularly precipitation occurrences relevant for drought risk mapping.

  19. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
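    The classic row/column checksum scheme for fault-tolerant matrix multiplication (due to Huang and Abraham, which the linear codes above generalize) appends a column-sum row to A and a row-sum column to B; the product then carries checksums that can flag a faulty entry. A small sketch using exact small integers to sidestep the roundoff issues the abstract discusses:

```python
def matmul(A, B):
    """Plain dense matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def with_column_checksum(A):
    """Append a row holding the column sums of A."""
    return A + [[sum(col) for col in zip(*A)]]

def with_row_checksum(B):
    """Append to each row of B its row sum."""
    return [row + [sum(row)] for row in B]

def checksum_ok(C):
    """Verify a full-checksum product: the last row/column must equal the
    column/row sums of the data part (up to roundoff)."""
    data = [row[:-1] for row in C[:-1]]
    col_ok = all(abs(C[-1][j] - sum(r[j] for r in data)) < 1e-9
                 for j in range(len(data[0])))
    row_ok = all(abs(C[i][-1] - sum(data[i])) < 1e-9
                 for i in range(len(data)))
    return col_ok and row_ok
```

    Multiplying the encoded factors yields a full-checksum matrix whose invariants survive the multiplication; corrupting any single data entry breaks the property, which is the detection mechanism.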

  20. Analysis of linear two-dimensional general rate model for chromatographic columns of cylindrical geometry.

    PubMed

    Qamar, Shamsul; Uche, David U; Khan, Farman U; Seidel-Morgenstern, Andreas

    2017-05-05

    This work is concerned with the analytical solutions and moment analysis of a linear two-dimensional general rate model (2D-GRM) describing the transport of a solute through a chromatographic column of cylindrical geometry. Analytical solutions are derived through successive implementation of finite Hankel and Laplace transformations for two different sets of boundary conditions. The process is further analyzed by deriving analytical temporal moments from the Laplace domain solutions. Radial gradients, which are particularly important in the case of non-perfect injections, are typically neglected in liquid chromatography studies. Several test problems of single-solute transport are considered. The derived analytical results are validated against the numerical solutions of a high resolution finite volume scheme. The derived analytical results can play an important role in further development of liquid chromatography. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Generalization of the ordinary state-based peridynamic model for isotropic linear viscoelasticity

    NASA Astrophysics Data System (ADS)

    Delorme, Rolland; Tabiai, Ilyass; Laberge Lebel, Louis; Lévesque, Martin

    2017-02-01

    This paper presents a generalization of the original ordinary state-based peridynamic model for isotropic linear viscoelasticity. The viscoelastic material response is represented using the thermodynamically acceptable Prony series approach. It can feature as many Prony terms as required and accounts for viscoelastic spherical and deviatoric components. The model was derived from an equivalence between peridynamic viscoelastic parameters and those appearing in classical continuum mechanics, by equating the free energy densities expressed in both frameworks. The model was simplified to a uni-dimensional expression and implemented to simulate a creep-recovery test. This implementation was finally validated by comparing peridynamic predictions to those of classical continuum mechanics. An exact correspondence between peridynamics and the classical continuum approach was shown when the peridynamic horizon becomes small, confirming that peridynamics tends toward classical continuum mechanics in this limit. This work provides researchers dealing with viscoelastic phenomena with a clear and direct means of tackling their problems within the peridynamic framework.
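    A Prony-series relaxation modulus of the kind used above is a long-term modulus plus a sum of decaying exponentials. A minimal sketch (generic symbols, not the paper's peridynamic parameters):

```python
import math

def relaxation_modulus(t, e_inf, prony_terms):
    """Prony series: E(t) = E_inf + sum_i E_i * exp(-t / tau_i).

    prony_terms is a list of (E_i, tau_i) pairs; as many terms as
    needed can be included, as in the model described above.
    """
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in prony_terms)
```

    At t = 0 the modulus is the instantaneous value E_inf + sum(E_i); as t grows it relaxes monotonically toward the long-term modulus E_inf.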

  2. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    PubMed

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following the theoretical model, generalized linear models were estimated to explain differences in 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association to higher populated districts than GPs. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as population density. If regional disparities should be addressed by political actions, the focus should be to counteract those parameters representing physicians' preferences in over- and undersupplied regions.

  3. Constraining the general linear model for sensible hemodynamic response function waveforms.

    PubMed

    Ciftçi, Koray; Sankur, Bülent; Kahya, Yasemin P; Akin, Ata

    2008-08-01

    We propose a method for constrained parameter estimation and inference from neuroimaging data using the general linear model (GLM). The constrained approach prevents unrealistic hemodynamic response function (HRF) estimates from appearing in the outcome of the GLM analysis. The permissible ranges of waveform parameters were determined from the study of a repertoire of plausible waveforms. These parameter intervals played the role of prior distributions in the subsequent Bayesian analysis of the GLM, and Gibbs sampling was used to derive posterior distributions. The method was applied to artificial null data and near infrared spectroscopy (NIRS) data. The results show that constraining the GLM eliminates unrealistic HRF waveforms and decreases false activations, without affecting the inference for "realistic" activations, which satisfy the constraints.

  4. Compact tunable silicon photonic differential-equation solver for general linear time-invariant systems.

    PubMed

    Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai

    2014-10-20

    We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device, implemented as an add-drop microring resonator (MRR) with two tunable interferometric couplers, is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of solving an ODE with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.
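    The device targets first-order linear ODEs of the form dy/dt + a(t) y = b(t) x(t). For reference, a forward-Euler integration of the same equation (a purely numerical counterpart for checking results, not a model of the photonic device):

```python
def solve_first_order_lti(a, b, x, dt, n_steps, y0=0.0):
    """Forward-Euler solution of dy/dt + a(t)*y = b(t)*x(t).

    a, b, x are callables of time; the coefficients may vary in time,
    echoing the tunable-coefficient operation described above.
    """
    y, t, out = y0, 0.0, []
    for _ in range(n_steps):
        y += dt * (b(t) * x(t) - a(t) * y)
        t += dt
        out.append(y)
    return out
```

    With constant coefficients and a constant input, the solution relaxes to the steady state b/a, a quick sanity check for any such solver.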

  5. Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem

    NASA Technical Reports Server (NTRS)

    Lu, Huei-Iin; Robertson, Franklin R.

    1999-01-01

    A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can be alternatively solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.

  7. Structured sparse models for classification

    NASA Astrophysics Data System (ADS)

    Castrodad, Alexey

    The main focus of this thesis is the modeling and classification of high dimensional data using structured sparsity. Sparse models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. The success of sparse modeling is largely due to its ability to efficiently use the redundancy of the data and find its underlying structure. In a classification setting, we capitalize on this advantage to properly model and separate the structure of the classes. We design and validate modeling solutions to challenging problems arising in computer vision and remote sensing. We propose both supervised and unsupervised schemes for the modeling of human actions from motion imagery under a wide variety of acquisition conditions. In the supervised case, the main goal is to classify the human actions in the video given a predefined set of actions to learn from. In the unsupervised case, the main goal is to analyze the spatio-temporal dynamics of the individuals in the scene without having any prior information on the actions themselves. We also propose a model for remotely sensed hyperspectral imagery, where the main goal is to perform automatic spectral source separation and mapping at the subpixel level. Finally, we present a sparse model for sensor fusion to exploit the common structure and enforce collaboration of hyperspectral with LiDAR data for better mapping capabilities. In all these scenarios, we demonstrate that these data can be expressed as a combination of atoms from a class-structured dictionary. This data representation becomes essentially a "mixture of classes," and by directly exploiting the sparse codes, one can attain highly accurate classification performance with relatively unsophisticated classifiers.
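    The "mixture of classes" idea can be illustrated with the simplest possible structured dictionary: one unit-norm atom per class, with a test sample assigned to the class whose atom reconstructs it with the smallest residual. A toy sketch (real class dictionaries hold many atoms each, with sparse codes computed by l1 solvers):

```python
import math

def classify_by_residual(x, class_atoms):
    """Assign x to the class whose unit-norm atom leaves the smallest
    reconstruction residual ||x - c * atom||, with c = <atom, x>."""
    best_label, best_res = None, float("inf")
    for label, atom in class_atoms.items():
        c = sum(a * v for a, v in zip(atom, x))  # least-squares coefficient
        res = math.sqrt(sum((v - c * a) ** 2 for v, a in zip(x, atom)))
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

    The classifier itself is deliberately unsophisticated; the discriminative power comes from how well each class's atoms span its own samples and not the others'.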

  8. Dose-shaping using targeted sparse optimization

    SciTech Connect

    Sayre, George A.; Ruan, Dan

    2013-07-15

    Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present their findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of the method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. 
As a result, the proposed objective E_tot

  9. Dose-shaping using targeted sparse optimization.

    PubMed

    Sayre, George A; Ruan, Dan

    2013-07-01

    Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present their findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of the method. In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. 
As a result, the proposed objective E_tot^sparse improves tradeoff between

  10. Evolving sparse stellar populations

    NASA Astrophysics Data System (ADS)

    Bruzual, Gustavo; Gladis Magris, C.; Hernández-Pérez, Fabiola

    2017-03-01

    We examine the role that stochastic fluctuations in the IMF and in the number of interacting binaries have on the spectro-photometric properties of sparse stellar populations as a function of age and metallicity.

  11. Multichannel sparse spike inversion

    NASA Astrophysics Data System (ADS)

    Pereg, Deborah; Cohen, Israel; Vassiliou, Anthony A.

    2017-10-01

    In this paper, we address the problem of sparse multichannel seismic deconvolution. We introduce multichannel sparse spike inversion as an iterative procedure, which deconvolves the seismic data and recovers the Earth's two-dimensional reflectivity image, while taking into consideration the relations between spatially neighboring traces. We demonstrate the improved performance of the proposed algorithm and its robustness to noise, compared to a competitive single-channel algorithm, through simulations and real seismic data examples.

  12. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.

  13. Sparse recovery via convex optimization

    NASA Astrophysics Data System (ADS)

    Randall, Paige Alicia

    This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises frequently in many problems ranging from MRI imaging to genomics to compressed sensing. We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors, as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message. We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of sqrt{s}, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as O(1/log n) and still allows sparsities as large as O(m/log n). This is in contrast to other existing results involving coherence where the coherence can only be as large as O(1/sqrt{m}) to allow sparsities as large as O(sqrt{m}). We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition. We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property. Finally, we introduce the method of l_1 analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary. We require that the combined measurement/dictionary matrix satisfies a uniform uncertainty principle and we compare our results with the more standard l_1 synthesis approach. All our methods involve solving an l_1 minimization
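    The l_1 minimization problems described above are typically handed to convex solvers; one of the simplest is iterative soft-thresholding (ISTA) for the lasso form min_x 0.5*||Ax - y||^2 + lam*||x||_1. A small dense-matrix sketch (illustrative, not the thesis's solver):

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, y, lam, step, n_iter=100):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.

    `step` must not exceed 1 / ||A^T A|| for convergence.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]  # A x - y
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # A^T (A x - y)
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

    With an orthonormal A the iteration converges to the componentwise soft-thresholded solution, which is the textbook sanity check for such a solver.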

  14. Sparse Regression by Projection and Sparse Discriminant Analysis.

    PubMed

    Qi, Xin; Luo, Ruiyan; Carroll, Raymond J; Zhao, Hongyu

    2015-04-01

    Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared to the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplemental materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  15. Sparse Regression by Projection and Sparse Discriminant Analysis

    PubMed Central

    Qi, Xin; Luo, Ruiyan; Carroll, Raymond J.; Zhao, Hongyu

    2014-01-01

    Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared to the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplemental materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided. PMID:26345204

  16. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2).
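    For reference, the Hosmer-Lemeshow statistic discussed above groups observations by fitted probability and compares observed with expected counts within each group. A minimal sketch assuming equal-size groups (real implementations handle ties and decile cut points more carefully):

```python
def hosmer_lemeshow(y, p, n_groups=10):
    """HL statistic: sum over groups of (O - E)^2 / (n * pbar * (1 - pbar)),
    with observations sorted into equal-size groups by fitted probability."""
    pairs = sorted(zip(p, y))
    size = len(pairs) // n_groups
    stat = 0.0
    for g in range(n_groups):
        end = (g + 1) * size if g < n_groups - 1 else len(pairs)
        chunk = pairs[g * size : end]
        n = len(chunk)
        obs = sum(yi for _, yi in chunk)       # observed events in group
        exp = sum(pi for pi, _ in chunk)       # expected events in group
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return stat
```

    A perfectly calibrated model yields a statistic near zero; under the null it is compared against a chi-square distribution with n_groups - 2 degrees of freedom.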

  17. General linear response formula for non integrable systems obeying the Vlasov equation

    NASA Astrophysics Data System (ADS)

    Patelli, Aurelio; Ruffo, Stefano

    2014-11-01

    Long-range interacting N-particle systems get trapped into long-living out-of-equilibrium stationary states called quasi-stationary states (QSS). We study here the response to a small external perturbation when such systems are settled into a QSS. In the N → ∞ limit the system is described by the Vlasov equation and QSS are mapped into stable stationary solutions of such equation. We consider this problem in the context of a model that has recently attracted considerable attention, the Hamiltonian mean field (HMF) model. For such a model, stationary inhomogeneous and homogeneous states determine an integrable dynamics in the mean-field effective potential and an action-angle transformation allows one to derive an exact linear response formula. However, such a result would be of limited interest if restricted to the integrable case. In this paper, we show how to derive a general linear response formula which does not use integrability as a requirement. The presence of conservation laws (mass, energy, momentum, etc.) and of further Casimir invariants can be imposed a posteriori. We perform an analysis of the infinite time asymptotics of the response formula for a specific observable, the magnetization in the HMF model, as a result of the application of an external magnetic field, for two stationary stable distributions: the Boltzmann-Gibbs equilibrium distribution and the Fermi-Dirac one. When compared with numerical simulations the predictions of the theory are very good away from the transition energy from inhomogeneous to homogeneous states. Contribution to the Topical Issue "Theory and Applications of the Vlasov Equation", edited by Francesco Pegoraro, Francesco Califano, Giovanni Manfredi and Philip J. Morrison.

  18. A simulation study of confounding in generalized linear models for air pollution epidemiology.

    PubMed Central

    Chen, C; Chock, D P; Winkler, S L

    1999-01-01

    Confounding between the model covariates and causal variables (which may or may not be included as model covariates) is a well-known problem in regression models used in air pollution epidemiology. This problem is usually acknowledged but hardly ever investigated, especially in the context of generalized linear models. Using synthetic data sets, the present study shows how model overfit, underfit, and misfit in the presence of correlated causal variables in a Poisson regression model affect the estimated coefficients of the covariates and their confidence levels. The study also shows how this effect changes with the ranges of the covariates and the sample size. There is qualitative agreement between these study results and the corresponding expressions in the large-sample limit for ordinary linear models. Confounding of covariates in an overfitted model (with covariates encompassing more than just the causal variables) does not bias the estimated coefficients but reduces their significance. The effect of model underfit (with some causal variables excluded as covariates) or misfit (with covariates encompassing only noncausal variables), on the other hand, leads not only to erroneous estimated coefficients, but to a misguided confidence, represented by large t-values, that the estimated coefficients are significant. The results of this study indicate that models which use only one or two air quality variables, such as particulate matter ≤10 μm and sulfur dioxide, are probably unreliable, and that models containing several correlated and toxic or potentially toxic air quality variables should also be investigated in order to minimize the situation of model underfit or misfit. PMID:10064552
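
    The underfit/misfit phenomenon described above is easy to reproduce: simulate a Poisson outcome driven by one causal variable, then fit models that include or exclude it. A minimal sketch (the coefficients and correlation structure are invented for the example; the study's actual design is richer):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(size=n)                    # the causal variable
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # correlated, non-causal covariate
y = rng.poisson(np.exp(0.5 + 0.4 * x1))    # Poisson rate depends on x1 only

def fit_poisson(Xcols):
    """Poisson regression (log link) by maximum likelihood."""
    X = np.column_stack([np.ones(n), Xcols])
    nll = lambda b: np.sum(np.exp(X @ b)) - y @ (X @ b)
    return minimize(nll, np.zeros(X.shape[1])).x

b_over = fit_poisson(np.column_stack([x1, x2]))  # overfit: both covariates
b_mis = fit_poisson(x2)                          # misfit: causal x1 excluded
print(b_over[1:], b_mis[1])
```

    In the overfitted model the x2 coefficient sits near zero (unbiased but less precise), while in the misfit model x2 picks up a substantial spurious coefficient, exactly the misguided-confidence effect the abstract describes.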

  19. Assessing erectile neurogenic dysfunction from heart rate variability through a Generalized Linear Mixed Model framework.

    PubMed

    Fernández, Elmer Andrés; Souza Neto, E P; Abry, P; Macchiavelli, R; Balzarini, M; Cuzin, B; Baude, C; Frutoso, J; Gharib, C

    2010-07-01

    The low (LF) vs. high (HF) frequency energy ratio, computed from the spectral decomposition of heart beat intervals, has become a major tool in cardiac autonomic system control and sympatho-vagal balance studies. The (statistical) distributions of response variables designed from ratios of two quantities, such as the LF/HF ratio, are likely to be non-normal, hence precluding, e.g., a relevant use of the t-test. Even using a non-parametric formulation, the solution may not be appropriate, as the test statistics do not account for correlation and heteroskedasticity, such as those that can be observed when several measures are taken from the same patient. The analyses for such data require statistical models which do not assume independence a priori. In this spirit, the present contribution proposes the use of the Generalized Linear Mixed Models (GLMMs) framework to assess differences between groups of measures performed over classes of patients. Statistical linear mixed models allow the inclusion of at least one random effect, besides the error term, which induces correlation between observations from the same subject. Moreover, by using GLMM, practitioners can assume any probability distribution within the exponential family for the data, and naturally model heteroskedasticity. Here, the sympatho-vagal balance expressed as the LF/HF ratio of patients suffering neurogenic erectile dysfunction under three different body positions was analyzed in a case-control protocol by means of a GLMM under gamma and Gaussian distributed response assumptions. The gamma GLMM was compared with the normal linear mixed model (LMM) approach conducted using raw and log-transformed data. Both the gamma GLMM on raw data and the LMM on log-transformed data allow better inference for factor effects, including correlations between observations from the same patient under different body positions, compared to the LMM on raw data. The gamma GLMM provides a more natural distribution assumption.

  20. Sparse extreme learning machine for classification.

    PubMed

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M Brandon

    2014-10-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least square support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which is solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM.
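
    To fix ideas, here is a minimal dense (unified) ELM of the kind the paper takes as its starting point: random, data-independent hidden weights followed by a regularized least-squares output layer. The sparse ELM itself replaces this linear solve with a support-vector-style quadratic program, which is not reproduced here; the data and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary classification data: two Gaussian blobs, labels in {-1, +1}.
n = 400
X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
               rng.normal(+1.5, 1.0, (n // 2, 2))])
t = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

# Hidden layer: weights drawn at random, independently of the training data.
L = 50                                   # number of hidden nodes
W = rng.normal(size=(2, L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                   # random feature mapping

# Unified (dense) ELM output weights: regularized least squares,
# beta = (H'H + I/C)^(-1) H' t.
C = 10.0
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ t)

acc = (np.sign(H @ beta) == t).mean()
print(acc)
```

    Because every training point contributes to beta, the resulting model is dense; the paper's sparse ELM keeps only a subset of support vectors, which is what reduces storage and testing time.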

  1. Sparse Extreme Learning Machine for Classification

    PubMed Central

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M. Brandon

    2016-01-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least square support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which is solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM. PMID:25222727

  2. Sparse matrix-vector multiplication on a reconfigurable supercomputer

    SciTech Connect

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M; Connor, Carolyn M; Poole, Steve

    2008-01-01

    Double precision floating point Sparse Matrix-Vector Multiplication (SMVM) is a critical computational kernel used in iterative solvers for systems of sparse linear equations. The poor data locality exhibited by sparse matrices along with the high memory bandwidth requirements of SMVM result in poor performance on general purpose processors. Field Programmable Gate Arrays (FPGAs) offer a possible alternative with their customizable and application-targeted memory sub-system and processing elements. In this work we investigate two separate implementations of the SMVM on an SRC-6 MAPStation workstation. The first implementation investigates the peak performance capability, while the second implementation balances the amount of instantiated logic with the available sustained bandwidth of the FPGA subsystem. Both implementations yield the same sustained performance, with the second producing a much more efficient solution. The metrics of processor and application balance are introduced to help provide some insight into the efficiencies of the FPGA and CPU based solutions, explicitly showing the tight coupling of the available bandwidth to peak floating point performance. Due to the FPGA's ability to balance the amount of implemented logic to the available memory bandwidth, it can provide a much more efficient solution. Finally, making use of the lessons learned implementing the SMVM, we present a fully implemented nonpreconditioned Conjugate Gradient algorithm utilizing the second SMVM design.
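
    The poor data locality the abstract refers to is easiest to see in the SMVM kernel itself. Below is a minimal reference implementation over the standard CSR (compressed sparse row) storage; the 3x3 example matrix is invented, and the indirect access x[indices[k]] is the irregular memory pattern that hardware implementations must stream and pipeline.

```python
import numpy as np

def csr_spmv(data, indices, indptr, x):
    """y = A @ x for a matrix stored in CSR (compressed sparse row) form."""
    nrows = len(indptr) - 1
    y = np.zeros(nrows)
    for i in range(nrows):                    # one sparse dot product per row
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]   # indirect, cache-unfriendly access
    return y

# 3x3 example: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
data = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(csr_spmv(data, indices, indptr, x))     # [3. 3. 9.]
```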

  3. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
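
    LINEAR's core operation, numerically extracting state and control matrices from nonlinear equations of motion, can be sketched with central finite differences. The pendulum model below is an invented stand-in for a user-supplied nonlinear aerodynamic model; the report's program of course handles far richer state and observation definitions.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Numerically linearize xdot = f(x, u) about (x0, u0); returns (A, B)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):                       # state Jacobian, column by column
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                       # control Jacobian
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Example: pendulum with torque input, xdot = [x2, -sin(x1) + u].
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))
print(A)   # [[0, 1], [-1, 0]]
print(B)   # [[0], [1]]
```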

  4. Efficient analysis of Q-level nested hierarchical general linear models given ignorable missing data.

    PubMed

    Shin, Yongyun; Raudenbush, Stephen W

    2013-09-28

    This article extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth-step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children.

  5. Predicting estuarine use patterns of juvenile fish with Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    Vasconcelos, R. P.; Le Pape, O.; Costa, M. J.; Cabral, H. N.

    2013-03-01

    Statistical models are key for estimating fish distributions based on environmental variables, and validation is generally advocated as indispensable but seldom applied. Generalized Linear Models were applied to distributions of juvenile Solea solea, Solea senegalensis, Platichthys flesus and Dicentrarchus labrax in response to environmental variables throughout Portuguese estuaries. Species-specific Delta models with two sub-models were used: Binomial (presence/absence); Gamma (density when present). Models were fitted and tested on separate data sets to estimate the accuracy and robustness of predictions. Temperature, salinity and mud content in sediment were included in most models for presence/absence; salinity and depth in most models for density (when present). In Binomial models (presence/absence), goodness-of-fit, accuracy and robustness varied concurrently among species, and fair to high accuracy and robustness were attained for all species, in models with poor to high goodness-of-fit. But in Gamma models (density when present), goodness-of-fit was not indicative of accuracy and robustness. Only for Platichthys flesus were Gamma and also coupled Delta models (density) accurate and robust, despite some moderate bias and inconsistency in predicted density. The accuracy and robustness of final density estimations were defined by the accuracy and robustness of the estimations of presence/absence and density (when present) provided by the sub-models. The mismatches between goodness-of-fit, accuracy and robustness of positive density models, as well as the difference in performance of presence/absence and density models demonstrated the importance of validation procedures in the evaluation of the value of habitat suitability models as predictive tools.
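
    The species-specific Delta model described above, a Binomial presence/absence sub-model coupled with a Gamma density-when-present sub-model, can be sketched on synthetic data. A single standardized covariate stands in for salinity, the true coefficients are invented, and plain maximum likelihood via SciPy is used rather than the authors' fitting and validation procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(3)
n = 3000
x = rng.normal(size=n)                       # standardized covariate (e.g., salinity)
X = np.column_stack([np.ones(n), x])

p_true = expit(0.0 + 0.8 * x)                # presence probability
present = rng.binomial(1, p_true).astype(bool)
mu_true = np.exp(1.0 + 0.5 * x)              # mean density when present
k = 2.0                                      # gamma shape (dispersion)
dens = np.where(present, rng.gamma(k, mu_true / k), 0.0)

# Sub-model 1: Binomial (presence/absence), logistic regression by MLE.
def nll_bin(b):
    eta = X @ b
    return np.sum(np.log1p(np.exp(eta)) - present * eta)
b_bin = minimize(nll_bin, np.zeros(2)).x

# Sub-model 2: Gamma with log link, fit on the positive densities only.
Xp, dp = X[present], dens[present]
def nll_gam(th):
    b, kk = th[:2], np.exp(th[2])
    m = np.exp(Xp @ b)
    return -np.sum(kk * np.log(kk / m) + (kk - 1) * np.log(dp)
                   - kk * dp / m - gammaln(kk))
b_gam = minimize(nll_gam, np.zeros(3)).x

# Coupled Delta prediction: E[density] = P(present) * E[density | present].
pred = expit(X @ b_bin) * np.exp(X @ b_gam[:2])
print(b_bin[1], b_gam[1])
```

    The final density estimate inherits error from both sub-models, which is why the paper stresses validating presence/absence and positive-density components separately.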

  6. The overlooked potential of Generalized Linear Models in astronomy-II: Gamma regression and photometric redshifts

    NASA Astrophysics Data System (ADS)

    Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.

    2015-04-01

    Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.

  7. Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation

    NASA Technical Reports Server (NTRS)

    Moore, T. E.; Khazanov, G. V.

    2011-01-01

    Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that supersonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].
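
    For reference, the classical Jeans flux that this approach generalizes is the standard exobase result; in the generalization described above only the potential energy entering the escape parameter changes (gravitational plus ambipolar electrostatic plus centrifugal contributions), not the form of the flux:

```latex
\Phi_J = \frac{n_c\,U}{2\sqrt{\pi}}\,\bigl(1+\lambda_c\bigr)\,e^{-\lambda_c},
\qquad
U = \sqrt{\frac{2 k_B T}{m}},
\qquad
\lambda_c = \frac{G M m}{k_B T\, r_c},
```

    where n_c is the exobase density, U the most probable thermal speed, and the escape parameter lambda_c is the ratio of the potential energy binding a particle of mass m at the exobase radius r_c to its thermal energy; for ions the numerator is augmented by the ambipolar (qPhi_A) and centrifugal terms.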

  8. Efficient Analysis of Q-Level Nested Hierarchical General Linear Models Given Ignorable Missing Data

    PubMed Central

    Shin, Yongyun; Raudenbush, Stephen W.

    2014-01-01

    This paper extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed; and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model, and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children. PMID:24077621

  9. Three-photon circular dichroism: towards a generalization of chiroptical non-linear light absorption.

    PubMed

    Friese, Daniel H; Ruud, Kenneth

    2016-02-07

    We present the theory of three-photon circular dichroism (3PCD), a novel non-linear chiroptical property not yet described in the literature. We derive the observable absorption cross section including the orientational average of the necessary seventh-rank tensors and provide origin-independent expressions for 3PCD using either a velocity-gauge treatment of the electric dipole operator or a length-gauge formulation using London atomic orbitals. We present the first numerical results for hydrogen peroxide, 3-methylcyclopentanone (MCP) and 4-helicene, including also a study of the origin dependence and basis set convergence of 3PCD. We find that for the 3PCD-brightest low-lying Rydberg state of hydrogen peroxide, the dichroism is extremely basis set dependent, with basis set convergence not being reached before a sextuple-zeta basis is used, whereas for the MCP and 4-helicene molecules, the basis set dependence is more moderate and at the triple-zeta level the 3PCD contributions are more or less converged irrespective of whether the considered states are Rydberg states or not. The 3PCD-brightest states in MCP exhibit a fairly large charge-transfer character from the carbonyl group to the ring system. In general, the quadrupole contributions to 3PCD are found to be very small.

  10. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
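
    The expected log-likelihood idea can be made concrete for a Poisson GLM with canonical log link: the covariate-dependent sum over exp(x_i'theta) is replaced by n times its expectation, which is available in closed form when the covariates are Gaussian. A minimal sketch on synthetic data; the closed-form expectation assumes x ~ N(0, Sigma) with Sigma known, as when the experimenter controls the covariate distribution.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, d = 5000, 3
Sigma = np.eye(d)                     # covariate covariance, known by design
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
theta_true = np.array([0.3, -0.2, 0.1])
y = rng.poisson(np.exp(X @ theta_true))

# Exact negative log-likelihood (dropping the theta-free log y! term).
nll = lambda th: np.sum(np.exp(X @ th)) - y @ (X @ th)

# Expected log-likelihood: sum_i exp(x_i' th) is replaced by its expectation,
# n * E[exp(x' th)] = n * exp(th' Sigma th / 2) for x ~ N(0, Sigma).
enll = lambda th: n * np.exp(th @ Sigma @ th / 2.0) - y @ (X @ th)

th_mle = minimize(nll, np.zeros(d)).x
th_el = minimize(enll, np.zeros(d)).x
print(th_mle, th_el)
```

    The expected version never touches the n-by-d design matrix inside the exponential term, which is the source of the computational savings the paper reports at scale.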

  11. Assessment of cross-frequency coupling with confidence using generalized linear models

    PubMed Central

    Kramer, M. A.; Eden, U. T.

    2013-01-01

    Background: Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact, and the function of these interactions, remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low frequency rhythm and the amplitude envelope of a high frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist.

    New Method: Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework.

    Results: We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling.

    Comparison with Existing Methods: Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling.

    Conclusions: The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC. PMID:24012829
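
    The core regression behind phase-amplitude CFC assessment can be sketched as follows: regress the high-frequency amplitude envelope on sinusoidal functions of the low-frequency phase. This is a simplified stand-in, a single sinusoidal basis with least squares on the log-amplitude, not the paper's spline-based GLM with confidence bounds, and the coupling strength and data are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000
phase = rng.uniform(-np.pi, np.pi, n)      # phase of the low-frequency rhythm
# Amplitude envelope of the fast rhythm, modulated by the slow phase (CFC).
amp = np.exp(1.0 + 0.5 * np.cos(phase)) * rng.gamma(20.0, 1.0 / 20.0, n)

# Regress log-amplitude on the design [1, cos(phase), sin(phase)].
Z = np.column_stack([np.ones(n), np.cos(phase), np.sin(phase)])
coef, *_ = np.linalg.lstsq(Z, np.log(amp), rcond=None)

# Coupling strength: length of the (cos, sin) coefficient vector;
# near zero in the absence of phase-amplitude coupling.
strength = np.hypot(coef[1], coef[2])
print(coef[1], strength)
```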

  12. Projecting nuisance flooding in a warming climate using generalized linear models and Gaussian processes

    NASA Astrophysics Data System (ADS)

    Vandenberg-Rodes, Alexander; Moftakhari, Hamed R.; AghaKouchak, Amir; Shahbaba, Babak; Sanders, Brett F.; Matthew, Richard A.

    2016-11-01

    Nuisance flooding corresponds to minor and frequent flood events that have significant socioeconomic and public health impacts on coastal communities. Yearly averaged local mean sea level can be used as a proxy to statistically predict the impacts of sea level rise (SLR) on the frequency of nuisance floods (NFs). In this study, we use generalized linear models (GLMs) combined with Gaussian process (GP) models to (i) estimate the frequency of NF associated with the change in mean sea level, and (ii) quantify the associated uncertainties via a novel and statistically robust approach. We calibrate our models to the water level data from 18 tide gauges along the coasts of the United States, and after validation, we estimate the frequency of NF associated with the SLR projections in year 2030 (under RCPs 2.6 and 8.5), along with their 90% bands, at each gauge. The historical NF-SLR data are very noisy and show large changes in variability (heteroscedasticity) with SLR. Prior models in the literature do not properly account for the observed heteroscedasticity, and thus their projected uncertainties are highly suspect. Among the models used in this study, the Negative Binomial GLM with GP best characterizes the uncertainties associated with NF estimates; on validation data ≈93% of the points fall within the 90% credible limit, showing our approach to be a robust model for uncertainty quantification.

  13. The overlooked potential of Generalized Linear Models in astronomy, I: Binomial regression

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Cameron, E.; Killedar, M.; Hilbe, J.; Vilalta, R.; Maio, U.; Biffi, V.; Ciardi, B.; Riggs, J. D.

    2015-09-01

    Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile the complexity of scientific enquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression, the so-called Generalized Linear Models (GLMs). In this paper, the first in a series aimed at illustrating the power of these methods in astronomical applications, we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark mini-halo with metallicity ≈ 1.3 × 10⁻⁴ Z⊙, an increase of 1.2 × 10⁻² in the gas molecular fraction increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks.

  14. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese monitoring air quality station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance for modelled results against the measured data was achieved for the model with values of air temperature above 25°C, compared with the model considering all ranges of air temperature and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when these data are not available from air quality monitoring stations or other acquisition means.

  15. Establishment of a new initial dose plan for vancomycin using the generalized linear mixed model.

    PubMed

    Kourogi, Yasuyuki; Ogata, Kenji; Takamura, Norito; Tokunaga, Jin; Setoguchi, Nao; Kai, Mitsuhiro; Tanaka, Emi; Chiyotanda, Susumu

    2017-04-08

    When administering vancomycin hydrochloride (VCM), the initial dose is adjusted to ensure that the steady-state trough value (Css-trough) remains within the effective concentration range. However, the Css-trough (population mean method predicted value [PMMPV]) calculated using the population mean method (PMM) often deviates from the effective concentration range. In this study, we used the generalized linear mixed model (GLMM) for initial dose planning to create a model that accurately predicts Css-trough, and subsequently assessed its prediction accuracy. The study included 46 subjects whose trough values were measured after receiving VCM. We calculated the Css-trough (Bayesian estimate predicted value [BEPV]) from the Bayesian estimates of trough values. Using the patients' medical data, we created models that predict the BEPV and selected the model with minimum information criterion (GLMM best model). We then calculated the Css-trough (GLMMPV) from the GLMM best model and compared the BEPV correlation with GLMMPV and with PMMPV. The GLMM best model was {[0.977 + (males: 0.029 or females: -0.081)] × PMMPV + 0.101 × BUN/adjusted SCr - 12.899 × SCr adjusted amount}. The coefficients of determination for BEPV/GLMMPV and BEPV/PMMPV were 0.623 and 0.513, respectively. We demonstrated that the GLMM best model was more accurate in predicting the Css-trough than the PMM.
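
    The reported best model is a simple linear formula and can be transcribed directly. The function below is only a transcription sketch: the argument names are assumptions, and the abstract does not define the units or the precise meaning of the "SCr adjusted amount" term.

```python
def glmm_css_trough(pmmpv, is_male, bun, adjusted_scr, scr_adjustment):
    """Css-trough predicted by the abstract's reported GLMM best model.
    Argument names and interpretations are assumptions from the abstract;
    consult the paper for units and the 'SCr adjusted amount' definition."""
    sex_term = 0.029 if is_male else -0.081
    return (0.977 + sex_term) * pmmpv + 0.101 * bun / adjusted_scr - 12.899 * scr_adjustment
```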

  16. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
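
    The flexibility across dispersion regimes that the article evaluates can be seen directly from the COM-Poisson pmf. The sketch below normalizes by a truncated series (an assumption made for illustration; the paper's MLE implementation is not shown here): ν = 1 recovers the ordinary Poisson, while ν < 1 and ν > 1 give over- and underdispersion.

```python
from math import exp, lgamma, log

def com_poisson_pmf(y, lam, nu, max_k=200):
    """COM-Poisson pmf P(Y = y) proportional to lam**y / (y!)**nu,
    normalized by a truncated series (max_k terms) in log space."""
    log_weights = [k * log(lam) - nu * lgamma(k + 1) for k in range(max_k)]
    m = max(log_weights)                       # log-sum-exp for stability
    log_z = m + log(sum(exp(w - m) for w in log_weights))
    return exp(y * log(lam) - nu * lgamma(y + 1) - log_z)
```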

  17. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering. Process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationship among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: the first (training) set contained 25 conforming coils and the second contained 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical. In most applications, the dependent variable is binary. The results show that the logistic generalized linear models do provide good estimates of coil quality and can be useful for quality control in the manufacturing process.

  18. A generalized linear model for peak calling in ChIP-Seq data.

    PubMed

    Xu, Jialin; Zhang, Yu

    2012-06-01

    Chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) has become a routine for detecting genome-wide protein-DNA interaction. The success of ChIP-Seq data analysis highly depends on the quality of peak calling (i.e., to detect peaks of tag counts at a genomic location and evaluate if the peak corresponds to a real protein-DNA interaction event). The challenges in peak calling include (1) how to combine the forward and the reverse strand tag data to improve the power of peak calling and (2) how to account for the variation of tag data observed across different genomic locations. We introduce a new peak calling method based on the generalized linear model (GLMNB) that utilizes a negative binomial distribution to model the tag count data and account for the variation of background tags that may randomly bind to the DNA sequence at varying levels due to local genomic structures and sequence contents. We allow local shifting of peaks observed on the forward and the reverse strands, such that at each potential binding site, a binding profile representing the pattern of a real peak signal is fitted to best explain the observed tag data with maximum likelihood. Our method can also detect multiple peaks within a local region if there are multiple binding sites in the region.
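
    The reason a negative binomial model suits variable background tag counts is its mean-variance relation: with mean mu and size r, Var(Y) = mu + mu²/r, strictly exceeding the Poisson variance. A minimal log-pmf in this parameterization (illustrative, not the paper's implementation):

```python
from math import exp, lgamma, log

def nb_logpmf(y, mu, r):
    """Negative binomial log-pmf in mean/size form. Var(Y) = mu + mu**2 / r,
    so a finite r captures extra-Poisson variation in background tag counts."""
    return (lgamma(y + r) - lgamma(r) - lgamma(y + 1)
            + r * log(r / (r + mu)) + y * log(mu / (r + mu)))
```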

  19. Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States

    PubMed Central

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam

    2010-01-01

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (lowering the Mean Square Error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. PMID:20359500

  20. Generalized linear discriminant analysis: a unified framework and efficient model selection.

    PubMed

    Ji, Shuiwang; Ye, Jieping

    2008-10-01

    High-dimensional data are common in many domains, and dimensionality reduction is the key to cope with the curse-of-dimensionality. Linear discriminant analysis (LDA) is a well-known method for supervised dimensionality reduction. When dealing with high-dimensional and low sample size data, classical LDA suffers from the singularity problem. Over the years, many algorithms have been developed to overcome this problem, and they have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships. Based on the proposed framework, we show that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently. We conduct extensive experiments using a collection of high-dimensional data sets, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms.
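
    One standard member of the family of algorithms this paper unifies replaces the singular within-class scatter with a ridge-regularized version. The two-class sketch below is illustrative (the regularization constant `eps` is an assumption, and the paper covers a much broader framework):

```python
import numpy as np

def rlda_direction(X0, X1, eps=1e-3):
    """Two-class LDA with a ridge-regularized within-class scatter matrix,
    one standard remedy for the singularity problem when the dimension
    exceeds the sample size."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    Sw += eps * np.eye(Sw.shape[0])            # restores invertibility
    return np.linalg.solve(Sw, mu1 - mu0)      # discriminant direction
```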

  1. Profile local linear estimation of generalized semiparametric regression model for longitudinal data

    PubMed Central

    Sun, Liuquan; Zhou, Jie

    2013-01-01

    This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example. PMID:23471814

  2. Fast inference in generalized linear models via expected log-likelihoods.

    PubMed

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
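
    The paper's core idea can be sketched for the canonical Poisson GLM: the exact log-likelihood sums exp(x·θ) over all observations, while the expected log-likelihood replaces that sum with n times an expectation over the covariate distribution. The example below assumes standard normal covariates, for which the expectation has the closed form exp(‖θ‖²/2); it is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def exact_ll(theta, X, y):
    """Exact Poisson GLM log-likelihood (canonical log link, constants dropped)."""
    eta = X @ theta
    return y @ eta - np.exp(eta).sum()

def expected_ll(theta, X, y):
    """Expected log-likelihood: the covariate sum is replaced by n times
    E[exp(x . theta)], which for x ~ N(0, I) equals exp(||theta||^2 / 2)
    in closed form -- no pass over the design matrix for this term."""
    return y @ (X @ theta) - len(X) * np.exp(theta @ theta / 2.0)
```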

  3. Population decoding of motor cortical activity using a generalized linear model with hidden states.

    PubMed

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam

    2010-06-15

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications.

  4. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with the sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  5. Grassmannian sparse representations

    NASA Astrophysics Data System (ADS)

    Azary, Sherif; Savakis, Andreas

    2015-05-01

    We present Grassmannian sparse representations (GSR), a sparse representation Grassmann learning framework for efficient classification. Sparse representation classification offers a powerful approach for recognition in a variety of contexts. However, a major drawback of sparse representation methods is their computational performance and memory utilization for high-dimensional data. A Grassmann manifold is a space that promotes smooth surfaces where points represent subspaces and the relationship between points is defined by the mapping of an orthogonal matrix. Grassmann manifolds are well suited for computer vision problems because they promote high between-class discrimination and within-class clustering, while offering computational advantages by mapping each subspace onto a single point. The GSR framework combines Grassmannian kernels and sparse representations, including regularized least squares and least angle regression, to improve high accuracy recognition while overcoming the drawbacks of performance and dependencies on high dimensional data distributions. The effectiveness of GSR is demonstrated on computationally intensive multiview action sequences, three-dimensional action sequences, and face recognition datasets.

  6. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
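
    The associative, partial-match behavior described here can be captured in a toy Kanerva-style sketch: random hard locations activate when an address falls within a Hamming radius, writes accumulate into counters, and reads threshold the counter sums. All parameters below are illustrative assumptions, not the project's design values.

```python
import numpy as np

class SDM:
    """Toy sparse distributed memory: hard locations activate when within a
    Hamming radius of the address; writes add +/-1 to per-bit counters and
    reads recover bits from the sign of the summed counters."""
    def __init__(self, n_locations=2000, dim=256, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, addr):
        # Boolean mask of hard locations within the Hamming radius of addr
        return np.count_nonzero(self.addresses != addr, axis=1) <= self.radius

    def write(self, addr, data):
        self.counters[self._active(addr)] += 2 * data - 1   # {0,1} -> {-1,+1}

    def read(self, addr):
        s = self.counters[self._active(addr)].sum(axis=0)
        return (s > 0).astype(int)
```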

  7. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
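
    Of the two core problems discussed, coefficients solving is the easier to sketch. Orthogonal matching pursuit is one standard greedy solver (shown for illustration; the paper's own choice of solver may differ): pick the atom most correlated with the residual, refit on the selected support, repeat.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse coding of x over the
    dictionary D, so that x is approximated by D @ code with at most
    n_nonzero active atoms."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code
```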

  8. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    SciTech Connect

    Yock, Adam D. Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-05-15

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography data.

  9. Dictionary learning method for joint sparse representation-based image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. The SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. The JSR indicates that different signals from the various sensors of the same scene form an ensemble. These signals have a common sparse component and each individual signal owns an innovation sparse component. The JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), for JSR, we propose a novel dictionary learning method (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure one time with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm which is often used in previous JSR-based fusion algorithms. To capture the image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries. MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Some experiments are given to demonstrate the validity of the MODJSR/MODGJSR for image fusion.
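
    For context, the classical MOD update that motivates MODJSR has a one-line closed form: holding the sparse codes fixed, the Frobenius-optimal dictionary is a least-squares solution. The sketch below shows plain MOD only (the paper's JSR-structured update is not reproduced here):

```python
import numpy as np

def mod_update(X, A):
    """One MOD dictionary update: given data X (d x n) and current sparse
    codes A (K x n), the minimizer of ||X - D A||_F over D is available in
    closed form; atoms are then renormalized to unit length."""
    D = X @ A.T @ np.linalg.pinv(A @ A.T)
    norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # guard unused atoms
    return D / norms
```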

  10. Sparse representation of whole-brain fMRI signals for identification of functional networks.

    PubMed

    Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming

    2015-02-01

    There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Fast Solution in Sparse LDA for Binary Classification

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback

    2010-01-01

    An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add/delete to/from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change versus no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes as compared to days or weeks as taken by the prior art. Sparse LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques. The present algorithm exploits this analytic formula.
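
    The record is truncated before stating the formula, but a standard identity consistent with its description is the rank-one case: in binary classification the between-class scatter is S_b = v vᵀ with v = μ₁ − μ₀, so the principal generalized eigenvalue of S_b w = λ S_w w is simply vᵀ S_w⁻¹ v. A sketch verifying this against a generic eigensolver (not the algorithm's actual code):

```python
import numpy as np

def analytic_top_eig(v, Sw):
    """Principal generalized eigenvalue of (v v^T) w = lambda Sw w.
    Because v v^T is rank one, the answer is v^T Sw^{-1} v in closed form,
    with no iterative eigensolver required."""
    return v @ np.linalg.solve(Sw, v)
```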

  12. A general linear model-based approach for inferring selection to climate

    PubMed Central

    2013-01-01

    Background Many efforts have been made to detect signatures of positive selection in the human genome, especially those associated with expansion from Africa and subsequent colonization of all other continents. However, most approaches have not directly probed the relationship between the environment and patterns of variation among humans. We have designed a method to identify regions of the genome under selection based on Mantel tests conducted within a general linear model framework, which we call MAntel-GLM to Infer Clinal Selection (MAGICS). MAGICS explicitly incorporates population-specific and genome-wide patterns of background variation as well as information from environmental values to provide an improved picture of selection and its underlying causes in human populations. Results Our results significantly overlap with those obtained by other published methodologies, but MAGICS has several advantages. These include improvements that: limit false positives by reducing the number of independent tests conducted and by correcting for geographic distance, which we found to be a major contributor to selection signals; yield absolute rather than relative estimates of significance; identify specific geographic regions linked most strongly to particular signals of selection; and detect recent balancing as well as directional selection. Conclusions We find evidence of selection associated with climate (P < 10⁻⁵) in 354 genes, and among these observe a highly significant enrichment for directional positive selection. Two of our strongest 'hits', however, ADRA2A and ADRA2C, implicated in vasoconstriction in response to cold and pain stimuli, show evidence of balancing selection. Our results clearly demonstrate evidence of climate-related signals of directional and balancing selection. PMID:24053227

  13. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, Gretchen G.; Edwards, Thomas C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.

  14. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  15. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses.

  16. Protein structure validation by generalized linear model root-mean-square deviation prediction.

    PubMed

    Bagaria, Anurag; Jaravine, Victor; Huang, Yuanpeng J; Montelione, Gaetano T; Güntert, Peter

    2012-02-01

    Large-scale initiatives for obtaining spatial protein structures by experimental or computational means have accentuated the need for the critical assessment of protein structure determination and prediction methods. These include blind test projects such as the critical assessment of protein structure prediction (CASP) and the critical assessment of protein structure determination by nuclear magnetic resonance (CASD-NMR). An important aim is to establish structure validation criteria that can reliably assess the accuracy of a new protein structure. Various quality measures derived from the coordinates have been proposed. A universal structural quality assessment method should combine multiple individual scores in a meaningful way, which is challenging because of their different measurement units. Here, we present a method based on a generalized linear model (GLM) that combines diverse protein structure quality scores into a single quantity with intuitive meaning, namely the predicted coordinate root-mean-square deviation (RMSD) value between the present structure and the (unavailable) "true" structure (GLM-RMSD). For two sets of structural models from the CASD-NMR and CASP projects, this GLM-RMSD value was compared with the actual accuracy given by the RMSD value to the corresponding, experimentally determined reference structure from the Protein Data Bank (PDB). The correlation coefficients between actual (model vs. reference from PDB) and predicted (model vs. "true") heavy-atom RMSDs were 0.69 and 0.76, for the two datasets from CASD-NMR and CASP, respectively, which is considerably higher than those for the individual scores (-0.24 to 0.68). The GLM-RMSD can thus predict the accuracy of protein structures more reliably than individual coordinate-based quality scores.
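    As a rough illustration of the GLM-RMSD idea described above, the sketch below fits a linear predictor that maps several per-model quality scores to a known RMSD, then applies it to a new model. The data, number of scores, and weights are simulated stand-ins, not values from the paper.

```python
import numpy as np

# Simulated stand-in data: each row is a structural model, each column an
# individual quality score; rmsd is the known heavy-atom RMSD to a
# reference structure (all values hypothetical).
rng = np.random.default_rng(0)
scores = rng.normal(size=(50, 3))
true_w = np.array([0.8, -0.5, 0.3])
rmsd = scores @ true_w + 2.0 + rng.normal(scale=0.1, size=50)

# Fit intercept + weights by least squares, combining the heterogeneous
# scores into a single predicted-RMSD quantity.
X = np.column_stack([np.ones(len(scores)), scores])
w, *_ = np.linalg.lstsq(X, rmsd, rcond=None)

# Predict the RMSD of a new model from its quality scores alone.
new_scores = np.array([0.2, -1.0, 0.5])
predicted_rmsd = w[0] + new_scores @ w[1:]
```

    A full GLM would also allow non-Gaussian error families and link functions; ordinary least squares is just the identity-link special case used here for brevity.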

  17. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    PubMed

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  18. Modeling psychophysical data at the population-level: the generalized linear mixed model.

    PubMed

    Moscatelli, Alessandro; Mezzetti, Maura; Lacquaniti, Francesco

    2012-10-25

    In psychophysics, researchers usually apply a two-level model for the analysis of the behavior of the single subject and the population. This classical model has two main disadvantages. First, the second level of the analysis discards information on trial repetitions and subject-specific variability. Second, the model does not easily allow assessing the goodness of fit. As an alternative to this classical approach, here we propose the Generalized Linear Mixed Model (GLMM). The GLMM separately estimates the variability of fixed and random effects, it has a higher statistical power, and it allows an easier assessment of the goodness of fit compared with the classical two-level model. GLMMs have been frequently used in many disciplines since the 1990s; however, they have been rarely applied in psychophysics. Furthermore, to our knowledge, the issue of estimating the point-of-subjective-equivalence (PSE) within the GLMM framework has never been addressed. Therefore the article has two purposes: It provides a brief introduction to the usage of the GLMM in psychophysics, and it evaluates two different methods to estimate the PSE and its variability within the GLMM framework. We compare the performance of the GLMM and the classical two-level model on published experimental data and simulated data. We report that the estimated values of the parameters were similar between the two models and Type I errors were below the confidence level in both models. However, the GLMM has a higher statistical power than the two-level model. Moreover, one can easily compare the fit of different GLMMs according to different criteria. In conclusion, we argue that the GLMM can be a useful method in psychophysics.
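    A fixed-effects-only sketch of the PSE computation discussed above: fit a logistic psychometric function to simulated binary responses and take PSE = -b0/b1, the stimulus level where the fitted response probability is 0.5. The stimulus levels and coefficients are invented for illustration; a true GLMM would add subject-level random effects (e.g. via lme4's glmer in R).

```python
import numpy as np

# Simulated psychophysical data: 7 stimulus levels, 40 trials each
# (values are illustrative, not from the article).
rng = np.random.default_rng(1)
stimulus = np.repeat(np.linspace(-3, 3, 7), 40)
true_b0, true_b1 = -0.5, 1.2
p = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * stimulus)))
resp = (rng.random(p.size) < p).astype(float)

# Logistic regression by Newton-Raphson (IRLS), i.e. a binomial GLM
# with fixed effects only.
X = np.column_stack([np.ones_like(stimulus), stimulus])
b = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ b))
    H = X.T @ (X * (mu * (1.0 - mu))[:, None])   # Fisher information
    b += np.linalg.solve(H, X.T @ (resp - mu))

# Point of subjective equivalence: where the fitted curve crosses 0.5.
pse = -b[0] / b[1]
```

    The variability of the PSE could then be propagated from the coefficient covariance (the inverse of H), e.g. with the delta method, which is one way to address the estimation question the article raises.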

  19. Predicting stem borer density in maize using RapidEye data and generalized linear models

    NASA Astrophysics Data System (ADS)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha⁻¹ as compared to the global average of 6.06 t ha⁻¹ due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated between 12% and 21% of the total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields in two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) test site on the 9th of December 2014 and on 27th of January 2015, and for Machakos (eastern Kenya) a RapidEye image was acquired on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess the models' performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season in December and early January in both study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly when all 30 SVIs (non-nested) and only the significant (nested) SVIs were used. The models developed could improve decision making regarding controlling maize stem borers within integrated pest management (IPM) interventions.
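    The count-regression setup above can be sketched with a Poisson GLM (log link) fit by IRLS and scored with leave-one-out RMSE. The two mock "spectral indices" and all coefficients below are simulated; the study's negative binomial and zero-inflated variants would additionally need a dispersion or zero-inflation component.

```python
import numpy as np

def fit_poisson(X, y, iters=50):
    """Poisson GLM with log link, fit by Newton-Raphson (IRLS)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        H = X.T @ (X * mu[:, None])          # Fisher information
        b += np.linalg.solve(H, X.T @ (y - mu))
    return b

# Simulated stand-ins for per-field vegetation indices and larva counts.
rng = np.random.default_rng(2)
svi = rng.normal(size=(60, 2))
X = np.column_stack([np.ones(60), svi])
y = rng.poisson(np.exp(X @ np.array([1.0, 0.4, -0.3]))).astype(float)

# Leave-one-out cross-validation, as in the paper's evaluation.
preds = np.empty(60)
for i in range(60):
    mask = np.arange(60) != i
    bi = fit_poisson(X[mask], y[mask])
    preds[i] = np.exp(X[i] @ bi)
rmse = float(np.sqrt(np.mean((y - preds) ** 2)))
```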

  20. A generalized harmonic balance method for forced non-linear oscillations: the subharmonic cases

    NASA Astrophysics Data System (ADS)

    Wu, J. J.

    1992-12-01

    This paper summarizes and extends results in two previous papers, published in conference proceedings, on a variant of the generalized harmonic balance method (GHB) and its application to obtain subharmonic solutions of forced non-linear oscillation problems. This method was introduced as an alternative to the method of multiple scales, and it essentially consists of two parts. First, the part of the multiple scales method used to reduce the problem to a set of differential equations is used to express the solution as a sum of terms of various harmonics with unknown, time dependent coefficients. Second, the form of solution so obtained is substituted into the original equation and the coefficients of each harmonic are set to zero. Key equations of approximations for a subharmonic case are derived for the cases of both "small" damping and excitations, and "large" damping and excitations, which are shown to be identical, in the intended order of approximation, to those obtained by Nayfeh using the method of multiple scales. Detailed numerical formulations, including the derivation of the initial conditions, are presented, as well as some numerical results for the frequency-response relations and the time evolution of various harmonic components. Excellent agreement is demonstrated between results by GHB and by integrating the original differential equation directly. The improved efficiency in obtaining numerical solutions using GHB as compared with integrating the original differential equation is also demonstrated. For the case of large damping and excitations and for non-trivial solutions, it is noted that there exists a threshold value of the force beyond which no subharmonic excitations are possible.

  1. Determinants of hospital closure in South Korea: use of a hierarchical generalized linear model.

    PubMed

    Noh, Maengseok; Lee, Youngjo; Yun, Sung-Cheol; Lee, Sang-Il; Lee, Moo-Song; Khang, Young-Ho

    2006-11-01

    Understanding causes of hospital closure is important if hospitals are to survive and continue to fulfill their missions as the center for health care in their neighborhoods. Knowing which hospitals are most susceptible to closure can be of great use for hospital administrators and others interested in hospital performance. Although prior studies have identified a range of factors associated with increased risk of hospital closure, most are US-based and do not directly relate to health care systems in other countries. We examined determinants of hospital closure in a nationally representative sample: 805 hospitals established in South Korea before 1996 were examined; hospitals established in 1996 or after were excluded. Major organizational changes (survival vs. closure) were followed for all South Korean hospitals from 1996 through 2002. With the use of a hierarchical generalized linear model, a frailty model was used to control correlation among repeated measurements for risk factors for hospital closure. Results showed that ownership and hospital size were significantly associated with hospital closure. Urban hospitals were less likely to close than rural hospitals. However, the urban location of a hospital was not associated with hospital closure after adjustment for the proportion of elderly. Two measures of hospital competition (competitive beds and 1 − Herfindahl-Hirschman index) were positively associated with risk of hospital closure before and after adjustment for confounders. In addition, annual 10% change in competitive beds was significantly predictive of hospital closure. In conclusion, yearly trends in hospital competition as well as the level of hospital competition each year affected hospital survival. Future studies need to examine the contribution of internal factors such as management strategies and financial status to hospital closure in South Korea.
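    For reference, the Herfindahl-Hirschman-based competition measure mentioned above is 1 minus the sum of squared market shares; the bed counts below are invented for illustration.

```python
# Competition measure 1 - HHI: values near 1 mean many similar-sized
# competitors, values near 0 a near-monopoly (bed counts are invented).
def competition_index(beds):
    total = sum(beds)
    shares = [b / total for b in beds]
    return 1.0 - sum(s * s for s in shares)

print(round(competition_index([100, 100, 100, 100]), 3))  # → 0.75
print(round(competition_index([400, 10, 10]), 3))         # → 0.092
```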

  2. The linear co-variance between joint muscle torques is not a generalized principle.

    PubMed

    Sande de Souza, Luciane Aparecida Pascucci; Dionísio, Valdeci Carlos; Lerena, Mario Adrian Misailidis; Marconi, Nadia Fernanda; Almeida, Gil Lúcio

    2009-06-01

    In 1996, Gottlieb et al. [Gottlieb GL, Song Q, Hong D, Almeida GL, Corcos DM. Coordinating movement at two joints: A principle of linear covariance. J Neurophysiol 1996;75(4):1760-4] identified a linear co-variance between the joint muscle torques generated at two connected joints. The joint muscle torques changed directions and magnitudes in a synchronized and linear fashion and called it the principle of linear co-variance. Here we showed that this principle cannot hold for some class of movements. Neurologically normal subjects performed multijoint movements involving elbow and shoulder with reversal towards three targets in the sagittal plane without any constraints. The movement kinematics was calculated using the X and Y coordinates of the markers positioned over the joints. Inverse dynamics was used to calculate the joint muscle, interaction and net torques. We found that for the class of voluntary movements analyzed, the joint muscle torques of the elbow and the shoulder were not linearly correlated. The same was observed for the interaction torques. But, the net torques at both joints, i.e., the sum of the interaction and the joint muscle torques were linearly correlated. We showed that by decoupling the joint muscle torques, but keeping the net torques linearly correlated, the CNS was able to generate fast and accurate movements with straight fingertip paths. The movement paths were typical of the ones in which the joint muscle torques were linearly correlated.

  3. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.

  5. Finding nonoverlapping substructures of a sparse matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2004-08-09

    Many applications of scientific computing rely on computations on sparse matrices, thus the design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the low compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far below the peak performance of a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d}, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping rectangular dense blocks in a sparse matrix, which has not been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm for 2 × 2 blocks that runs in linear time in the number of nonzeros in the matrix. We discuss alternatives to rectangular blocks, such as diagonal blocks and cross blocks, and present complexity analysis and approximation algorithms.
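    A toy version of the block-finding task: the greedy scan below claims each all-nonzero 2 × 2 block as soon as it is seen. This is simpler than the paper's 2/3-approximation algorithm and carries no approximation guarantee; it only illustrates the A_d / A_s splitting idea.

```python
import numpy as np

def greedy_dense_2x2(A):
    """Greedily collect non-overlapping 2x2 blocks whose entries are all
    nonzero (illustrative; not the paper's 2/3-approximation)."""
    m, n = A.shape
    used = np.zeros((m, n), dtype=bool)
    blocks = []
    for i in range(m - 1):
        for j in range(n - 1):
            if np.all(A[i:i+2, j:j+2] != 0) and not used[i:i+2, j:j+2].any():
                blocks.append((i, j))        # top-left corner of the block
                used[i:i+2, j:j+2] = True    # these entries go to A_d
    return blocks

A = np.array([[1, 2, 0, 0],
              [3, 4, 0, 5],
              [0, 0, 6, 7],
              [0, 0, 8, 9]])
print(greedy_dense_2x2(A))  # → [(0, 0), (2, 2)]
```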

  6. On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

    DTIC Science & Technology

    2012-08-01

    This paper shows that global linear convergence can be guaranteed under the above assumptions on strong convexity and Lipschitz gradient on one of the two functions, along with certain ... extensive literature on the ADM and its applications, there are very few results on its rate of convergence until the very recent past. Work [13] shows

  7. On the dynamics of canopy resistance: Generalized linear estimation and relationships with primary micrometeorological variables

    NASA Astrophysics Data System (ADS)

    Irmak, Suat; Mutiibwa, Denis

    2010-08-01

    The 1-D and single layer combination-based energy balance Penman-Monteith (PM) model has limitations in practical application due to the lack of canopy resistance (rc) data for different vegetation surfaces. rc could be estimated by inversion of the PM model if the actual evapotranspiration (ETa) rate is known, but this approach has its own set of issues. Instead, an empirical method of estimating rc is suggested in this study. We investigated the relationships between primary micrometeorological parameters and rc and developed seven models to estimate rc for a nonstressed maize canopy on an hourly time step using a generalized-linear modeling approach. The most complex rc model uses net radiation (Rn), air temperature (Ta), vapor pressure deficit (VPD), relative humidity (RH), wind speed at 3 m (u3), aerodynamic resistance (ra), leaf area index (LAI), and solar zenith angle (Θ). The simplest model requires Rn, Ta, and RH. We present the practical implementation of all models via experimental validation using scaled up rc data obtained from the dynamic diffusion porometer-measured leaf stomatal resistance through an extensive field campaign in 2006. For further validation, we estimated ETa by solving the PM model using the modeled rc from all seven models and compared the PM ETa estimates with the Bowen ratio energy balance system (BREBS)-measured ETa for an independent data set in 2005. The relationships between hourly rc versus Ta, RH, VPD, Rn, incoming shortwave radiation (Rs), u3, wind direction, LAI, Θ, and ra were presented and discussed. We demonstrated the negative impact of exclusion of LAI when modeling rc, whereas exclusion of ra and Θ did not impact the performance of the rc models. Compared to the calibration results, the validation root mean square difference between observed and modeled rc increased by 5 s m⁻¹ for all rc models developed, ranging from 9.9 s m⁻¹ for the most complex model to 22.8 s m⁻¹ for the simplest model, as compared with the

  8. Meta-analysis of Complex Diseases at Gene Level with Generalized Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Chiu, Chi-Yang; Chen, Wei; Ren, Haobo; Li, Yun; Boehnke, Michael; Amos, Christopher I; Moore, Jason H; Xiong, Momiao

    2016-02-01

    We developed generalized functional linear models (GFLMs) to perform a meta-analysis of multiple case-control studies to evaluate the relationship of genetic data to dichotomous traits adjusting for covariates. Unlike the previously developed meta-analysis for sequence kernel association tests (MetaSKATs), which are based on mixed-effect models to make the contributions of major gene loci random, GFLMs are fixed models; i.e., genetic effects of multiple genetic variants are fixed. Based on GFLMs, we developed chi-squared-distributed Rao's efficient score test and likelihood-ratio test (LRT) statistics to test for an association between a complex dichotomous trait and multiple genetic variants. We then performed extensive simulations to evaluate the empirical type I error rates and power performance of the proposed tests. The Rao's efficient score test statistics of GFLMs are very conservative and have higher power than MetaSKATs when some causal variants are rare and some are common. When the causal variants are all rare [i.e., minor allele frequencies (MAF) < 0.03], the Rao's efficient score test statistics have similar or slightly lower power than MetaSKATs. The LRT statistics generate accurate type I error rates for homogeneous genetic-effect models and may inflate type I error rates for heterogeneous genetic-effect models owing to the large numbers of degrees of freedom and have similar or slightly higher power than the Rao's efficient score test statistics. GFLMs were applied to analyze genetic data of 22 gene regions of type 2 diabetes data from a meta-analysis of eight European studies and detected significant association for 18 genes (P < 3.10 × 10(-6)), tentative association for 2 genes (HHEX and HMGA2; P ≈ 10(-5)), and no association for 2 genes, while MetaSKATs detected none. In addition, the traditional additive-effect model detects association at gene HHEX. GFLMs and related tests can analyze rare or common variants or a combination of the two and

  9. A methodology for evaluation of parent-mutant competition using a generalized non-linear ecosystem model

    Treesearch

    Raymond L. Czaplewski

    1973-01-01

    A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...

  10. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  11. Resistant multiple sparse canonical correlation.

    PubMed

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2016-04-01

    Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca.

  12. Learning sparse representations for human action recognition.

    PubMed

    Guha, Tanaya; Ward, Rabab Kreidieh

    2012-08-01

    This paper explores the effectiveness of sparse representations obtained by learning a set of overcomplete basis (dictionary) in the context of action recognition in videos. Although this work concentrates on recognizing human movements-physical actions as well as facial expressions-the proposed approach is fairly general and can be used to address other classification problems. In order to model human actions, three overcomplete dictionary learning frameworks are investigated. An overcomplete dictionary is constructed using a set of spatio-temporal descriptors (extracted from the video sequences) in such a way that each descriptor is represented by some linear combination of a small number of dictionary elements. This leads to a more compact and richer representation of the video sequences compared to the existing methods that involve clustering and vector quantization. For each framework, a novel classification algorithm is proposed. Additionally, this work also presents the idea of a new local spatio-temporal feature that is distinctive, scale invariant, and fast to compute. The proposed approach repeatedly achieves state-of-the-art results on several public data sets containing various physical actions and facial expressions.
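    The core sparse-coding step described above, representing each descriptor as a linear combination of a small number of dictionary atoms, can be sketched with orthogonal matching pursuit. The dictionary here is random and the dimensions invented; the paper instead learns the dictionary from spatio-temporal video descriptors.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x with k atoms of D
    (columns of D assumed unit-norm)."""
    residual, support = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit x on all selected atoms, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(3)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)        # overcomplete: 50 atoms in R^20
x = 2.0 * D[:, 7] - 1.5 * D[:, 31]    # a signal that is truly 2-sparse
code = omp(D, x, k=2)                  # should recover atoms 7 and 31
```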

  13. Reversibility of a quantum channel: General conditions and their applications to Bosonic linear channels

    SciTech Connect

    Shirokov, M. E.

    2013-11-15

    The method of complementary channels for analyzing the reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular to Bosonic Gaussian channels. The reversibility conditions obtained for Bosonic linear channels have a clear physical interpretation, and their sufficiency is shown by explicit construction of reversing channels. The method of complementary channels makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrödinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.

  14. GENERAL: A Study on Stochastic Resonance in Biased Subdiffusive Smoluchowski Systems within Linear Response Range

    NASA Astrophysics Data System (ADS)

    Li, Yi-Juan; Kang, Yan-Mei

    2010-08-01

    The method of matrix continued fractions is used to investigate stochastic resonance (SR) in the biased subdiffusive Smoluchowski system within the linear response range. Numerical results for the linear dynamic susceptibility and the spectral amplification factor are presented and discussed for two-well and mono-well potentials with different subdiffusion exponents. We observe that introducing a bias in the potential weakens the SR effect in the subdiffusive system, just as in the normal diffusive case. Our observations also disclose that subdiffusion inhibits the low-frequency SR but enhances the high-frequency SR in the biased Smoluchowski system, which should reflect a "flattening" influence of the subdiffusion on the linear susceptibility.

  15. Reversibility of a quantum channel: General conditions and their applications to Bosonic linear channels

    NASA Astrophysics Data System (ADS)

    Shirokov, M. E.

    2013-11-01

    The method of complementary channels for analyzing the reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular to Bosonic Gaussian channels. The reversibility conditions obtained for Bosonic linear channels have a clear physical interpretation, and their sufficiency is shown by explicit construction of reversing channels. The method of complementary channels makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrödinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.

  16. Evidence for the conjecture that sampling generalized cat states with linear optics is hard

    NASA Astrophysics Data System (ADS)

    Rohde, Peter P.; Motes, Keith R.; Knott, Paul A.; Fitzsimons, Joseph; Munro, William J.; Dowling, Jonathan P.

    2015-01-01

    Boson sampling has been presented as a simplified model for linear optical quantum computing. In the boson-sampling model, Fock states are passed through a linear optics network and sampled via number-resolved photodetection. It has been shown that this sampling problem likely cannot be efficiently classically simulated. This raises the question as to whether there are other quantum states of light for which the equivalent sampling problem is also computationally hard. We present evidence, without using a full complexity proof, that a very broad class of quantum states of light—arbitrary superpositions of two or more coherent states—when evolved via passive linear optics and sampled with number-resolved photodetection, likely implements a classically hard sampling problem.

  17. Sparse inpainting and isotropy

    SciTech Connect

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D.

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  18. Sparse inpainting and isotropy

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Marinucci, Domenico; McEwen, Jason D.; Peiris, Hiranya V.; Wandelt, Benjamin D.; Cammarota, Valentina

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  19. Bayesian sparse channel estimation

    NASA Astrophysics Data System (ADS)

    Chen, Chulong; Zoltowski, Michael D.

    2012-05-01

    In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make compressed channel estimation more feasible for practical applications, it is investigated from the perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as the large time delay incurred in estimating the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to conventional compressed channel estimation techniques.
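
    The compressed-sensing step can be illustrated with a minimal l1 recovery of a sparse channel from a few pilot measurements. The sketch below uses iterative soft thresholding (ISTA) on hypothetical data — the measurement matrix, tap positions, and regularization weight are all illustrative, and the paper's Bayesian learning machinery is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
L, P = 64, 24                              # channel length, number of pilot measurements
A = rng.normal(size=(P, L)) / np.sqrt(P)   # hypothetical pilot measurement matrix
h = np.zeros(L)
h[[2, 11, 40]] = [0.9, -0.6, 0.4]          # sparse multipath channel taps
y = A @ h + 0.001 * rng.normal(size=P)     # noisy pilot observations

def ista(A, y, lam, iters=500):
    """Iterative soft thresholding for min 0.5*||y - A x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # step within the Lipschitz bound
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)         # gradient step on the data fit
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
    return x

h_hat = ista(A, y, lam=0.01)
```

With only 24 measurements of a 64-tap, 3-sparse channel, the l1 estimate localizes the dominant taps well.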

  20. A generalized hybrid transfinite element computational approach for nonlinear/linear unified thermal/structural analysis

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1987-01-01

    The present paper describes the development of a new hybrid computational approach applicable to nonlinear/linear thermal structural analysis. The proposed transfinite element approach is a hybrid scheme that combines the modeling versatility of contemporary finite elements with transform methods and classical Bubnov-Galerkin schemes. Applicability of the proposed formulations to nonlinear analysis is also developed. Several test cases are presented, including nonlinear/linear unified thermal-stress and thermal-stress wave propagation problems. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.

  1. Non-linear generalization of the relativistic Schrödinger equations.

    NASA Astrophysics Data System (ADS)

    Ochs, U.; Sorg, M.

    1996-09-01

    The theory of the relativistic Schrödinger equations is further developed and extended to non-linear field equations. The technical advantage of the relativistic Schrödinger approach is demonstrated explicitly by solving the coupled Einstein-Klein-Gordon equations, including a non-linear Higgs potential, for a Robertson-Walker universe. The numerical results exhibit a dynamical self-diagonalization of the Hamiltonian, which corresponds to a kind of quantum decoherence enabled by the inflation of the universe.

  2. Linear ion trap with a deterministic voltage of the general form

    NASA Astrophysics Data System (ADS)

    Rozhdestvenskii, Yu. V.; Rudyi, S. S.

    2017-04-01

    An analysis of the stability zones of a linear ion trap is presented for the case of a voltage of general form applied to the electrodes. The possibility of localizing ions with specific types of periodic (but not harmonic) signals is investigated. It is shown that changing the temporal form of the applied voltage provides control over both the trapping and the dynamics of ions in a linear radio-frequency (RF) trap while preserving its design. These developments open new possibilities for implementing devices based on single ions, e.g., quantum frequency standards and quantum processors.

  3. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  4. Sparse distributed memory

    SciTech Connect

    Kanerva, P.

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system. 63 refs.

  6. Protein family classification using sparse Markov transducers.

    PubMed

    Eskin, E; Grundy, W N; Singer, Y

    2000-01-01

    In this paper we present a method for classifying proteins into families using sparse Markov transducers (SMTs). Sparse Markov transducers, similar to probabilistic suffix trees, estimate a probability distribution conditioned on an input sequence. SMTs generalize probabilistic suffix trees by allowing for wildcards in the conditioning sequences. Because substitutions of amino acids are common in protein families, incorporating wildcards into the model significantly improves classification performance. We present two models for building protein family classifiers using SMTs. We also present efficient data structures to improve the memory usage of the models. We evaluate SMTs by building protein family classifiers using the Pfam database and compare our results to previously published results.

  7. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy

    PubMed Central

    Huppert, Theodore J.

    2016-01-01

    Abstract. Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts. PMID:26989756
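
    The colored-noise correction discussed here is often handled by autoregressive prewhitening: fit the linear model by ordinary least squares, estimate an AR model on the residuals, filter both sides of the regression with it, and refit. A minimal AR(1) sketch on hypothetical data (the design matrix, AR order, and noise parameters below are illustrative; fNIRS pipelines typically use higher-order AR models and robust weighting for motion artifacts):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Hypothetical design: intercept plus a slowly varying task regressor.
X = np.column_stack([np.ones(n), np.sin(np.linspace(0, 20, n))])
beta_true = np.array([1.0, 0.5])
# AR(1) "colored" noise standing in for slow systemic physiology.
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + 0.1 * rng.normal()
y = X @ beta_true + e

# Stage 1: OLS fit, then estimate the AR(1) coefficient from the residuals.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ b_ols
rho = float(r[1:] @ r[:-1] / (r[:-1] @ r[:-1]))

# Stage 2: prewhiten both sides with the filter (1 - rho*B) and refit.
yw = y[1:] - rho * y[:-1]
Xw = X[1:] - rho * X[:-1]
b_white, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
```

Filtering both y and X with the same whitening filter leaves the regression coefficients unchanged in expectation while making the noise approximately white, so the usual GLM inference applies.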

  8. Hierarchical Generalized Linear Models for Multiple Groups of Rare and Common Variants: Jointly Estimating Group and Individual-Variant Effects

    PubMed Central

    Yi, Nengjun; Liu, Nianjun; Zhi, Degui; Li, Jun

    2011-01-01

    Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:22144906

  9. Recent advances toward a general purpose linear-scaling quantum force field.

    PubMed

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states is challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. 
QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to

  10. A unified approach to sparse signal processing

    NASA Astrophysics Data System (ADS)

    Marvasti, Farokh; Amini, Arash; Haddadi, Farzan; Soltanolkotabi, Mahdi; Khalaj, Babak Hossein; Aldroubi, Akram; Sanei, Saeid; Chambers, Janathon

    2012-12-01

    A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. 
Finally

  11. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    ERIC Educational Resources Information Center

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  13. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that sparse approximate inverses could be a potential alternative, while being readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense that convergence is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically exhibit piecewise smooth variation. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
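
    A sparse approximate inverse of the kind discussed is typically built one column at a time, minimizing ||A m_j - e_j||_2 over a prescribed sparsity pattern. The sketch below applies this to a 1-D Laplacian, whose dense inverse illustrates the abstract's point that a fixed sparse pattern leaves a sizable residual (this is a hypothetical textbook example, not the wavelet-compressed construction proposed in the paper):

```python
import numpy as np

def spai(A, pattern):
    """Right sparse approximate inverse: for each column j, minimize
    ||A m_j - e_j||_2 with m_j supported on pattern[j]."""
    n = A.shape[0]
    M = np.zeros_like(A)
    for j in range(n):
        J = pattern[j]                       # allowed nonzero rows of column j
        e = np.zeros(n)
        e[j] = 1.0
        # Restricting m_j to J means A m_j = A[:, J] @ coef.
        coef, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = coef
    return M

# Test matrix: 1-D Laplacian (tridiagonal); its exact inverse is dense.
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
pattern = [np.nonzero(A[:, j])[0] for j in range(n)]   # pattern of A itself
M = spai(A, pattern)
```

Because each column solve is optimal over its pattern, M is at least as good as any approximate inverse with the same pattern (e.g. the scaled identity), yet the residual ||I - AM|| remains far from zero — the situation the wavelet-based approach is designed to improve.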

  14. Some Numerical Methods for Exponential Analysis with Connection to a General Identification Scheme for Linear Processes

    DTIC Science & Technology

    1980-11-01

    generalized model described by Eykhoff [1, 2], Astrom and Eykhoff [3], and on pages 209-220 of Eykhoff [4]. The origin of the generalized model can be...aspects of process-parameter estimation," IEEE Trans. Auto. Control, October 1963, pp. 347-357. 3. K. J. Astrom and P. Eykhoff, "System

  15. Complexity of Dense Linear System Solution on a Multiprocessor Ring.

    DTIC Science & Technology

    1985-01-01

    Lawrie and Sameh [4] present a technique for solving symmetric positive definite banded systems, which is a generalization of a method for tridiagonal...Numerical Linear Algebra, SIAM Review, 20 (1978), pp. 740-777. [4] D. Lawrie, A.H. Sameh, The Computation and Communication Complexity of a Parallel Banded...409. [7] , Parallel, Iterative Solution of Sparse Linear Systems: Models and Architectures. Technical Report 84-35, ICASE, 1984. [8] A.H. Sameh, On

  16. Expected Estimating Equation using Calibration Data for Generalized Linear Models with a Mixture of Berkson and Classical Errors in Covariates

    PubMed Central

    de Dieu Tapsoba, Jean; Lee, Shen-Ming; Wang, Ching-Yun

    2013-01-01

    Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. Its finite-sample performance is investigated numerically. Our method is illustrated by an application to real data from an HIV vaccine study. PMID:24009099

  17. CABARET scheme for the numerical solution of aeroacoustics problems: Generalization to linearized one-dimensional Euler equations

    NASA Astrophysics Data System (ADS)

    Goloviznin, V. M.; Karabasov, S. A.; Kozubskaya, T. K.; Maksimov, N. V.

    2009-12-01

    A generalization of the CABARET finite difference scheme is proposed for linearized one-dimensional Euler equations based on the characteristic decomposition into local Riemann invariants. The new method is compared with several central finite difference schemes that are widely used in computational aeroacoustics. Numerical results for the propagation of an acoustic wave in a homogeneous field and the refraction of this wave through a contact discontinuity obtained on a strongly nonuniform grid are presented.
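
    The characteristic decomposition that underlies the scheme can be illustrated independently of CABARET itself: for 1-D linearized acoustics, the Riemann invariants w± = u' ± p'/(ρ0·c) decouple and advect at speeds ±c. The sketch below uses first-order upwind transport of the invariants on a hypothetical periodic pulse — a deliberately simple stand-in for the CABARET update, shown only to make the decomposition concrete:

```python
import numpy as np

rho0, c = 1.0, 1.0
N, cfl, steps = 200, 0.5, 100
x = np.linspace(0.0, 1.0, N, endpoint=False)
p = np.exp(-200 * (x - 0.5) ** 2)        # initial acoustic pressure pulse
u = np.zeros(N)                          # fluid initially at rest
p_init_sum = p.sum()                     # conserved by the periodic update

# Decompose into Riemann invariants: each is advected at a constant speed.
wp = u + p / (rho0 * c)                  # right-running invariant, speed +c
wm = u - p / (rho0 * c)                  # left-running invariant, speed -c
for _ in range(steps):
    wp = wp - cfl * (wp - np.roll(wp, 1))    # upwind for speed +c
    wm = wm + cfl * (np.roll(wm, -1) - wm)   # upwind for speed -c

# Recombine the invariants into the physical variables.
u = 0.5 * (wp + wm)
p = 0.5 * rho0 * c * (wp - wm)
```

The initial pulse splits into two half-amplitude pulses traveling in opposite directions, slightly smeared by the first-order upwind diffusion that higher-order schemes such as CABARET are designed to avoid.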

  18. Solution of a General Linear Complementarity Problem Using Smooth Optimization and Its Application to Bilinear Programming and LCP

    SciTech Connect

    Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.

    2001-07-01

    This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.

  19. On the Energy Release Rate for Dynamic Transient Anti-Plane Shear Crack Propagation in a General Linear Viscoelastic Body

    DTIC Science & Technology

    1988-09-01

    properties. Moreover, it is found that whether or not a failure zone is incorporated into the model significantly influences both quantitatively and...Hopf technique, Willis constructed the dynamic stress intensity factor (SIF) for a standard linear solid material model and general crack face

  20. Principal components and generalized linear modeling in the correlation between hospital admissions and air pollution

    PubMed Central

    de Souza, Juliana Bottoni; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição

    2014-01-01

    OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal component analysis. These techniques complement each other and provided more reliable estimates of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include autoregressive moving average (ARMA(p, q)) models in the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m³ increase (interquartile range) in levels of the pollutant PM10, there was a 3.0% increase in the relative risk estimated using the principal-component seasonal-autoregressive generalized additive model, while in the usual generalized additive model the estimate was 2.0%. CONCLUSIONS Compared to the usual generalized additive model, the proposed generalized additive model with principal component analysis showed, in general, better results in estimating relative risk and quality of fit. PMID:25119940
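
    The two-stage idea — replace correlated pollutant series by principal-component scores, then fit a Poisson log-linear model to the counts — can be sketched as follows on hypothetical data. This omits the additive smoothers, seasonal terms, and ARMA residual correction used in the paper, and fits the Poisson regression by plain iteratively reweighted least squares (IRLS):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
# Hypothetical correlated pollutant series (think PM10/SO2/NO2 sharing a source).
latent = rng.normal(size=n)
P = np.column_stack([latent + 0.3 * rng.normal(size=n) for _ in range(3)])

# Stage 1: replace the correlated pollutants by principal-component scores.
Pc = P - P.mean(axis=0)
_, _, Vt = np.linalg.svd(Pc, full_matrices=False)
scores = Pc @ Vt.T                       # uncorrelated PC scores

# Hypothetical daily admission counts driven by the leading component.
mu = np.exp(1.0 + 0.3 * scores[:, 0])
y = rng.poisson(mu)

# Stage 2: Poisson regression (log link) on the leading score via IRLS.
X = np.column_stack([np.ones(n), scores[:, 0]])
beta = np.zeros(2)
for _ in range(50):
    eta = X @ beta
    w = np.exp(eta)                      # Poisson: variance equals the mean
    z = eta + (y - w) / w                # working response
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
```

The relative-risk figures quoted in the abstract correspond to exp(beta * increment) for an interquartile-range increment of the pollutant (or score).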

  1. TASMANIAN Sparse Grids Module

    SciTech Connect

    Stoyanov, Miroslav; Munster, Drayton

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method extends a one-dimensional set of abscissas, weights, and basis functions to multiple dimensions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command-line interface via text files, and a MATLAB interface via the command-line tool.

  3. SPARSKIT: A basic tool kit for sparse matrix computations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1990-01-01

    Presented here are the main features of a tool package for manipulating and working with sparse matrices. One of the goals of the package is to provide basic tools to facilitate the exchange of software and data between researchers in sparse matrix computations. The starting point is the Harwell/Boeing collection of matrices for which the authors provide a number of tools. Among other things, the package provides programs for converting data structures, printing simple statistics on a matrix, plotting a matrix profile, and performing linear algebra operations with sparse matrices.
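
    The data-structure conversions SPARSKIT provides revolve around standard sparse formats such as compressed sparse row (CSR). As a minimal illustration of the format (in Python rather than SPARSKIT's Fortran), a CSR matrix-vector product:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) form:
    data holds the nonzeros row by row, indices their column positions,
    and indptr[i]:indptr[i+1] delimits row i within data/indices."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Small example: A = [[4, 0, 1], [0, 3, 0], [2, 0, 5]] in CSR form.
data    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
y = csr_matvec(data, indices, indptr, x)   # row sums of A: [5, 3, 7]
```

Only the nonzeros are stored and touched, which is what makes the iterative solvers built on such kernels scale with the number of nonzeros rather than with n².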

  4. General solution of the diffusion equation with a nonlocal diffusive term and a linear force term.

    PubMed

    Malacarne, L C; Mendes, R S; Lenzi, E K; Lenzi, M K

    2006-10-01

    We obtain a formal solution for a large class of diffusion equations with a spatial kernel dependence in the diffusive term. The presence of this kernel represents a nonlocal dependence of the diffusive process and, by a suitable choice, it has the spatial fractional diffusion equations as a particular case. We also consider the presence of a linear external force and source terms. In addition, we show that a rich class of anomalous diffusion, e.g., the Lévy superdiffusion, can be obtained by an appropriate choice of kernel.

  5. Identification of general linear relationships between activation energies and enthalpy changes for dissociation reactions at surfaces.

    PubMed

    Michaelides, Angelos; Liu, Z-P; Zhang, C J; Alavi, Ali; King, David A; Hu, P

    2003-04-02

    The activation energy to reaction is a key quantity that controls catalytic activity. Having used ab initio calculations to determine an extensive and broad ranging set of activation energies and enthalpy changes for surface-catalyzed reactions, we show that linear relationships exist between dissociation activation energies and enthalpy changes. Known in the literature as empirical Brønsted-Evans-Polanyi (BEP) relationships, we identify and discuss the physical origin of their presence in heterogeneous catalysis. The key implication is that merely from knowledge of adsorption energies the barriers to catalytic elementary reaction steps can be estimated.
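
    The linear relationships identified here are conventionally written in the Brønsted-Evans-Polanyi form, with a slope and offset fitted per reaction family (the notation below follows common usage; the fitted values are not taken from this abstract):

```latex
E_a \;=\; \alpha \,\Delta H \;+\; \beta
```

so that, once α and β are known for a family of dissociation reactions, the barrier E_a can be estimated from the reaction enthalpy change ΔH alone, which in turn is obtainable from adsorption energies.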

  6. Use of a generalized linear model to evaluate range forage production estimates

    NASA Astrophysics Data System (ADS)

    Mitchell, John E.; Joyce, Linda A.

    1986-05-01

    Interdisciplinary teams have been used in federal land planning and in the private sector to reach consensus on the environmental impact of management. When a large data base is constructed, verifying the accuracy of the coded estimates and the underlying assumptions becomes a problem. The use of a linear statistical model provides a mechanism for evaluating production coefficients in terms of errors in coding and underlying assumptions. The technique can be used to evaluate other intuitive models depicting natural resource production in relation to prescribed variables, such as site factors or secondary succession.

  7. A general algorithm for control problems with variable parameters and quasi-linear models

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2015-12-01

    This paper presents an algorithm that is able to solve optimal control problems in which the modelling of the system contains variable parameters, with the added complication that, in certain cases, these parameters can lead to control problems governed by quasi-linear equations. Combining the techniques of Pontryagin's Maximum Principle and the shooting method, an algorithm has been developed that is not affected by the values of the parameters, being able to solve conventional problems as well as cases in which the optimal solution is shown to be bang-bang with singular arcs.

  8. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method often diverges if the starting point is far from the root; in these regions, however, the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
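
    The fallback behaviour described above, steepest descent with an adaptive step size, can be sketched minimally as follows. This is not the paper's accelerated eigenvector construction, and the quadratic test function is a hypothetical example:

```python
# Gradient descent with an adaptive (backtracking) step size: a minimal
# sketch of the fallback behaviour described above, not the paper's
# accelerated scheme.  f and grad below are hypothetical examples.

def descend(f, grad, x, step=1.0, tol=1e-10, max_iter=10000):
    """Minimize f by steepest descent, halving the step until f decreases."""
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol * tol:
            break
        s = step
        while True:
            trial = [xi - s * gi for xi, gi in zip(x, g)]
            if f(trial) < f(x) or s < 1e-16:
                break
            s *= 0.5          # adaptive step: shrink until progress is made
        x = trial
    return x

# Example: a positive-definite quadratic form whose unique minimum (1, -2)
# is the "root" of the gradient system.
f = lambda x: (x[0] - 1) ** 2 + 3 * (x[1] + 2) ** 2
grad = lambda x: [2 * (x[0] - 1), 6 * (x[1] + 2)]
```

    Near the minimum the quadratic-form assumption holds exactly here, so the iteration contracts geometrically; far from it, only the descent property is used.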

  9. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin-plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, comprising Db6 wavelets, Sym4 wavelets, and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
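
    SpaRSA itself is not reproduced here; the following is a minimal sketch of the simpler iterative shrinkage-thresholding (ISTA) iteration that attacks the same l1-regularized objective. The identity "transfer matrix", response vector, and weight are illustrative stand-ins:

```python
# Minimal ISTA sketch of the l1 minimisation at the heart of sparse force
# identification.  SpaRSA, used in the paper, is a faster relative of this
# basic proximal-gradient scheme; A, b, and lam below are illustrative.

def ista(A, b, lam, step, iters=5000):
    """Minimise 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        for j in range(n):
            v = x[j] - step * g[j]
            t = step * lam                    # soft-threshold toward zero
            x[j] = max(v - t, 0.0) if v > 0 else min(v + t, 0.0)
    return x

# Identity "transfer matrix": the solution is soft-thresholding of b.
x = ista([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
         [2.0, 0.0, -3.0], lam=0.5, step=0.5)
```

    The soft-threshold is what zeroes out coefficients of unneeded basis functions, which is how the number of active atoms is determined adaptively.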

  10. A substructure coupling procedure applicable to general linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Howsman, T. G.; Craig, R. R., Jr.

    1984-01-01

    A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the nonself-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order mode for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.

  11. The Exact Solution for Linear Thermoelastic Axisymmetric Deformations of Generally Laminated Circular Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.; Schultz, Marc R.

    2012-01-01

    A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included, and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.

  12. General theory of spherically symmetric boundary-value problems of the linear transport theory.

    NASA Technical Reports Server (NTRS)

    Kanal, M.

    1972-01-01

    A general theory of spherically symmetric boundary-value problems of the one-speed neutron transport theory is presented. The formulation is also applicable to the 'gray' problems of radiative transfer. The Green's function for the purely absorbing medium is utilized in obtaining the normal mode expansion of the angular densities for both interior and exterior problems. As the integral equations for unknown coefficients are regular, a general class of reduction operators is introduced to reduce such regular integral equations to singular ones with a Cauchy-type kernel. Such operators then permit one to solve the singular integral equations by the standard techniques due to Muskhelishvili. We discuss several spherically symmetric problems. However, the treatment is kept sufficiently general to deal with problems lacking azimuthal symmetry. In particular the procedure seems to work for regions whose boundary coincides with one of the coordinate surfaces for which the Helmholtz equation is separable.

  13. Optimization of biochemical systems by linear programming and general mass action model representations.

    PubMed

    Marín-Sanguino, Alberto; Torres, Néstor V

    2003-08-01

    A new method is proposed for the optimization of biochemical systems. The method, based on the separation of the stoichiometric and kinetic aspects of the system, follows the general approach used in the previously presented indirect optimization method (IOM) developed within biochemical systems theory. It is called GMA-IOM because it makes use of the generalized mass action (GMA) as the model system representation form. The GMA representation avoids flux aggregation and thus prevents possible stoichiometric errors. The optimization of a system is used to illustrate and compare the features, advantages and shortcomings of both versions of the IOM method as a general strategy for designing improved microbial strains of biotechnological interest. Special attention has been paid to practical problems for the actual implementation of the new proposed strategy, such as the total protein content of the engineered strain or the deviation from the original steady state and its influence on cell viability.

  15. Sparse matrix methods based on orthogonality and conjugacy

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1973-01-01

    A matrix having a high percentage of zero elements is called sparse. In the solution of systems of linear equations or linear least-squares problems involving large sparse matrices, significant savings in computer cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
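
    The conjugate gradient algorithm mentioned above can be sketched as follows for a symmetric positive-definite system. The tridiagonal operator is a toy example, applied matrix-free as one would for a large sparse matrix:

```python
# Minimal conjugate gradient sketch for a symmetric positive-definite
# system Ax = b; with a sparse matrix one stores and multiplies only the
# nonzeros, but the algorithm itself is unchanged.

def conjugate_gradient(matvec, b, tol=1e-12, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = list(b)                 # residual b - Ax for x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Example: a sparse tridiagonal SPD operator applied without forming A.
def matvec(v):
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i > 0 else 0) - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]
```

    The savings come from the matvec: only the nonzero entries are ever touched, and in exact arithmetic CG converges in at most n iterations.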

  16. Sparse Texture Active Contour

    PubMed Central

    Gao, Yi; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2014-01-01

    In image segmentation, we are often interested in using certain quantities to characterize the object and performing the classification based on them: mean intensity, gradient magnitude, responses to certain predefined filters, etc. Unfortunately, in many cases such quantities are not adequate to model complex textured objects. Along a different line of research, the sparse characteristic of natural signals has been recognized and studied in recent years. This work therefore studies how such sparsity can be utilized, in a non-parametric way, to model the object texture and assist the textural image segmentation process, and proposes a segmentation scheme based on the sparse representation of the texture information. More explicitly, the texture is encoded by dictionaries constructed from the user initialization. Then, an active contour is evolved to optimize the fidelity of the representation provided by the dictionary of the target. In doing so, not only is a non-parametric texture modeling technique provided, but the sparsity of the representation also guarantees computational efficiency. The experiments are carried out on publicly available image data sets containing a large variety of texture images, to analyze the user interaction and performance statistics, and to highlight the algorithm's capability of robustly extracting textured regions from an image. PMID:23799695

  17. Sparse Bayesian Learning for DOA Estimation with Mutual Coupling

    PubMed Central

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-01-01

    Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise. PMID:26501284

  18. Linear stability of plane Poiseuille flow over a generalized Stokes layer

    NASA Astrophysics Data System (ADS)

    Quadrio, Maurizio; Martinelli, Fulvio; Schmid, Peter J.

    2011-12-01

    Linear stability of plane Poiseuille flow subject to spanwise velocity forcing applied at the wall is studied. The forcing is stationary and sinusoidally distributed along the streamwise direction. The long-term aim of the study is to explore a possible relationship between the modification induced by the wall forcing to the stability characteristics of the unforced Poiseuille flow and the significant capabilities demonstrated by the same forcing in reducing turbulent friction drag. We present in this paper the statement of the mathematical problem, which is considerably more complex than the classic Orr-Sommerfeld-Squire approach owing to the streamwise-varying boundary condition. We also report some preliminary results which, although not yet conclusive, describe the effects of the wall forcing on modal and non-modal characteristics of the flow stability.

  19. A generalized analog implementation of piecewise linear neuron models using CCII building blocks.

    PubMed

    Soleimani, Hamid; Ahmadi, Arash; Bavandpour, Mohammad; Sharifipoor, Ozra

    2014-03-01

    This paper presents a set of reconfigurable analog implementations of piecewise linear spiking neuron models using second-generation current conveyor (CCII) building blocks. With the same topology and circuit elements, and without W/L modification, which is impossible after circuit fabrication, these circuits can produce different behaviors, similar to biological neurons, both for a single neuron and for a network of neurons, simply by tuning reference current and voltage sources. The models are investigated in terms of analog implementation feasibility and cost, targeting large-scale hardware implementations. Results show that performance, area, and accuracy can be traded off against one another to select the most suitable model. Simulation results are presented for different neuron behaviors with CMOS 350 nm technology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Robust conic generalized partial linear models using RCMARS method - A robustification of CGPLM

    NASA Astrophysics Data System (ADS)

    Özmen, Ayşe; Weber, Gerhard Wilhelm

    2012-11-01

    GPLM is a combination of two different regression models, each of which is applied to a different part of the data set. It is also well suited to high-dimensional, non-normal, and nonlinear data sets, having the flexibility to reflect all anomalies effectively. In our previous study, Conic GPLM (CGPLM) was introduced using CMARS and logistic regression. According to a comparison with CMARS, CGPLM gives better results. In this study, we include the existence of uncertainty in future scenarios in CMARS and in the linear/logit regression part of CGPLM, and robustify the model with robust optimization, which deals with data uncertainty. Moreover, we apply RCGPLM to a small data set from the financial sector as a numerical experiment.

  1. Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob

    2007-01-01

    For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.

  2. Unsupervised analysis of polyphonic music by sparse coding.

    PubMed

    Abdallah, Samer A; Plumbley, Mark D

    2006-01-01

    We investigate a data-driven approach to the analysis and transcription of polyphonic music, using a probabilistic model which is able to find sparse linear decompositions of a sequence of short-term Fourier spectra. The resulting system represents each input spectrum as a weighted sum of a small number of "atomic" spectra chosen from a larger dictionary; this dictionary is, in turn, learned from the data in such a way as to represent the given training set in an (information-theoretically) efficient manner. When exposed to examples of polyphonic music, most of the dictionary elements take on the spectral characteristics of individual notes in the music, so that the sparse decomposition can be used to identify the notes in a polyphonic mixture. Our approach differs from other methods of polyphonic analysis based on spectral decomposition by combining all of the following: (a) a formulation in terms of an explicitly given probabilistic model, in which the process estimating which notes are present corresponds naturally with the inference of latent variables in the model; (b) a particularly simple generative model, motivated by very general considerations about efficient coding, that makes very few assumptions about the musical origins of the signals being processed; and (c) the ability to learn a dictionary of atomic spectra (most of which converge to harmonic spectral profiles associated with specific notes) from polyphonic examples alone: no separate training on monophonic examples is required.

  3. FIDDLE: A Computer Code for Finite Difference Development of Linear Elasticity in Generalized Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2005-01-01

    A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve lumped parameter or distributed parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations along with the appropriate boundary conditions have been cast in generalized curvilinear coordinates in the present study.

  4. Fingerprint Compression Based on Sparse Representation.

    PubMed

    Shao, Guangqi; Wu, Yanping; A, Yong; Liu, Xiao; Guo, Tiande

    2014-02-01

    A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new fingerprint image, we represent its patches according to the dictionary by computing an l0-minimization and then quantize and encode the representation. In this paper, we consider the effect of various factors on the compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust for minutiae extraction.
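
    The l0 sparse-coding step is NP-hard in general; a minimal greedy stand-in (matching pursuit, not the authors' solver) over a hypothetical dictionary looks like this:

```python
# Greedy matching-pursuit sketch of sparse coding over a dictionary:
# repeatedly pick the atom most correlated with the residual and peel off
# its contribution.  The paper solves an l0 problem; this is only the
# simplest greedy relative, shown on a hypothetical orthonormal dictionary.

def matching_pursuit(dictionary, signal, n_atoms):
    """Return {atom_index: coefficient} giving a sparse code of `signal`."""
    residual = list(signal)
    code = {}
    for _ in range(n_atoms):
        # correlation of each atom with the current residual
        corr = [sum(a * r for a, r in zip(atom, residual)) for atom in dictionary]
        k = max(range(len(corr)), key=lambda i: abs(corr[i]))
        code[k] = code.get(k, 0.0) + corr[k]
        residual = [r - corr[k] * a for r, a in zip(residual, dictionary[k])]
    return code

# With an orthonormal dictionary (here the standard basis) the code is exact.
dictionary = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
code = matching_pursuit(dictionary, [0.0, 3.0, -1.5], n_atoms=2)
```

    Compression then follows from quantizing and entropy-coding the few (index, coefficient) pairs instead of every pixel of the patch.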

  5. Feature selection using sparse Bayesian inference

    NASA Astrophysics Data System (ADS)

    Brandes, T. Scott; Baxter, James R.; Woodworth, Jonathan

    2014-06-01

    A process for selecting a sparse subset of features that maximize discrimination between target classes is described in a Bayesian framework. Demonstrated on high range resolution radar (HRR) signature data, this has the effect of selecting the most informative range bins for a classification task. The sparse Bayesian classifier (SBC) model is directly compared against Fisher's linear discriminant analysis (LDA), showing a clear performance gain with the Bayesian framework using HRRs from the publicly available MSTAR data set. The discriminative power of the selected features from the SBC is shown to be particularly dominant over LDA when only a few features are selected or when there is a shift in training and testing data sets, as demonstrated by training on a specific target type and testing on a slightly different target type.

  6. A generalized Lyapunov theory for robust root clustering of linear state space models with real parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1992-01-01

    The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.

  7. A novel synchronization scheme with a simple linear control and guaranteed convergence time for generalized Lorenz chaotic systems.

    PubMed

    Chuang, Chun-Fu; Sun, Yeong-Jeu; Wang, Wen-June

    2012-12-01

    In this study, exponential finite-time synchronization for generalized Lorenz chaotic systems is investigated. The significant contribution of this paper is that master-slave synchronization is achieved within a pre-specified convergence time and with a simple linear control. The designed linear control consists of two parts: one achieves exponential synchronization, and the other realizes finite-time synchronization within a guaranteed convergence time. Furthermore, the control gain depends on the parameters of the exponential convergence rate, the finite-time convergence rate, the bound of the initial states of the master system, and the system parameter. In addition, the proposed approach can be directly and efficiently applied to secure communication. Finally, four numerical examples are provided to demonstrate the feasibility and correctness of the obtained results.

  8. Classical and Generalized Solutions of Time-Dependent Linear Differential Algebraic Equations

    DTIC Science & Technology

    1993-10-15

    matrix pencils, [G59]. The book [GrM86] also contains a treatment of the general system (1.1) utilizing a condition of "transferability" which ... C(t) and N(t) are analytic functions of t and N(t) is nilpotent upper (or lower) triangular for all t in J. From the structure of N(t), it follows that ... the operator N(t)(d/dt) is nilpotent, so that (1.2b) has the unique solution z = sum_{k=1} (-1)^k (N(t)(d/dt))^k g, and (1.2a) is then an explicit ODE. But no

  9. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.

  10. Percolation on Sparse Networks

    NASA Astrophysics Data System (ADS)

    Karrer, Brian; Newman, M. E. J.; Zdeborová, Lenka

    2014-11-01

    We study percolation on networks, which is used as a model of the resilience of networked systems such as the Internet to attack or failure and as a simple model of the spread of disease over human contact networks. We reformulate percolation as a message passing process and demonstrate how the resulting equations can be used to calculate, among other things, the size of the percolating cluster and the average cluster size. The calculations are exact for sparse networks when the number of short loops in the network is small, but even on networks with many short loops we find them to be highly accurate when compared with direct numerical simulations. By considering the fixed points of the message passing process, we also show that the percolation threshold on a network with few loops is given by the inverse of the leading eigenvalue of the so-called nonbacktracking matrix.
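
    The closing result, that the threshold equals the inverse leading eigenvalue of the nonbacktracking matrix, can be checked on a toy graph with a short power iteration (a sketch, not the paper's message-passing code):

```python
# Percolation threshold as the inverse leading eigenvalue of the
# nonbacktracking matrix, estimated by power iteration on a toy graph.

def percolation_threshold(edges, iters=200):
    # Directed edges (u, v); B[(u,v),(v,w)] = 1 exactly when w != u.
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: i for i, e in enumerate(directed)}
    succ = [[index[(v, w)] for (vv, w) in directed if vv == v and w != u]
            for (u, v) in directed]
    x = [1.0] * len(directed)
    lam = 1.0
    for _ in range(iters):
        y = [0.0] * len(x)
        for i, targets in enumerate(succ):
            for j in targets:
                y[j] += x[i]
        lam = max(y)            # leading-eigenvalue estimate (max norm)
        x = [yi / lam for yi in y]
    return 1.0 / lam

# Complete graph on 4 nodes is 3-regular: leading eigenvalue 2, threshold 1/2.
edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
```

    On a k-regular graph the nonbacktracking matrix has leading eigenvalue k - 1, so the threshold 1/(k - 1) can be read off directly; the power iteration generalizes to irregular sparse networks.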

  11. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines: e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.

  12. Sparse Exponential Family Principal Component Analysis.

    PubMed

    Lu, Meng; Huang, Jianhua Z; Qian, Xiaoning

    2016-12-01

    We propose a Sparse exponential family Principal Component Analysis (SePCA) method suitable for any type of data following exponential family distributions, to achieve simultaneous dimension reduction and variable selection for better interpretation of the results. Because of the generality of exponential family distributions, the method can be applied to a wide range of applications, in particular when analyzing high dimensional next-generation sequencing data and genetic mutation data in genomics. The use of sparsity-inducing penalty helps produce sparse principal component loading vectors such that the principal components can focus on informative variables. By using an equivalent dual form of the formulated optimization problem for SePCA, we derive optimal solutions with efficient iterative closed-form updating rules. The results from both simulation experiments and real-world applications have demonstrated the superiority of our SePCA in reconstruction accuracy and computational efficiency over traditional exponential family PCA (ePCA), the existing Sparse PCA (SPCA) and Sparse Logistic PCA (SLPCA) algorithms.

  13. Learning Stable Multilevel Dictionaries for Sparse Representations.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2015-09-01

    Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.

  14. Wavelet-generalized least squares: a new BLU estimator of linear regression models with 1/f errors.

    PubMed

    Fadili, M J; Bullmore, E T

    2002-01-01

Long-memory noise is common to many areas of signal processing and can seriously confound estimation of linear regression model parameters and their standard errors. Classical autoregressive moving average (ARMA) methods can adequately address the problem of linear time invariant, short-memory errors but may be inefficient and/or insufficient to secure type 1 error control in the context of fractal or scale invariant noise with a more slowly decaying autocorrelation function. Here we introduce a novel method, called wavelet-generalized least squares (WLS), which is (to a good approximation) the best linear unbiased (BLU) estimator of regression model parameters in the context of long-memory errors. The method also provides maximum likelihood (ML) estimates of the Hurst exponent (which can be readily translated to the fractal dimension or spectral exponent) characterizing the correlational structure of the errors, and the error variance. The algorithm exploits the whitening or Karhunen-Loève-type property of the discrete wavelet transform to diagonalize the covariance matrix of the errors generated by an iterative fitting procedure after both data and design matrix have been transformed to the wavelet domain. Properties of this estimator, including its Cramér-Rao bounds, are derived theoretically and compared to its empirical performance on a range of simulated data. Compared to ordinary least squares and ARMA-based estimators, WLS is shown to be more efficient and to give excellent type 1 error control. The method is also applied to some real (neurophysiological) data acquired by functional magnetic resonance imaging (fMRI) of the human brain. We conclude that wavelet-generalized least squares may be a generally useful estimator of regression models in data complicated by long-memory or fractal noise.
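
The whitening idea can be sketched in a few lines of numpy. This is a simplified illustration under stated assumptions: an orthonormal Haar transform with empirical per-scale variance estimates, rather than the paper's ML fit of the Hurst exponent, and the regression data are synthetic:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar DWT matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    coarse = np.kron(h, [1.0, 1.0])                # scaling/coarse rows
    detail = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-scale detail rows
    return np.vstack([coarse, detail]) / np.sqrt(2.0)

def wavelet_gls(X, y):
    """Transform data and design to the wavelet domain, estimate a variance
    per scale from pilot OLS residuals, then solve the weighted LS problem."""
    n = len(y)
    H = haar_matrix(n)
    Xw, yw = H @ X, H @ y
    r = yw - Xw @ np.linalg.lstsq(Xw, yw, rcond=None)[0]  # pilot residuals
    w = np.ones(n)
    hi = n
    while hi > 1:                   # detail blocks: [n/2, n), [n/4, n/2), ...
        lo = hi // 2
        w[lo:hi] = 1.0 / max(np.mean(r[lo:hi] ** 2), 1e-12)
        hi = lo
    w[0] = w[1]                     # scaling coefficient: reuse coarsest weight
    sw = np.sqrt(w)
    return np.linalg.lstsq(Xw * sw[:, None], yw * sw, rcond=None)[0]

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0.0, 1.0, n)
noise = 0.01 * np.cumsum(rng.normal(size=n))   # random walk: strong long memory
y = 1.0 + 2.0 * x + noise
X = np.column_stack([np.ones(n), x])
beta = wavelet_gls(X, y)
print(np.round(beta, 2))   # estimates of (intercept, slope)
```

Because the wavelet coefficients of long-memory noise are approximately uncorrelated with a scale-dependent variance, a diagonal weighting in the wavelet domain approximates full GLS at a fraction of the cost.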

  15. Methodological Quality and Reporting of Generalized Linear Mixed Models in Clinical Medicine (2000–2012): A Systematic Review

    PubMed Central

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L.

    2014-01-01

Background Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. Methods A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics “generalized linear mixed models”, “hierarchical generalized linear models”, and “multilevel generalized linear model”, and the research domain was refined to science technology. Papers reporting methodological considerations without application, and those that were not involved in clinical medicine or written in English, were excluded. Results A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Much of the useful information about GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. Conclusions During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of

  16. A regularized point process generalized linear model for assessing the functional connectivity in the cat motor cortex.

    PubMed

    Chen, Zhe; Putrino, David F; Ba, Demba E; Ghosh, Soumya; Barbieri, Riccardo; Brown, Emery N

    2009-01-01

Identifying dependencies among multiple simultaneously recorded neural spike trains is an important task in understanding neuronal dependency, functional connectivity, and temporal causality in neural systems. An assessment of the functional connectivity in a group of ensemble cells was performed using a regularized point process generalized linear model (GLM) that incorporates temporal smoothness or contiguity of the solution. An efficient convex optimization algorithm was then developed for the regularized solution. The point process model was applied to an ensemble of neurons recorded from the cat motor cortex during a skilled reaching task. The implications of this analysis for the coding of skilled movement in primary motor cortex are discussed.

  17. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  18. Generative models for discovering sparse distributed representations.

    PubMed

    Hinton, G E; Ghahramani, Z

    1997-08-29

    We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.

  19. Sparse coding with memristor networks.

    PubMed

    Sheridan, Patrick M; Cai, Fuxi; Du, Chao; Ma, Wen; Zhang, Zhengya; Lu, Wei D

    2017-08-01

    Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.
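
The sparse coding step itself can be written in a few lines of software. The sketch below uses the iterative shrinkage-thresholding algorithm (ISTA) for the l1-penalized reconstruction objective; it is a software stand-in for the analog pattern-matching and lateral-inhibition dynamics the crossbar array implements, and the dictionary here is random rather than learned:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.05, n_iter=500):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(32)
a_true[[3, 11, 27]] = [1.5, -2.0, 1.0]     # a 3-sparse code
x = D @ a_true
a = ista_sparse_code(D, x)
print(np.count_nonzero(a), np.linalg.norm(x - D @ a))
```

The input is reconstructed from only a handful of active atoms, which is the sparse encoding of neuron activities against stored dictionary elements that the abstract describes.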

  20. Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum

    PubMed Central

    Wilson, Emma D.; Assaf, Tareq; Pearson, Martin J.; Rossiter, Jonathan M.; Dean, Paul; Anderson, Sean R.; Porrill, John

    2015-01-01

    The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks. PMID:26257638

  1. Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum.

    PubMed

    Wilson, Emma D; Assaf, Tareq; Pearson, Martin J; Rossiter, Jonathan M; Dean, Paul; Anderson, Sean R; Porrill, John

    2015-01-01

    The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks.

  2. A generalized linear mixed model for longitudinal binary data with a marginal logit link function

    PubMed Central

    Parzen, Michael; Ghosh, Souparno; Lipsitz, Stuart; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Mallick, Bani K.; Ibrahim, Joseph G.

    2010-01-01

    Summary Longitudinal studies of a binary outcome are common in the health, social, and behavioral sciences. In general, a feature of random effects logistic regression models for longitudinal binary data is that the marginal functional form, when integrated over the distribution of the random effects, is no longer of logistic form. Recently, Wang and Louis (2003) proposed a random intercept model in the clustered binary data setting where the marginal model has a logistic form. An acknowledged limitation of their model is that it allows only a single random effect that varies from cluster to cluster. In this paper, we propose a modification of their model to handle longitudinal data, allowing separate, but correlated, random intercepts at each measurement occasion. The proposed model allows for a flexible correlation structure among the random intercepts, where the correlations can be interpreted in terms of Kendall’s τ. For example, the marginal correlations among the repeated binary outcomes can decline with increasing time separation, while the model retains the property of having matching conditional and marginal logit link functions. Finally, the proposed method is used to analyze data from a longitudinal study designed to monitor cardiac abnormalities in children born to HIV-infected women. PMID:21532998

  3. Sparseness Analysis in the Pretraining of Deep Neural Networks.

    PubMed

    Li, Jun; Zhang, Tong; Luo, Wei; Yang, Jian; Yuan, Xiao-Tong; Zhang, Jian

    2016-03-31

Major progress in deep multilayer neural networks (DNNs) has come from the invention of various unsupervised pretraining methods to initialize network parameters, which lead to good prediction accuracy. This paper presents a sparseness analysis of the hidden units in the pretraining process. In particular, we use the L₁-norm to measure sparseness and provide some sufficient conditions under which pretraining leads to sparseness for the popular pretraining models--such as denoising autoencoders (DAEs) and restricted Boltzmann machines (RBMs). Our experimental results demonstrate that when the sufficient conditions are satisfied, the pretraining models lead to sparseness. Our experiments also reveal that when using the sigmoid activation functions, pretraining plays an important sparseness role in DNNs with sigmoid (Dsigm), and when using the rectifier linear unit (ReLU) activation functions, pretraining becomes less effective for DNNs with ReLU (Drelu). Notably, Drelu can reach a higher recognition accuracy than DNNs with pretraining (DAEs and RBMs), as it can capture the main benefit (such as sparseness encouragement) of pretraining in Dsigm. However, ReLU is not adapted to the different firing rates in biological neurons, because the firing rate actually changes along with the varying membrane resistances. To address this problem, we further propose a family of rectifier piecewise linear units (RePLUs) to fit the different firing rates. The experimental results show that the performance of RePLU is better than that of ReLU, and is comparable with those of some pretraining techniques, such as RBMs and DAEs.

  4. Point particle binary system with components of different masses in the linear regime of the characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Cedeño M, C. E.; de Araujo, J. C. N.

    2016-05-01

A study of binary systems composed of two point particles with different masses in the linear regime of the characteristic formulation of general relativity with a Minkowski background is provided. The present paper generalizes a previous study by Bishop et al. The boundary conditions at the world tubes generated by the particles' orbits are explored, where the metric variables are decomposed in spin-weighted spherical harmonics. The power lost by the emission of gravitational waves is computed using the Bondi News function. The power found is the well-known result obtained by Peters and Mathews using a different approach. This agreement validates the approach considered here. Several multipole term contributions to the gravitational radiation field are also shown.
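
For reference, the Peters–Mathews result that the abstract cites reduces, for a circular binary of masses $m_1$, $m_2$ and orbital separation $r$, to the well-known quadrupole luminosity

```latex
P = \frac{32}{5}\,\frac{G^4}{c^5}\,\frac{m_1^2\,m_2^2\,(m_1+m_2)}{r^5},
```

which for an eccentric orbit is multiplied by the enhancement factor $f(e) = \left(1 + \tfrac{73}{24}e^2 + \tfrac{37}{96}e^4\right)\left(1-e^2\right)^{-7/2}$.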

  5. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    PubMed

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local smoothing kernels to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides valid estimation and inference procedures whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work, even in the independent data case.

  6. Airfoil profiles for minimum pressure drag at supersonic velocities -- general analysis with application to linearized supersonic flow

    NASA Technical Reports Server (NTRS)

    Chapman, Dean R

    1952-01-01

    A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.

  7. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX*

    PubMed Central

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

    2015-01-01

We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local smoothing kernels to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides valid estimation and inference procedures whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work, even in the independent data case. PMID:26283801

  8. An algorithm for the construction of substitution box for block ciphers based on projective general linear group

    NASA Astrophysics Data System (ADS)

    Altaleb, Anas; Saeed, Muhammad Sarwar; Hussain, Iqtadar; Aslam, Muhammad

    2017-03-01

The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S_256 to construct a new kind of S-box. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating the confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by the bit independence criterion, linear approximation probability test, non-linearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
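
The bijectivity and non-linearity checks mentioned in the abstract can be illustrated on a stand-in S-box. The sketch below uses the plain multiplicative-inverse map over GF(2^8) with the AES reduction polynomial (an assumed construction for illustration, not the paper's PGL(2, GF(2^8)) scheme) and computes non-linearity from the Walsh spectrum:

```python
import numpy as np

AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8), reduced modulo AES_POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^254 (Fermat); 0 maps to 0 by convention."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

sbox = [gf_inv(x) for x in range(256)]
assert len(set(sbox)) == 256               # bijectivity check

# Non-linearity from the Walsh spectrum: NL = 128 - max|W(a, b)| / 2 over all
# nonzero input masks a and output masks b, W(a,b) = sum_x (-1)^{a.x ^ b.S(x)}.
parity = np.array([bin(i).count("1") & 1 for i in range(256)])
signs_in = 1 - 2 * parity[np.bitwise_and.outer(np.arange(256), np.arange(256))]
signs_out = 1 - 2 * parity[np.bitwise_and.outer(np.arange(256), np.array(sbox))]
W = signs_in @ signs_out.T
nl = 128 - np.abs(W[1:, 1:]).max() // 2
print("non-linearity:", nl)
```

The inversion map attains non-linearity 112, the same value reached by the AES S-box; the paper's linear approximation probability and strict avalanche tests are computed from the same Walsh and difference tables.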

  9. A Hierarchical Generalized Linear Model in Combination with Dispersion Modeling to Improve Sib-Pair Linkage Analysis.

    PubMed

    Lee, Woojoo; Kim, Jeonghwan; Lee, Youngjo; Park, Taesung; Suh, Young Ju

    2015-01-01

We explored a hierarchical generalized linear model (HGLM) in combination with dispersion modeling to improve sib-pair linkage analysis based on the revised Haseman-Elston regression model for a quantitative trait. A dispersion modeling technique was investigated for sib-pair linkage analysis using simulation studies and real data applications. We considered 4 heterogeneous dispersion settings according to the signal-to-noise ratio (SNR) in the various statistical models based on the Haseman-Elston regression model. Our numerical studies demonstrated that susceptibility loci could be detected well by modeling the dispersion parameter appropriately. In particular, the HGLM had better performance than the linear regression model and the ordinary linear mixed model when the SNR was low, i.e., when substantial noise was present in the data. The study shows that the HGLM in combination with dispersion modeling can be utilized to accurately identify multiple markers showing linkage to familial complex traits. Appropriate dispersion modeling might be more powerful in identifying the markers closest to the major genes that determine a quantitative trait. © 2015 S. Karger AG, Basel.

  10. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    SciTech Connect

    Fowler, Michael James

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  11. The generalized cross-validation method applied to geophysical linear traveltime tomography

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Oliveira, N. P.

    2009-12-01

The oil industry is the major user of applied geophysics methods for subsurface imaging. Among the different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, which is a kinematic approach, and those that use the wave amplitude itself, which is a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. There is a crucial problem in regularization, which is the selection of the regularization parameter lambda. We use generalized cross validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction over all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is used for an application in traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured values of the traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, like a fault and a reservoir. The results using GCV are very good, including those contaminated with noise, and also using different regularization orders, attesting to the feasibility of this technique.
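
The lambda-selection step can be sketched on a generic smoothing problem with a second-derivative penalty, in the spirit of the derivative-matrix regularization the abstract describes. This is a hedged toy example (synthetic 1-D signal, not the authors' traveltime operator):

```python
import numpy as np

# Noisy samples of a smooth profile; regularize with a second-derivative
# penalty and choose lambda by generalized cross validation (GCV).
rng = np.random.default_rng(2)
n = 128
t = np.linspace(0.0, 1.0, n)
truth = np.sin(2 * np.pi * t) + 0.5 * t
y = truth + 0.3 * rng.normal(size=n)

D2 = np.diff(np.eye(n), n=2, axis=0)          # second-difference (derivative) matrix
d, Q = np.linalg.eigh(D2.T @ D2)              # spectrum of the penalty operator
b = Q.T @ y

def gcv(lam):
    """GCV(lam) = n * ||(I - A_lam) y||^2 / tr(I - A_lam)^2,
    where A_lam = (I + lam * D2'D2)^{-1} is the influence matrix."""
    f = 1.0 / (1.0 + lam * d)                 # filter factors of A_lam
    return n * np.sum(((1.0 - f) * b) ** 2) / (n - np.sum(f)) ** 2

lams = np.logspace(-6, 6, 100)
lam_gcv = min(lams, key=gcv)
x_gcv = Q @ (b / (1.0 + lam_gcv * d))         # regularized solution
print(lam_gcv, np.linalg.norm(x_gcv - truth), np.linalg.norm(y - truth))
```

The GCV function is evaluated cheaply from one eigendecomposition, and its minimizer balances the residual misfit against the effective number of fitted parameters without requiring the noise level to be known.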

  12. Joint sparse representation based automatic target recognition in SAR images

    NASA Astrophysics Data System (ADS)

    Zhang, Haichao; Nasrabadi, Nasser M.; Huang, Thomas S.; Zhang, Yanning

    2011-06-01

In this paper, we introduce a novel joint sparse representation based automatic target recognition (ATR) method using multiple views, which not only handles multi-view ATR without knowing the pose but also has the advantage of exploiting the correlations among the multiple views for a single joint recognition decision. We cast the problem as a multi-variate regression model and recover the sparse representations for the multiple views simultaneously. The recognition is accomplished by classifying the target to the class which gives the minimum total reconstruction error accumulated across all the views. Extensive experiments have been carried out on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public database to evaluate the proposed method compared with several state-of-the-art methods such as the linear Support Vector Machine (SVM), kernel SVM, as well as a sparse representation based classifier. Experimental results demonstrate the effectiveness as well as the robustness of the proposed joint sparse representation ATR method.

  13. Automatic anatomy recognition of sparse objects

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Udupa, Jayaram K.; Odhner, Dewey; Wang, Huiqian; Tong, Yubing; Torigian, Drew A.

    2015-03-01

    A general body-wide automatic anatomy recognition (AAR) methodology was proposed in our previous work based on hierarchical fuzzy models of multitudes of objects which was not tied to any specific organ system, body region, or image modality. That work revealed the challenges encountered in modeling, recognizing, and delineating sparse objects throughout the body (compared to their non-sparse counterparts) if the models are based on the object's exact geometric representations. The challenges stem mainly from the variation in sparse objects in their shape, topology, geographic layout, and relationship to other objects. That led to the idea of modeling sparse objects not from the precise geometric representations of their samples but by using a properly designed optimal super form. This paper presents the underlying improved methodology which includes 5 steps: (a) Collecting image data from a specific population group G and body region Β and delineating in these images the objects in Β to be modeled; (b) Building a super form, S-form, for each object O in Β; (c) Refining the S-form of O to construct an optimal (minimal) super form, S*-form, which constitutes the (fuzzy) model of O; (d) Recognizing objects in Β using the S*-form; (e) Defining confounding and background objects in each S*-form for each object and performing optimal delineation. Our evaluations based on 50 3D computed tomography (CT) image sets in the thorax on four sparse objects indicate that substantially improved performance (FPVF~2%, FNVF~10%, and success where the previous approach failed) can be achieved using the new approach.

  14. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived from integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor, compared with the credibility level, for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.

  15. Generalized Confidence Intervals for Intra- and Inter-subject Coefficients of Variation in Linear Mixed-effects Models.

    PubMed

    Forkman, Johannes

    2017-06-15

Linear mixed-effects models are linear models with several variance components. Models with a single random-effects factor have two variance components: the random-effects variance, i. e., the inter-subject variance, and the residual error variance, i. e., the intra-subject variance. In many applications, it is common practice to report variance components as coefficients of variation. The intra- and inter-subject coefficients of variation are the square roots of the corresponding variances divided by the mean. This article proposes methods for computing confidence intervals for intra- and inter-subject coefficients of variation using generalized pivotal quantities. The methods are illustrated through two examples. In the first example, precision is assessed within and between runs in a bioanalytical method validation. In the second example, variation is estimated within and between main plots in an agricultural split-plot experiment. Coverage of the generalized confidence intervals is investigated through simulation and shown to be close to the nominal value.
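
The generalized-pivotal-quantity construction can be illustrated in the simplest case of a single normal sample (one variance component), rather than the paper's mixed-model setting; the sample below is simulated:

```python
import numpy as np

def cv_generalized_ci(x, level=0.95, n_sim=100_000, seed=0):
    """Monte Carlo generalized confidence interval for the coefficient of
    variation sigma/mu of a normal sample, via generalized pivotal quantities."""
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar, s = np.mean(x), np.std(x, ddof=1)
    Z = rng.standard_normal(n_sim)            # pivot for the mean
    U = rng.chisquare(n - 1, n_sim)           # pivot for the variance
    R_sigma = s * np.sqrt((n - 1) / U)        # GPQ for sigma
    R_mu = xbar - Z * R_sigma / np.sqrt(n)    # GPQ for mu
    R_cv = R_sigma / R_mu                     # GPQ for the CV
    a = (1 - level) / 2
    return np.quantile(R_cv, [a, 1 - a])

rng = np.random.default_rng(3)
x = rng.normal(loc=100.0, scale=8.0, size=12)   # true CV = 0.08
lo, hi = cv_generalized_ci(x)
print(round(lo, 4), round(hi, 4))
```

Percentiles of the simulated GPQ distribution give the confidence limits directly; the paper extends this idea to the intra- and inter-subject variance components of a mixed model.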

  16. Application of a generalized linear mixed model to analyze mixture toxicity: survival of brown trout affected by copper and zinc.

    PubMed

    Iwasaki, Yuichi; Brinkman, Stephen F

    2015-04-01

    Increased concerns about the toxicity of chemical mixtures have led to greater emphasis on analyzing the interactions among the mixture components based on observed effects. The authors applied a generalized linear mixed model (GLMM) to analyze survival of brown trout (Salmo trutta) acutely exposed to metal mixtures that contained copper and zinc. Compared with dominant conventional approaches based on an assumption of concentration addition and the concentration of a chemical that causes x% effect (ECx), the GLMM approach has 2 major advantages. First, binary response variables such as survival can be modeled without any transformations, and thus sample size can be taken into consideration. Second, the importance of the chemical interaction can be tested in a simple statistical manner. Through this application, the authors investigated whether the estimated concentration of the 2 metals binding to humic acid, which is assumed to be a proxy of nonspecific biotic ligand sites, provided a better prediction of survival effects than dissolved and free-ion concentrations of metals. The results suggest that the estimated concentration of metals binding to humic acid is a better predictor of survival effects, and thus the metal competition at the ligands could be an important mechanism responsible for effects of metal mixtures. Application of the GLMM (and the generalized linear model) presents an alternative or complementary approach to analyzing mixture toxicity. © 2015 SETAC.

  17. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model.

    PubMed

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J

    2014-12-10

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk.

  18. Instability and change detection in exponential families and generalized linear models, with a study of Atlantic tropical storms

    NASA Astrophysics Data System (ADS)

    Lu, Y.; Chatterjee, S.

    2014-11-01

    Exponential family statistical distributions, including the well-known normal, binomial, Poisson, and exponential distributions, are overwhelmingly used in data analysis. In the presence of covariates, an exponential family distributional assumption for the response random variables results in a generalized linear model. However, it is rarely ensured that the parameters of the assumed distributions are stable through the entire duration of the data collection process. A failure of stability leads to nonsmoothness and nonlinearity in the physical processes that result in the data. In this paper, we propose testing for stability of parameters of exponential family distributions and generalized linear models. A rejection of the hypothesis of stable parameters leads to change detection. We derive the related likelihood ratio test statistic. We compare the performance of this test statistic to the popular normal distributional assumption dependent cumulative sum (Gaussian CUSUM) statistic in change detection problems. We study Atlantic tropical storms using the techniques developed here, so as to understand whether the nature of these tropical storms has remained stable over the last few decades.
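
    As a toy illustration of likelihood-ratio change detection in an exponential family, the sketch below scans a Poisson sequence for a single change in rate; the simulated rates, sample sizes, and seed are arbitrary choices, not data from the storm study.

```python
import numpy as np

def poisson_loglik(x, lam):
    # Poisson log-likelihood up to the additive log(x!) constant
    return np.sum(x * np.log(lam) - lam)

def lrt_changepoint(x):
    """Likelihood ratio statistic for a single change in the Poisson rate."""
    n = len(x)
    ll0 = poisson_loglik(x, x.mean())  # stable-parameter (null) fit
    best_stat, best_k = -np.inf, None
    for k in range(1, n):
        m1, m2 = x[:k].mean(), x[k:].mean()
        if m1 == 0 or m2 == 0:
            continue  # avoid log(0) for degenerate segments
        ll1 = poisson_loglik(x[:k], m1) + poisson_loglik(x[k:], m2)
        stat = 2.0 * (ll1 - ll0)
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k

rng = np.random.default_rng(1)
x = np.concatenate([rng.poisson(4.0, 50), rng.poisson(9.0, 50)])
stat, k = lrt_changepoint(x)  # k should land near the true change at index 50
```

    In practice the statistic would be compared against its null distribution to decide whether a change occurred.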

  19. Generalized Linear Mixed Models for Binary Data: Are Matching Results from Penalized Quasi-Likelihood and Numerical Integration Less Biased?

    PubMed Central

    Benedetti, Andrea; Platt, Robert; Atherton, Juli

    2014-01-01

    Background Over time, adaptive Gaussian Hermite quadrature (QUAD) has become the preferred method for estimating generalized linear mixed models with binary outcomes. However, penalized quasi-likelihood (PQL) is still used frequently. In this work, we systematically evaluated whether matching results from PQL and QUAD indicate less bias in estimated regression coefficients and variance parameters via simulation. Methods We performed a simulation study in which we varied the size of the data set, probability of the outcome, variance of the random effect, number of clusters and number of subjects per cluster, etc. We estimated bias in the regression coefficients, odds ratios and variance parameters as estimated via PQL and QUAD. We ascertained if similarity of estimated regression coefficients, odds ratios and variance parameters predicted less bias. Results Overall, we found that the absolute percent bias of the odds ratio estimated via PQL or QUAD increased as the PQL- and QUAD-estimated odds ratios became more discrepant, though results varied markedly depending on the characteristics of the dataset. Conclusions Given how markedly results varied depending on data set characteristics, specifying a rule above which indicated biased results proved impossible. This work suggests that comparing results from generalized linear mixed models estimated via PQL and QUAD is a worthwhile exercise for regression coefficients and variance components obtained via QUAD, in situations where PQL is known to give reasonable results. PMID:24416249

  20. General expressions for R1ρ relaxation for N-site chemical exchange and the special case of linear chains

    NASA Astrophysics Data System (ADS)

    Koss, Hans; Rance, Mark; Palmer, Arthur G.

    2017-01-01

    Exploration of dynamic processes in proteins and nucleic acids by spin-locking NMR experiments has been facilitated by the development of theoretical expressions for the R1ρ relaxation rate constant covering a variety of kinetic situations. Herein, we present a generalized approximation to the chemical exchange, Rex, component of R1ρ for arbitrary kinetic schemes, assuming the presence of a dominant major site population, derived from the negative reciprocal trace of the inverse Bloch-McConnell evolution matrix. This approximation is equivalent to first-order truncation of the characteristic polynomial derived from the Bloch-McConnell evolution matrix. For three- and four-site chemical exchange, the first-order approximations are sufficient to distinguish different kinetic schemes. We also introduce an approach to calculate R1ρ for linear N-site schemes, using the matrix determinant lemma to reduce the corresponding 3N × 3N Bloch-McConnell evolution matrix to a 3 × 3 matrix. The first- and second-order expansions of the determinant of this 3 × 3 matrix are closely related to previously derived equations for two-site exchange. The second-order approximations for linear N-site schemes can be used to obtain more accurate approximations for non-linear N-site schemes, such as triangular three-site or star four-site topologies. The expressions presented herein provide powerful means for the estimation of Rex contributions for both low (CEST-limit) and high (R1ρ-limit) radiofrequency field strengths, provided that the population of one state is dominant. The general nature of the new expressions allows for consideration of complex kinetic situations in the analysis of NMR spin relaxation data.
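
    The matrix determinant lemma used for the reduction above states det(A + uvᵀ) = det(A)(1 + vᵀA⁻¹u); a quick numerical check on a random well-conditioned matrix (the size is arbitrary) confirms the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)  # well-conditioned test matrix
u = rng.standard_normal((5, 1))
v = rng.standard_normal((5, 1))

# Matrix determinant lemma: det(A + u v^T) = det(A) * (1 + v^T A^{-1} u)
lhs = np.linalg.det(A + u @ v.T)
rhs = np.linalg.det(A) * (1.0 + (v.T @ np.linalg.inv(A) @ u).item())
```

    The same rank-one update argument is what collapses the large evolution matrix to a small one for linear chains.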

  1. Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity.

    PubMed

    Yu, Guoshen; Sapiro, Guillermo; Mallat, Stéphane

    2012-05-01

    A general framework for solving image inverse problems with piecewise linear estimations is introduced in this paper. The approach is based on Gaussian mixture models, which are estimated via a maximum a posteriori expectation-maximization algorithm. A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques. We demonstrate that, in a number of image inverse problems, including interpolation, zooming, and deblurring of narrow kernels, the same simple and computationally efficient algorithm yields results in the same ballpark as those of the state of the art.

  2. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  3. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]

  4. Automatic target recognition via sparse representations

    NASA Astrophysics Data System (ADS)

    Estabridis, Katia

    2010-04-01

    Automatic target recognition (ATR) based on the emerging technology of Compressed Sensing (CS) can considerably improve accuracy, speed and cost associated with these types of systems. An image based ATR algorithm has been built upon this new theory, which can perform target detection and recognition in a low dimensional space. Compressed dictionaries (A) are formed to include rotational information for a scale of interest. The algorithm seeks to identify y (test sample) as a linear combination of the dictionary elements: y = Ax, where A ∈ R^(n×m) (n < m) and x is a sparse vector whose non-zero entries identify the input y. The signal x will be sparse with respect to the dictionary A as long as y is a valid target. The algorithm can reject clutter and background, which are part of the input image. The detection and recognition problems are solved by finding the sparse solution to the underdetermined system y = Ax via Orthogonal Matching Pursuit (OMP) and l1-minimization techniques. Visible and MWIR imagery collected by the Army Night Vision and Electronic Sensors Directorate (NVESD) was utilized to test the algorithm. Results show average detection and recognition rates above 95% for targets at ranges up to 3 km for both image modalities.
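
    A minimal sketch of the sparse-recovery step, solving y = Ax by Orthogonal Matching Pursuit; the Gaussian dictionary and the planted 3-sparse signal below are synthetic stand-ins for the compressed dictionaries and imagery described above.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of A to explain y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)  # unit-norm dictionary atoms
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.5, -2.0, 1.0]  # planted sparse code
y = A @ x_true
x_hat = omp(A, y, k=3)
```

    With a random dictionary of low coherence, the three planted atoms are recovered exactly; clutter inputs leave a large residual instead.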

  5. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft-threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods, yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
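
    The encoding step (project low-level features onto the learned basis, then apply a soft-threshold activation) can be sketched as follows; the dimensions, threshold, and random "basis" are illustrative stand-ins for the learned dictionary.

```python
import numpy as np

def sparse_features(X, D, alpha):
    """Project features X onto basis D, then soft-threshold the activations."""
    Z = X @ D.T  # projections onto the basis vectors
    return np.sign(Z) * np.maximum(np.abs(Z) - alpha, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16))  # 100 low-level feature vectors
D = rng.standard_normal((32, 16))   # 32 basis vectors (stand-in for learned ones)
S = sparse_features(X, D, alpha=2.0)  # many activations are zeroed out
```

    The thresholding zeroes out weak projections, so each feature vector activates only a few basis vectors, which is what keeps the encoding cheap.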

  6. Parallel preconditioning techniques for sparse CG solvers

    SciTech Connect

    Basermann, A.; Reichel, B.; Schelthoff, C.

    1996-12-31

    Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iterations of the simple diagonally scaled CG and are shown to be well suited for massively parallel machines.
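
    A minimal sketch of the diagonally scaled (Jacobi-preconditioned) CG baseline mentioned above; the SPD test matrix is synthetic, and the more sophisticated polynomial or incomplete Cholesky preconditioners would replace the diagonal here.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=500):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r      # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

rng = np.random.default_rng(0)
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)       # synthetic SPD test matrix
b = rng.standard_normal(n)
x, iters = pcg(A, b, 1.0 / np.diag(A))
```

    Only the preconditioner application changes between variants, which is why the choice of preconditioner dominates both convergence and parallel efficiency.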

  7. Robust Multi Sensor Classification via Jointly Sparse Representation

    DTIC Science & Technology

    2016-03-14

    model [3] when we developed the multi-sensor joint sparse representation fusion model in the presence of gross but sparse noise penalized by an ℓ1 ... complementary features from multiple measurements, we incorporate different structures on the concatenated coefficient matrix A through the penalized function FS ... sparsity structure that simultaneously penalizes several sparsity levels in a combined cost function. In the most general form, our model searches for the

  8. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works of sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
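
    The hard and soft thresholding rules that the hybrid rule generalizes can be stated in a few lines; the hybrid rule itself depends on the LAZE hyperparameters and is not reproduced here.

```python
import numpy as np

def hard_threshold(z, t):
    # Keep coefficients whose magnitude exceeds t, zero the rest
    return np.where(np.abs(z) > t, z, 0.0)

def soft_threshold(z, t):
    # Shrink magnitudes toward zero by t (the lasso/iterative-thresholding rule)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
h = hard_threshold(z, 1.0)  # [-3.0, 0.0, 0.0, 1.5, 4.0]
s = soft_threshold(z, 1.0)  # [-2.0, 0.0, 0.0, 0.5, 3.0]
```

    Hard thresholding preserves surviving coefficients exactly, while soft thresholding biases them toward zero; the hybrid rule interpolates between the two behaviors.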

  9. Solutions for Determining the Significance Region Using the Johnson-Neyman Type Procedure in Generalized Linear (Mixed) Models.

    PubMed

    Lazar, Ann A; Zerbe, Gary O

    2011-12-01

    Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA), the Johnson-Neyman procedure can be used to determine the significance region; for the hierarchical linear model (HLM), the Miyazaki and Maier (M-M) procedure has been suggested. However, neither procedure can handle nonnormally distributed data. Furthermore, the M-M procedure produces biased (downward) results because it uses the Wald test, does not control the inflated Type I error rate due to multiple testing, and requires implementing multiple software packages to determine the significance region. In this article, we address these limitations by proposing solutions for determining the significance region suitable for generalized linear (mixed) models (GLM or GLMM). These proposed solutions incorporate test statistics that resolve the biased results, control the Type I error rate using Scheffé's method, and use a single statistical software package to determine the significance region.

  10. AN ALMOST LINEAR TIME ALGORITHM FOR A GENERAL HAPLOTYPE SOLUTION ON TREE PEDIGREES WITH NO RECOMBINATION AND ITS EXTENSIONS

    PubMed Central

    Li, Xin

    2010-01-01

    We study the haplotype inference problem from pedigree data under the zero recombination assumption, which is well supported by real data for tightly linked markers (i.e. single nucleotide polymorphisms (SNPs)) over a relatively large chromosome segment. We solve the problem in a rigorous mathematical manner by formulating genotype constraints as a linear system of inheritance variables. We then utilize disjoint-set structures to encode connectivity information among individuals, to detect constraints from genotypes, and to check consistency of constraints. On a tree pedigree without missing data, our algorithm can output a general solution as well as the number of total specific solutions in nearly linear time O(mn · α(n)), where m is the number of loci, n is the number of individuals and α is the inverse Ackermann function, which is a further improvement over existing algorithms. We also extend the idea to looped pedigrees and pedigrees with missing data by considering existing (partial) constraints on inheritance variables. The algorithm has been implemented in C++ and will be incorporated into our PedPhase package. Experimental results show that it can correctly identify all 0-recombinant solutions with great efficiency. Comparisons with two other popular algorithms show that the proposed algorithm achieves 10- to 10^5-fold improvements over a variety of parameter settings. The experimental study also provides empirical evidence for the complexity bounds suggested by theoretical analysis. PMID:19507288
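
    The disjoint-set machinery behind the near-linear O(mn · α(n)) bound can be sketched as below; reducing the consistency check to the union return value is a simplification of the paper's constraint handling on inheritance variables.

```python
class DisjointSet:
    """Union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return False  # already linked: new constraint needs a consistency check
        if self.rank[ri] < self.rank[rj]:
            ri, rj = rj, ri
        self.parent[rj] = ri  # union by rank keeps the trees shallow
        if self.rank[ri] == self.rank[rj]:
            self.rank[ri] += 1
        return True

# Link constraint groups among six hypothetical individuals
ds = DisjointSet(6)
ds.union(0, 1); ds.union(2, 3); ds.union(1, 2)
```

    Each find/union costs amortized α(n), the inverse Ackermann factor in the stated time bound.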

  11. A generalized electrostatic micro-mirror (GEM) model for a two-axis convex piecewise linear shaped MEMS mirror

    NASA Astrophysics Data System (ADS)

    Edwards, C. L.; Edwards, M. L.

    2009-05-01

    MEMS micro-mirror technology offers the opportunity to replace larger optical actuators with smaller, faster ones for lidar, network switching, and other beam steering applications. Recent developments in modeling and simulation of MEMS two-axis (tip-tilt) mirrors have resulted in closed-form solutions that are expressed in terms of physical, electrical and environmental parameters related to the MEMS device. The closed-form analytical expressions enable dynamic time-domain simulations without excessive computational overhead and are referred to as the Micro-mirror Pointing Model (MPM). Additionally, these first-principle models have been experimentally validated with in-situ static, dynamic, and stochastic measurements illustrating their reliability. These models have assumed that the mirror has a rectangular shape. Because the corners can limit the dynamic operation of a rectangular mirror, it is desirable to shape the mirror, e.g., mitering the corners. Presented in this paper is the formulation of a generalized electrostatic micromirror (GEM) model with an arbitrary convex piecewise linear shape that is readily implemented in MATLAB and SIMULINK for steady-state and dynamic simulations. Additionally, such a model permits an arbitrary shaped mirror to be approximated as a series of linearly tapered segments. Previously, "effective area" arguments were used to model a non-rectangular shaped mirror with an equivalent rectangular one. The GEM model shows the limitations of this approach and provides a pre-fabrication tool for designing mirror shapes.

  12. Correlated-imaging-based chosen plaintext attack on general cryptosystems composed of linear canonical transforms and phase encodings

    NASA Astrophysics Data System (ADS)

    Wu, Jingjing; Liu, Wei; Liu, Zhengjun; Liu, Shutian

    2015-03-01

    We introduce a chosen-plaintext attack scheme on general optical cryptosystems that use linear canonical transforms and phase encoding, based on correlated imaging. The plaintexts are chosen as Gaussian random real number matrices, and the corresponding ciphertexts are regarded as prior knowledge of the proposed attack method. To reconstruct the secret plaintext, correlated imaging is employed using the known resources. Differing from the reported attack methods, there is no need to decipher the distribution of the decryption key. The original secret image can be directly recovered by the attack in the absence of the decryption key. In addition, the improved cryptosystems combined with pixel scrambling operations are also vulnerable to the proposed attack method. Necessary mathematical derivations and numerical simulations are carried out to demonstrate the validity of the proposed attack scheme.

  13. Parametric Variable Selection in Generalized Partially Linear Models with an Application to Assess Condom Use by HIV-infected Patients

    PubMed Central

    Leng, Chenlei; Liang, Hua; Martinson, Neil

    2011-01-01

    To study significant predictors of condom use in HIV-infected adults, we propose the use of generalized partially linear models and develop a variable selection procedure incorporating a least squares approximation. Local polynomial regression and spline smoothing techniques are used to estimate the baseline nonparametric function. The asymptotic normality of the resulting estimate is established. We further demonstrate that, with the proper choice of the penalty functions and the regularization parameter, the resulting estimate performs as well as an oracle procedure. Finite sample performance of the proposed inference procedure is assessed by Monte Carlo simulation studies. An application to assess condom use by HIV-infected patients yields some interesting results, which cannot be obtained when an ordinary logistic model is used. PMID:21465515

  14. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    PubMed

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above.

  15. Variable selection in Bayesian generalized linear-mixed models: an illustration using candidate gene case-control association studies.

    PubMed

    Tsai, Miao-Yu

    2015-03-01

    The problem of variable selection in the generalized linear-mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to save computational load, we propose an efficient approximation approach with Laplace's method and Taylor's expansion to approximate intractable integrals in GLMMs. Simulation studies and an application of HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene-gene associations, after adjusting for complex structures among clusters. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Event-Triggered Schemes on Leader-Following Consensus of General Linear Multiagent Systems Under Different Topologies.

    PubMed

    Xu, Wenying; Ho, Daniel W C; Li, Lulu; Cao, Jinde

    2017-01-01

    This paper investigates the leader-following consensus for multiagent systems with general linear dynamics by means of event-triggered schemes (ETS). We propose three types of schemes, namely, distributed ETS (distributed-ETS), centralized ETS (centralized-ETS), and clustered ETS (clustered-ETS) for different network topologies. All these schemes guarantee that all followers can track the leader eventually. It should be emphasized that all event-triggered protocols in this paper depend on local information and their executions are distributed. Moreover, it is shown that such an event-triggered mechanism can significantly reduce the frequency of control updates. Further, positive inter-event time intervals are assured for those cases of distributed-ETS, centralized-ETS, and clustered-ETS. In addition, two methods are proposed to avoid continuous communication between agents for event detection. Finally, numerical examples are provided to illustrate the effectiveness of the ETSs.

  17. Metrics of separation performance in chromatography: Part 3: General separation performance of linear solvent strength gradient liquid chromatography.

    PubMed

    Blumberg, Leonid M; Desmet, Gert

    2015-09-25

    The separation performance metrics defined in Part 1 of this series are applied to the evaluation of general separation performance of linear solvent strength (LSS) gradient LC. Among the evaluated metrics was the peak capacity of an arbitrary segment of a chromatogram. Also evaluated were the peak width, the separability of two solutes, the utilization of separability, and the speed of analysis, all at an arbitrary point of a chromatogram. The means are provided to express all these metrics as functions of an arbitrary time during LC analysis, as functions of an arbitrary outlet solvent strength changing during the analysis, as functions of parameters of the solutes eluting during the analysis, and as functions of several other factors. The separation performance of gradient LC is compared with the separation performance of temperature-programmed GC evaluated in Part 2.

  18. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  19. Simplified Linear Equation Solvers users manual

    SciTech Connect

    Gropp, W. ); Smith, B. )

    1993-02-01

    The solution of large sparse systems of linear equations is at the heart of many algorithms in scientific computing. The SLES package is a set of easy-to-use yet powerful and extensible routines for solving large sparse linear systems. The design of the package allows new techniques to be used in existing applications without any source code changes in the applications.

  20. Sparse Methods for Biomedical Data.

    PubMed

    Ye, Jieping; Liu, Jun

    2012-06-01

    Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data.

  1. Sparse Methods for Biomedical Data

    PubMed Central

    Ye, Jieping; Liu, Jun

    2013-01-01

Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great deal of research effort in the past decade due to the norm's sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data. PMID:24076585
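The ℓ1 idea can be sketched on synthetic "gene selection" data: lasso regression, solved here by iterative soft-thresholding (ISTA, one standard solver among many; the feature counts and regularization weight below are illustrative only):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=3000):
    """Iterative shrinkage-thresholding for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w - step * (X.T @ (X @ w - y)), step * lam)
    return w

# Synthetic setting: 200 candidate "genes", only the first 5 truly relevant.
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]
y = X @ w_true + 0.01 * rng.standard_normal(n)

w_hat = ista(X, y, lam=0.5)
print(np.flatnonzero(np.abs(w_hat) > 0.2))  # indices of the selected features
```

Despite having twice as many features as samples, the ℓ1 penalty drives irrelevant coefficients to zero, which is exactly the sparsity-inducing behavior the review describes.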

  2. Accounting for uncertainty in confounder and effect modifier selection when estimating average causal effects in generalized linear models.

    PubMed

    Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew

    2015-09-01

Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty about which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.

  3. Effect of Smoothing in Generalized Linear Mixed Models on the Estimation of Covariance Parameters for Longitudinal Data.

    PubMed

    Mullah, Muhammad Abu Shadeque; Benedetti, Andrea

    2016-11-01

Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson, and binary data. Through simulations, we compare the performance of SPMMs to that of simpler methods for estimating the nonlinear association, such as fractional polynomials and parametric nonlinear functions. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for highly serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV-infected men enrolled in the Multicenter AIDS Cohort Study.

  4. Optical double-image encryption and authentication by sparse representation.

    PubMed

    Mohammed, Emad A; Saadon, H L

    2016-12-10

An optical double-image encryption and authentication method based on sparse representation is proposed. The information from double-image encryption can be integrated into a sparse representation. Unlike traditional double-image encryption techniques, only sparse (partial) data from the encrypted data are used for the authentication process. Simulation results demonstrate that correct authentication is achieved even with partial information from the encrypted data. The randomly selected sparse encrypted information can serve as an effective key for a security system. The proposed method is therefore feasible, effective, and can provide an additional security layer for optical security systems. In addition, the method reduces storage and transmission requirements, since only a small fraction of the encrypted information needs to be retained.

  5. Dictionary learning algorithms for sparse representation.

    PubMed

    Kreutz-Delgado, Kenneth; Murray, Joseph F; Rao, Bhaskar D; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J

    2003-02-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial "25 words or less"), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
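The alternation the paper describes (a sparse-coding step, then a dictionary update) can be sketched with simpler stand-ins: greedy matching pursuit in place of the FOCUSS variants for the coding step, and the method-of-optimal-directions (MOD) pseudoinverse update for the dictionary. The synthetic data and all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def matching_pursuit(Y, D, k):
    """Greedy k-sparse coding of each column of Y over dictionary D.
    (A simple stand-in for the FOCUSS variants used in the paper.)"""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        r = Y[:, j].copy()
        for _ in range(k):
            i = int(np.argmax(np.abs(D.T @ r)))   # best-matching atom
            c = D[:, i] @ r
            X[i, j] += c
            r -= c * D[:, i]
    return X

def learn_dictionary(Y, n_atoms, k, n_iter=30):
    """Alternate sparse coding and a MOD (pseudoinverse) dictionary update."""
    D = rng.standard_normal((Y.shape[0], n_atoms))
    for _ in range(n_iter):
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # unit-norm atoms
        X = matching_pursuit(Y, D, k)
        D = Y @ np.linalg.pinv(X)
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, matching_pursuit(Y, D, k)

# Signals synthesized from a hidden ground-truth dictionary, k atoms each.
m, n_atoms, k, n_sig = 20, 30, 3, 200
D_true = rng.standard_normal((m, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
codes = np.zeros((n_atoms, n_sig))
for j in range(n_sig):
    codes[rng.choice(n_atoms, k, replace=False), j] = rng.standard_normal(k)
Y = D_true @ codes

D_hat, X_hat = learn_dictionary(Y, n_atoms, k)
err = np.linalg.norm(Y - D_hat @ X_hat) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.3f}")
```

Note the dictionary here is overcomplete (30 atoms for 20-dimensional signals), matching the setting the abstract emphasizes.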

  6. Dictionary Learning Algorithms for Sparse Representation

    PubMed Central

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811

  7. Sparse representation of higher-order functional interaction patterns in task-based FMRI data.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Guo, Lei; Liu, Tianming

    2013-01-01

Traditional task-based fMRI activation detection methods, e.g., the widely used general linear model (GLM), assume that the brain's hemodynamic responses follow the block-based or event-related stimulus paradigm. Typically, these activation detections are performed voxel-wise independently, and are then usually followed by statistical corrections. Despite remarkable successes and wide adoption of these methods, it remains largely unknown how functional brain regions interact with each other within specific networks during task performance blocks and in the baseline. In this paper, we present a novel algorithmic pipeline to statistically infer and sparsely represent higher-order functional interaction patterns within the working memory network during task performance and in the baseline. Specifically, a collection of higher-order interactions is inferred via the greedy equivalence search (GES) algorithm for both task and baseline blocks. In the next stage, an effective online dictionary learning algorithm is utilized for sparse representation of the inferred higher-order interaction patterns. Application of this framework to working memory task-based fMRI data reveals interesting and meaningful distributions of the learned sparse dictionary atoms in task and baseline blocks. In comparison with traditional voxel-wise activation detection and recent pair-wise functional connectivity analysis, our framework offers a new methodology for representation and exploration of higher-order functional activities in the brain.

  8. The statistical performance of an MCF-7 cell culture assay evaluated using generalized linear mixed models and a score test.

    PubMed

    Rey deCastro, B; Neuberg, Donna

    2007-05-30

Biological assays often utilize experimental designs where observations are replicated at multiple levels, and where each level represents a separate component of the assay's overall variance. Statistical analysis of such data usually ignores these design effects, whereas more sophisticated methods would improve the statistical power of assays. This report evaluates the statistical performance of an in vitro MCF-7 cell proliferation assay (E-SCREEN) by identifying the optimal generalized linear mixed model (GLMM) that accurately represents the assay's experimental design and variance components. Our statistical assessment found that 17beta-oestradiol cell culture assay data were best modelled with a GLMM configured with a reciprocal link function, a gamma error distribution, and three sources of design variation: plate-to-plate, well-to-well, and the interaction between plate-to-plate variation and dose. The gamma-distributed random error of the assay was estimated to have a coefficient of variation (COV) = 3.2 per cent, and a variance component score test described by X. Lin found that each of the three variance components was statistically significant. The optimal GLMM also confirmed the estrogenicity of five weakly oestrogenic polychlorinated biphenyls (PCBs 17, 49, 66, 74, and 128). Based on information criteria, the optimal gamma GLMM consistently outperformed equivalent naive normal and log-normal linear models, both with and without random effects terms. Because the gamma GLMM was by far the best model on conceptual and empirical grounds, and requires only trivially more effort to use, we encourage its use and suggest that naive models be avoided when possible. Copyright 2006 John Wiley & Sons, Ltd.

  9. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g., the Harwell-Boeing collection. Nonetheless, a drawback is that the approach requires rapid decay of the inverse entries so that a sparse approximation of the inverse is possible. For the class of matrices that come from elliptic PDE problems, however, this assumption may not hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
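The Frobenius-norm construction behind sparse approximate inverses (here in the standard basis, in the spirit of Grote-Huckle; the paper's contribution is moving to a wavelet basis) reduces to small independent per-column least-squares problems. The matrix and sparsity pattern below are illustrative, and a dense array stands in for a real sparse format:

```python
import numpy as np

def spai_fixed_pattern(A, pattern):
    """Right sparse approximate inverse M with a prescribed sparsity pattern.
    For each column j, minimize ||A m_j - e_j||_2 over the rows in pattern[j]."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = pattern[j]                    # allowed nonzero rows of column j
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = m
    return M

# 1-D Laplacian test matrix; pattern = nonzeros of A (a common initial choice).
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
pattern = [np.flatnonzero(A[:, j]) for j in range(n)]
M = spai_fixed_pattern(A, pattern)

resid = np.linalg.norm(A @ M - np.eye(n))
print(f"||AM - I||_F = {resid:.3f}")
```

The residual is well below the do-nothing baseline, but it does not vanish: for PDE matrices like this one the inverse decays slowly in the standard basis, which is precisely the limitation motivating the wavelet-basis idea above.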

  10. Use of reflectance spectrophotometry and colorimetry in a general linear model for the determination of the age of bruises.

    PubMed

    Hughes, Vanessa K; Langlois, Neil E I

    2010-12-01

Bruises can have medicolegal significance such that the age of a bruise may be an important issue. This study sought to determine if colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method and a software application converted the scan data into colorimetry data. In addition, data on factors that might be associated with the determination of the age of a bruise were recorded: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the bruise age. By incorporating the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth, and oxygenation of hemoglobin) were included in the General Linear Model this increased to 31%, implying that 69% of the variation was dependent on other factors.

  11. M-estimation for robust sparse unmixing of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Toomik, Maria; Lu, Shijian; Nelson, James D. B.

    2016-10-01

Hyperspectral unmixing methods often use a conventional least squares based lasso, which assumes that the data follow a Gaussian distribution. The normality assumption is an approximation that is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method combines several appropriate penalty terms. We propose to use an ℓp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results but makes the problem non-convex. The problem, though non-convex, can nonetheless be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers. The M-estimate function reduces the effect of errors with large amplitudes or even assigns the outliers zero weights. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in the data. The ability to mitigate the influence of such outliers therefore offers greater robustness. Qualitative hyperspectral unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
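The iteratively reweighted least squares idea for an ℓp penalty with 0 < p < 1 can be sketched as follows. This is a generic smoothed-ℓp IRLS on synthetic data, not the authors' exact M-lasso; the penalty weight, p, and problem sizes are illustrative:

```python
import numpy as np

def irls_lp(X, y, lam, p=0.7, n_iter=100, eps=1e-8):
    """IRLS for min_w 0.5*||y - Xw||^2 + lam * sum_i (w_i^2 + eps)^(p/2),
    a smoothed l_p penalty with 0 < p < 1.  Each sweep solves a ridge-like
    system whose weights majorize the penalty at the current iterate."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least-squares start
    for _ in range(n_iter):
        W = lam * p * (w ** 2 + eps) ** (p / 2.0 - 1.0)
        w = np.linalg.solve(X.T @ X + np.diag(W), X.T @ y)
    return w

# Synthetic "abundances": 3 active endmembers out of a 40-column library.
rng = np.random.default_rng(2)
n, d = 80, 40
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[[3, 11, 25]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.05 * rng.standard_normal(n)

w_hat = irls_lp(X, y, lam=1.0)
print(np.flatnonzero(np.abs(w_hat) > 0.2))  # nonzero support found by IRLS
```

Because |w|^(p-2) grows without bound as a coefficient shrinks, the reweighting drives small coefficients toward zero far more aggressively than the ℓ1 penalty, which is the extra sparsity the abstract refers to.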

  12. An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging

    PubMed Central

    Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.

    2017-01-01

Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to beamforming methods for enhancing ultrasound imaging. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862

  13. Ordering sparse matrices for cache-based systems

    SciTech Connect

    Biswas, Rupak; Oliker, Leonid

    2001-01-11

The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the floating-point operations within each CG iteration are spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. In contrast, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning.
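A textbook CG implementation makes the dominance of SPMV explicit, and SciPy's reverse Cuthill-McKee routine illustrates the kind of bandwidth-reducing reordering the study examines (the 2-D Poisson matrix below is illustrative; cache effects themselves are not visible at this scale):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def conjugate_gradient(A, b, tol=1e-8, maxiter=500):
    """Textbook CG; the dominant per-iteration cost is the SPMV A @ p."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p                      # sparse matrix-vector product
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 2-D Poisson matrix (symmetric positive definite) in CSR format.
n = 20
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsr()
b = np.ones(A.shape[0])
x = conjugate_gradient(A, b)

# Reordering changes data layout, not the mathematics: the permuted system
# yields the same solution up to the permutation.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm][:, perm]
x_rcm = conjugate_gradient(A_rcm, b[perm])
print(np.linalg.norm(A @ x - b), np.linalg.norm(x_rcm - x[perm]))
```

On cache-based machines the payoff of such reorderings shows up as better locality in the `A @ p` kernel, which is where the ordering and partitioning strategies in the abstract concentrate their effort.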

  14. Sparse representation for classification of dolphin whistles by type.

    PubMed

    Esfahanian, M; Zhuang, H; Erdol, N

    2014-07-01

A compressive-sensing approach called the Sparse Representation Classifier (SRC) is applied to the classification of bottlenose dolphin whistles by type. The SRC algorithm constructs a dictionary of whistles from the collection of training whistles. In the classification phase, an unknown whistle is represented sparsely by a linear combination of the training whistles, and the call class can then be determined with an ℓ1-norm optimization procedure. Experimental studies conducted in this research reveal the advantages and limitations of the proposed method relative to existing techniques such as K-Nearest Neighbors and Support Vector Machines in distinguishing different vocalizations.
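The SRC recipe (dictionary of training samples, ℓ1 sparse code of the test sample, class decided by smallest class-wise reconstruction residual) can be sketched on toy data. The "whistle features" below are synthetic Gaussian clusters, and ISTA is used as a stand-in ℓ1 solver:

```python
import numpy as np

def sparse_code(D, y, lam=0.05, n_iter=500):
    """l1-regularized coding of y over dictionary D (ISTA)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - step * (D.T @ (D @ x - y))
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

def src_classify(D, labels, y):
    """Sparse Representation Classifier: pick the class whose training
    columns best reconstruct y from its sparse code."""
    x = sparse_code(D, y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)

# Toy training dictionary: two classes clustered around different directions.
rng = np.random.default_rng(3)
d, per_class = 30, 20
c0, c1 = rng.standard_normal(d), rng.standard_normal(d)
D = np.column_stack([c0[:, None] + 0.1 * rng.standard_normal((d, per_class)),
                     c1[:, None] + 0.1 * rng.standard_normal((d, per_class))])
D /= np.linalg.norm(D, axis=0)
labels = np.array([0] * per_class + [1] * per_class)

test0 = c0 / np.linalg.norm(c0) + 0.05 * rng.standard_normal(d)
test1 = c1 / np.linalg.norm(c1) + 0.05 * rng.standard_normal(d)
print(src_classify(D, labels, test0), src_classify(D, labels, test1))
```

The sparse code concentrates its weight on the training columns most similar to the test sample, so the winning class is the one whose columns carry that weight.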

15. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    SciTech Connect

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.

  16. A Community Needs Index for Adolescent Pregnancy Prevention Program Planning: Application of Spatial Generalized Linear Mixed Models.

    PubMed

    Johnson, Glen D; Mesler, Kristine; Kacica, Marilyn A

    2017-02-06

Objective: To estimate community needs with respect to risky adolescent sexual behavior in a way that is risk-adjusted for multiple community factors. Methods: Generalized linear mixed modeling was applied to estimate teen pregnancy and sexually transmitted disease (STD) incidence by postal ZIP code in New York State, adjusting for other community covariables and residual spatial autocorrelation. A community needs index was then obtained by summing the risk-adjusted estimates of pregnancy and STD cases. Results: Poisson regression with a spatial random effect was chosen among competing modeling approaches. Both risk-adjusted caseloads and rates were computed for ZIP codes, which allowed risk-based prioritization to help guide funding decisions for a comprehensive adolescent pregnancy prevention program. Conclusions: This approach provides quantitative evidence of community needs with respect to risky adolescent sexual behavior, while adjusting for other community-level variables and stabilizing estimates in areas with small populations. It was therefore well accepted by the affected groups and proved valuable for program planning; the methodology may also prove valuable for follow-up program evaluation. Current research is directed towards further improving the statistical modeling approach and applying it to different health and behavioral outcomes, along with different predictor variables.

  17. General characterization of Tityus fasciolatus scorpion venom. Molecular identification of toxins and localization of linear B-cell epitopes.

    PubMed

    Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C

    2015-06-01

This communication describes the general characteristics of the venom of the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) that is responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with a subcutaneous LD50 of 2.984 mg/kg. SDS-PAGE of the soluble venom resolved 10 fractions ranging in size from 6 to 80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membranes (spot-synthesis technique). The epitopes were located on the 3D structures and some residues important for structure/function were identified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Depth-compensated diffuse optical tomography enhanced by general linear model analysis and an anatomical atlas of human head.

    PubMed

    Tian, Fenghua; Liu, Hanli

    2014-01-15

One of the main challenges in functional diffuse optical tomography (DOT) is to accurately recover the depth of brain activation, which is even more essential when differentiating true brain signals from task-evoked artifacts in the scalp. Recently, we developed a depth-compensated algorithm (DCA) to minimize the depth localization error in DOT. However, the semi-infinite model that was used in DCA deviates significantly from realistic human head anatomy. In the present work, we combined depth-compensated DOT (DC-DOT) with a standard anatomical atlas of the human head. Computer simulations and human measurements of sensorimotor activation were conducted to examine and verify the depth specificity and quantification accuracy of brain atlas-based DC-DOT. In addition, node-wise statistical analysis based on the general linear model (GLM) was implemented and performed in this study, showing that DC-DOT can robustly and accurately identify brain activation at the correct depth for functional brain imaging, even when it co-exists with superficial artifacts.

  19. Assessing intervention efficacy on high-risk drinkers using generalized linear mixed models with a new class of link functions.

    PubMed

    Prates, Marcos O; Aseltine, Robert H; Dey, Dipak K; Yan, Jun

    2013-11-01

Unhealthy alcohol use is one of the leading causes of morbidity and mortality in the United States. Brief interventions with high-risk drinkers during an emergency department (ED) visit are of great interest due to their possible efficacy and low cost. In a collaborative study with patients recruited at 14 academic EDs across the United States, we examined the self-reported number of drinks per week by each patient following exposure to a brief intervention. Count data with overdispersion have mostly been analyzed with generalized linear mixed models (GLMMs), of which only a limited number of link functions are available. Different choices of link function provide different fit and predictive power for a particular dataset. We propose a class of link functions derived from an alternative way to incorporate random effects in a GLMM, which encompasses many existing link functions as special cases. The methodology is naturally implemented in a Bayesian framework, with competing links selected using Bayesian model selection criteria such as the conditional predictive ordinate (CPO). In application to the ED intervention study, all models suggest that the intervention was effective in reducing the number of drinks, but some new models are found to significantly outperform the traditional model as measured by CPO. The validity of CPO in link selection is confirmed in a simulation study that shared the same characteristics as the count data from high-risk drinkers. The dataset and the source code for the best fitting model are available in the Supporting Information.

  20. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.

  1. SNP_NLMM: A SAS Macro to Implement a Flexible Random Effects Density for Generalized Linear and Nonlinear Mixed Models

    PubMed Central

    Vock, David M.; Davidian, Marie; Tsiatis, Anastasios A.

    2014-01-01

Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing whether the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on an NLMM of intravenous drug concentration over time. PMID:24688453

  2. Acute toxicity of ammonia (NH3-N) in sewage effluent to Chironomus riparius: II. Using a generalized linear model

    USGS Publications Warehouse

    Monda, D.P.; Galat, D.L.; Finger, S.E.; Kaiser, M.S.

    1995-01-01

    Toxicity of un-ionized ammonia (NH3-N) to the midge, Chironomus riparius, was compared using laboratory culture (well) water and sewage effluent (≈0.4 mg/L NH3-N) in two 96-h, static-renewal toxicity experiments. A generalized linear model was used for data analysis. For the first and second experiments, respectively, LC50 values were 9.4 mg/L (Test 1A) and 6.6 mg/L (Test 2A) for ammonia in well water, and 7.8 mg/L (Test 1B) and 4.1 mg/L (Test 2B) for ammonia in sewage effluent. Slopes of dose-response curves for Tests 1A and 2A were equal, but mortality occurred at lower NH3-N concentrations in Test 2A (unequal intercepts). Response of C. riparius to NH3 in effluent was not consistent; dose-response curves for Tests 1B and 2B differed in slope and intercept. Nevertheless, C. riparius was more sensitive to ammonia in effluent than in well water in both experiments, indicating a synergistic effect of ammonia in sewage effluent. These results demonstrate the advantages of analyzing the organism's entire range of response, as opposed to generating LC50 values, which represent only one point on the dose-response curve.
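
    The kind of analysis this abstract describes can be sketched as a binomial GLM with a logit link fit by iteratively reweighted least squares, with the LC50 read off the fitted curve. The doses and death counts below are invented for illustration; the study's actual data and model details are in the paper.

```python
import numpy as np

# Hypothetical grouped dose-mortality data (NOT the study's data).
doses = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])   # mg/L NH3-N
n = np.full(6, 20)                                    # organisms per dose group
deaths = np.array([1, 3, 8, 13, 17, 19])

X = np.column_stack([np.ones_like(doses), doses])     # intercept + dose
beta = np.zeros(2)
for _ in range(50):                                   # IRLS iterations
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))                    # fitted mortality rate
    w = n * p * (1 - p)                               # binomial working weights
    z = eta + (deaths / n - p) / (p * (1 - p))        # working response
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

lc50 = -beta[0] / beta[1]                             # dose where p = 0.5
print(round(lc50, 2))
```

    Because the whole dose-response curve is estimated, slope and intercept comparisons between water types (as in the abstract) fall out of the same fit, rather than reducing each test to a single LC50 point.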

  3. A Sequence Kernel Association Test for Dichotomous Traits in Family Samples under a Generalized Linear Mixed Model.

    PubMed

    Yan, Qi; Tiwari, Hemant K; Yi, Nengjun; Gao, Guimin; Zhang, Kui; Lin, Wan-Yu; Lou, Xiang-Yang; Cui, Xiangqin; Liu, Nianjun

    2015-01-01

    The existing methods for identifying multiple rare variants underlying complex diseases in family samples are underpowered. Therefore, we aim to develop a new set-based method for an association study of dichotomous traits in family samples. We introduce a framework for testing the association of genetic variants with diseases in family samples based on a generalized linear mixed model. Our proposed method is based on a kernel machine regression and can be viewed as an extension of the sequence kernel association test (SKAT and famSKAT) for application to family data with dichotomous traits (F-SKAT). Our simulation studies show that the original SKAT has inflated type I error rates when applied directly to family data. By contrast, our proposed F-SKAT has the correct type I error rate. Furthermore, in all of the considered scenarios, F-SKAT, which uses all family data, has higher power than both SKAT, which uses only unrelated individuals from the family data, and another method, which uses all family data. We propose a set-based association test that can be used to analyze family data with dichotomous phenotypes while handling genetic variants with the same or opposite directions of effects as well as any types of family relationships. © 2015 S. Karger AG, Basel.

  4. Projected changes in precipitation and temperature over the Canadian Prairie Provinces using the Generalized Linear Model statistical downscaling approach

    NASA Astrophysics Data System (ADS)

    Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.

    2016-08-01

    In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. The calibrated models are then used to generate daily sequences of precipitation and temperature for the historical period 1962-2005 (conditioned on NCEP predictors) and for the future period 2006-2100 using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models under three Representative Concentration Pathway (RCP) scenarios: RCP2.6, RCP4.5, and RCP8.5. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter, while minimum temperature is expected to warm faster than the maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.

  5. Complex-number representation of informed basis functions in general linear modeling of Functional Magnetic Resonance Imaging.

    PubMed

    Wang, Pengwei; Wang, Zhishun; He, Lianghua

    2012-03-30

    Functional Magnetic Resonance Imaging (fMRI), measuring the Blood Oxygen Level-Dependent (BOLD) signal, is a widely used tool for revealing spatiotemporal patterns of neural activity in the human brain. Standard analysis of fMRI data relies on a general linear model, constructed by convolving the task stimuli with a hypothesized hemodynamic response function (HRF). To capture possible phase shifts in the observed BOLD response, informed basis functions, including the canonical HRF and its temporal derivative, have been proposed to extend the hypothesized hemodynamic response in order to obtain a well-fitting model. Different t contrasts are constructed from the estimated model parameters for detecting the neural activity between different task conditions. However, the estimated model parameters corresponding to the orthogonal basis functions have different physical meanings. It remains unclear how to combine the neural features detected by the two basis functions and construct t contrasts for further analyses. In this paper, we propose a novel method for representing multiple basis functions in the complex domain to model task-driven fMRI data. Using this method, we can treat each pair of model parameters, corresponding respectively to the canonical HRF and its temporal derivative, as one complex number for each task condition. Using the specific rule we have defined, we can conveniently perform arithmetical operations on the estimated model parameters and generate different t contrasts. We validate this method using fMRI data acquired from twenty-two healthy participants who underwent an auditory stimulation task.
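
    The core idea, as described in the abstract, can be sketched in a few lines: pair the canonical-HRF beta and the temporal-derivative beta into one complex number per condition, so magnitude and phase-shift information travel together. The beta values below are invented for illustration.

```python
import numpy as np

# Hypothetical parameter estimates for two task conditions (A and B).
beta_c = np.array([3.0, 1.0])     # canonical-HRF betas
beta_d = np.array([4.0, -1.0])    # temporal-derivative betas

z = beta_c + 1j * beta_d          # one complex coefficient per condition

amplitude = np.abs(z)             # magnitude of the combined response
phase = np.angle(z)               # phase shift encoded by the derivative term

# A contrast between conditions can then be formed directly on the
# complex coefficients, e.g. the difference A - B:
contrast = z[0] - z[1]
print(amplitude, contrast)
```

    How such complex-valued contrasts are turned into the paper's t statistics depends on the specific rule the authors define; this sketch only shows the representation.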

  6. Towards obtaining spatiotemporally precise responses to continuous sensory stimuli in humans: a general linear modeling approach to EEG.

    PubMed

    Gonçalves, Nuno R; Whelan, Robert; Foxe, John J; Lalor, Edmund C

    2014-08-15

    Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans.

  7. Multisite multivariate modeling of daily precipitation and temperature in the Canadian Prairie Provinces using generalized linear models

    NASA Astrophysics Data System (ADS)

    Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.

    2016-11-01

    Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modeled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modeled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large scale atmospheric covariates from the National Center for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.

  8. Optimizing the general linear model for functional near-infrared spectroscopy: an adaptive hemodynamic response function approach

    PubMed Central

    Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju

    2014-01-01

    An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973

  9. Generalized linear solvation energy model applied to solute partition coefficients in ionic liquid-supercritical carbon dioxide systems.

    PubMed

    Planeta, Josef; Karásek, Pavel; Hohnová, Barbora; Sťavíková, Lenka; Roth, Michal

    2012-08-10

    Biphasic solvent systems composed of an ionic liquid (IL) and supercritical carbon dioxide (scCO(2)) have become common in synthesis, extractions and electrochemistry. In the design of related applications, information on interphase partitioning of the target organics is essential, and the infinite-dilution partition coefficients of the organic solutes in IL-scCO(2) systems can conveniently be obtained by supercritical fluid chromatography. The database of experimental partition coefficients obtained previously in this laboratory has been employed to test a generalized predictive model for the solute partition coefficients. The model is an amended version of that described before by Hiraga et al. (J. Supercrit. Fluids, in press). Because of the difficulty of the problem being modeled, the model involves several different concepts - linear solvation energy relationships, density-dependent solvent power of scCO(2), regular solution theory, and the Flory-Huggins theory of athermal solutions. The model shows moderate success in correlating the infinite-dilution solute partition coefficients (K-factors) in individual IL-scCO(2) systems at varying temperature and pressure. However, larger K-factor data sets involving multiple IL-scCO(2) systems appear to be beyond the reach of the model, especially when the ILs involved pertain to different cation classes.

  10. Misconceptions in the use of the General Linear Model applied to functional MRI: a tutorial for junior neuro-imagers

    PubMed Central

    Pernet, Cyril R.

    2014-01-01

    This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (1) model parameterization (modeling baseline or null events) and scaling of the design matrix; (2) hemodynamic modeling using basis functions; and (3) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why “baseline” should not be modeled (model over-parameterization) and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact statistical results. Next, using a simple periodic vs. random event-related design, I show how the hemodynamic model (hemodynamic function only or with derivatives) can affect parameter estimates, as well as detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analyses and give some recommendations. PMID:24478622
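
    The over-parameterization point can be made concrete with a toy design matrix: once an intercept column is present, adding an explicit "baseline" regressor makes the columns linearly dependent. The block pattern below is invented; the tutorial's own examples are in Matlab.

```python
import numpy as np

# Hypothetical on/off block design.
cond = np.array([1, 1, 0, 0, 1, 1, 0, 0], float)   # task regressor
baseline = 1.0 - cond                              # explicit rest regressor
ones = np.ones_like(cond)                          # intercept

X_ok = np.column_stack([ones, cond])               # well-posed design
X_over = np.column_stack([ones, cond, baseline])   # over-parameterized design

# ones = cond + baseline, so X_over has only 2 independent columns of 3,
# and its parameters are no longer uniquely estimable.
print(np.linalg.matrix_rank(X_ok), np.linalg.matrix_rank(X_over))
```

    Some contrasts remain estimable in the rank-deficient design, which is consistent with the tutorial's point that over-parameterization does not necessarily change every statistical result.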

  11. Age- and region-specific hepatitis B prevalence in Turkey estimated using generalized linear mixed models: a systematic review

    PubMed Central

    2011-01-01

    Background To provide a clear picture of the current hepatitis B situation, the authors performed a systematic review to estimate the age- and region-specific prevalence of chronic hepatitis B (CHB) in Turkey. Methods A total of 339 studies with original data on the prevalence of hepatitis B surface antigen (HBsAg) in Turkey and published between 1999 and 2009 were identified through a search of electronic databases, by reviewing citations, and by writing to authors. After a critical assessment, the authors included 129 studies, divided into categories: 'age-specific'; 'region-specific'; and 'specific population group'. To account for the differences among the studies, a generalized linear mixed model was used to estimate the overall prevalence across all age groups and regions. For specific population groups, the authors calculated the weighted mean prevalence. Results The estimated overall population prevalence was 4.57 (95% confidence interval (CI): 3.58, 5.76), and the estimated total number of CHB cases was about 3.3 million. The outcomes for the age-specific groups varied from 2.84 (95% CI: 2.60, 3.10) for the 0-14-year-olds to 6.36 (95% CI: 5.83, 6.90) in the 25-34-year-old group. Conclusion There are large age-group and regional differences in CHB prevalence in Turkey, where CHB remains a serious health problem. PMID:22151620
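
    As a much-simplified sketch of the pooling step: prevalences are usually combined on the logit scale, weighting each study by the inverse of its approximate variance. The paper fits a generalized linear MIXED model with random study effects; the fixed-effect version below only illustrates the logit-scale idea, and the study sizes and HBsAg counts are invented.

```python
import math

# Hypothetical studies: (sample size, number HBsAg-positive).
studies = [(1200, 60), (800, 30), (2500, 110)]

num = den = 0.0
for n, k in studies:
    p = k / n
    logit = math.log(p / (1 - p))
    w = n * p * (1 - p)          # inverse of the logit's approximate variance
    num += w * logit
    den += w

pooled = 1 / (1 + math.exp(-num / den))  # back-transform to a prevalence
print(round(pooled, 4))
```

    Pooling on the logit scale keeps the estimate inside (0, 1) and gives large, informative studies more weight; the mixed-model version additionally absorbs between-study heterogeneity.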

  12. Sample size calculation based on generalized linear models for differential expression analysis in RNA-seq data.

    PubMed

    Li, Chung-I; Shyr, Yu

    2016-12-01

    As RNA-seq rapidly develops and costs continually decrease, the quantity and frequency of samples being sequenced will grow exponentially. With proteomic investigations becoming more multivariate and quantitative, determining a study's optimal sample size is now a vital step in experimental design. Current methods for calculating a study's required sample size are mostly based on the hypothesis testing framework, which assumes each gene count can be modeled through Poisson or negative binomial distributions; however, these methods are limited when it comes to accommodating covariates. To address this limitation, we propose an estimating procedure based on the generalized linear model. This easy-to-use method constructs a representative exemplary dataset and estimates the conditional power, all without requiring complicated mathematical approximations or formulas. Even more attractive, the downstream analysis can be performed with current R/Bioconductor packages. To demonstrate the practicability and efficiency of this method, we apply it to three real-world studies, and introduce our online calculator developed to determine the optimal sample size for an RNA-seq study.
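
    For orientation only, a textbook two-group sample-size formula on the log-expression scale is sketched below. This is NOT the paper's GLM-based procedure (which simulates conditional power from an exemplary dataset); it just makes the power/sample-size trade-off concrete. The sigma value is invented.

```python
import math

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Samples per group for a two-sided ~5% test at ~80% power.

    delta: log fold change to detect; sigma: per-group SD on the log scale.
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Detect a 2-fold change assuming SD 0.7 on the log scale (hypothetical).
print(n_per_group(delta=math.log(2), sigma=0.7))
```

    A GLM-based procedure like the paper's improves on this by modeling counts directly (Poisson/negative binomial) and by accommodating covariates in the design.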

  13. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    PubMed

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0.52 for IRWLS and -0.62 in Sorensen & Waagepetersen (2003).

  14. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets

    PubMed Central

    Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.

    2016-01-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  15. The overlooked potential of generalized linear models in astronomy - III. Bayesian negative binomial regression and globular cluster populations

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.

    2015-10-01

    In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (NGC) is a long-standing puzzle in the astronomical literature. It falls in the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between NGC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter, measurement errors in both axes (either discrete or continuous) and allows modelling the population of GCs on their natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for expected NGC comfortably envelope the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information on each particular galaxy morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different levels of GC production, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types with similar brightness.
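
    The reason a negative binomial GLM suits counts like NGC is its mean-variance relation: unlike the Poisson (variance equals mean), the NB2 parameterization adds a quadratic over-dispersion term controlled by theta. A minimal sketch, with illustrative values only:

```python
def nb2_variance(mu, theta):
    """NB2 variance: mu + mu**2 / theta; approaches Poisson as theta grows."""
    return mu + mu ** 2 / theta

# For mu = 10, a small theta gives variance well above the Poisson value 10,
# capturing the intrinsic scatter the abstract mentions.
print(nb2_variance(10.0, 5.0))
print(nb2_variance(10.0, 1e6))   # nearly Poisson
```

    Modeling NGC this way keeps the response on its natural non-negative integer scale instead of forcing a continuous (e.g. log-normal) approximation.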

  16. Hierarchical multivariate mixture generalized linear models for the analysis of spatial data: An application to disease mapping.

    PubMed

    Torabi, Mahmoud

    2016-09-01

    Disease mapping of a single disease has been widely studied in the public health setting. Simultaneous modeling of related diseases can also be a valuable tool both from the epidemiological and from the statistical point of view. In particular, when we have several measurements recorded at each spatial location, we need to consider multivariate models in order to handle the dependence among the multivariate components as well as the spatial dependence between locations. It is then customary to use multivariate spatial models assuming the same distribution throughout the entire population density. However, in many circumstances, it is a very strong assumption to have the same distribution for all the areas of population density. To overcome this issue, we propose a hierarchical multivariate mixture generalized linear model to simultaneously analyze spatial Normal and non-Normal outcomes. As an application of our proposed approach, esophageal and lung cancer deaths in Minnesota are used to show the advantage of assuming different distributions for different counties of Minnesota over assuming a single distribution for the population density. Performance of the proposed approach is also evaluated through a simulation study. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison

  18. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory.

    PubMed

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F; Neese, Frank

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison

  19. Sparse and powerful cortical spikes.

    PubMed

    Wolfe, Jason; Houweling, Arthur R; Brecht, Michael

    2010-06-01

    Activity in cortical networks is heterogeneous, sparse and often precisely timed. The functional significance of sparseness and precise spike timing is debated, but our understanding of the developmental and synaptic mechanisms that shape neuronal discharge patterns has improved. Evidence for highly specialized, selective and abstract cortical response properties is accumulating. Single-cell stimulation experiments demonstrate a high sensitivity of cortical networks to the action potentials of some, but not all, single neurons. It is unclear how this sensitivity of cortical networks to small perturbations comes about and whether it is a generic property of cortex. The unforeseen sensitivity to cortical spikes puts serious constraints on the nature of neural coding schemes.

  20. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
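
    The "sparse linear combination of training data" idea can be sketched with a lasso solved by ISTA (iterative soft-thresholding); the paper's actual multimodal solver is an alternating direction method with a shared-support constraint, which this single-modality toy omits. The dictionary and test sample below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((8, 5)))   # orthonormal "training" atoms
y = 2.0 * D[:, 1]                                  # test sample = one atom, scaled

lam = 0.1                                          # sparsity penalty
x = np.zeros(5)
for _ in range(100):                               # ISTA, step size 1 (D'D = I)
    g = x + D.T @ (y - D @ x)                      # gradient step on the fit term
    x = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)   # soft-threshold

print(np.round(x, 3))   # only coefficient 1 survives, shrunk toward 0 by lam
```

    The recovered coefficient pattern identifies which training sample (hence which identity) explains the test observation; the multimodal extension couples these patterns across modalities so they agree on the identity.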