Sample records for singular vector decomposition

  1. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
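
    A minimal sketch (not the authors' code) of the SVD-based partition step described above, in Python with NumPy. It assumes the VMD mode matrix has already been computed and stacked row-wise; the number of column blocks and all names are illustrative only.

      import numpy as np

      def submatrix_singular_features(mode_matrix, n_segments=8):
          """Split a (modes x samples) matrix into n_segments column blocks and
          return the singular values of every block as one feature vector."""
          blocks = np.array_split(mode_matrix, n_segments, axis=1)
          return np.concatenate([np.linalg.svd(b, compute_uv=False) for b in blocks])

      # Surrogate mode matrix (5 modes x 1024 samples) standing in for a VMD result.
      rng = np.random.default_rng(0)
      features = submatrix_singular_features(rng.standard_normal((5, 1024)))
      print(features.shape)  # (40,) = 8 blocks x 5 singular values each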

  2. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671

  3. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was performed using vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
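
    The truncation step described above can be sketched as follows (a simplified illustration, not the authors' implementation), assuming a real-valued Jacobian J and residual vector r are already available; the relative threshold is an arbitrary placeholder.

      import numpy as np

      def tsvd_step(J, r, threshold=1e-3):
          """Solve J @ dm ~= r while discarding singular values below
          threshold * s_max, which stabilizes the ill-conditioned update."""
          U, s, Vt = np.linalg.svd(J, full_matrices=False)
          keep = s > threshold * s[0]
          dm = Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])
          return dm, int(keep.sum())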

  4. An Efficient and Robust Singular Value Method for Star Pattern Recognition and Attitude Determination

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Kim, Hye-Young; Junkins, John L.

    2003-01-01

    A new star pattern recognition method is developed using singular value decomposition of a measured unit column vector matrix in a measurement frame and the corresponding cataloged vector matrix in a reference frame. It is shown that singular values and right singular vectors are invariant with respect to coordinate transformation and robust under uncertainty. One advantage of singular value comparison is that a pairing process for individual measured and cataloged stars is not necessary, and the attitude estimation and pattern recognition process are not separated. An associated method for mission catalog design is introduced and simulation results are presented.
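
    The frame-invariance property used above is easy to verify numerically: the singular values of a star-vector matrix are unchanged by any orthogonal change of coordinates. The snippet below is an illustrative check, not the authors' algorithm.

      import numpy as np

      rng = np.random.default_rng(1)
      B = rng.standard_normal((3, 6))                    # six measured unit vectors as columns
      B /= np.linalg.norm(B, axis=0)
      Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal transform
      s_body = np.linalg.svd(B, compute_uv=False)
      s_rot = np.linalg.svd(Q @ B, compute_uv=False)
      print(np.allclose(s_body, s_rot))                  # True: singular values are frame-invariant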

  5. A Systolic Architecture for Singular Value Decomposition,

    DTIC Science & Technology

    1983-01-01

    Presented at the 1st International Colloquium on Vector and Parallel Computing in Scientific Applications, Paris. Contract N00014-82-K-0703 ... Gene Golub, private communication ... G. H. Golub and F. T. Luk, "Singular Value Decomposition" ...

  6. Classification of subsurface objects using singular values derived from signal frames

    DOEpatents

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.

  7. Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrighi, Bill

    2016-03-03

    libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements two parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
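
    A minimal serial, non-incremental sketch of the kind of basis generation such a library performs (libROM's actual API and its incremental algorithms are not shown; the energy threshold and names are placeholders):

      import numpy as np

      def pod_basis(snapshots, energy=0.999):
          """Left singular vectors of a (dof x n_samples) snapshot matrix that
          capture the requested fraction of squared singular-value energy."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          frac = np.cumsum(s**2) / np.sum(s**2)
          k = int(np.searchsorted(frac, energy)) + 1
          return U[:, :k], s[:k]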

  8. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

    We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.

  9. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to extract the feature vector. Generally, since the vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
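
    The downstream feature/classification chain described above (singular values of a learned dictionary, PCA reduction, KNN) can be sketched as below. This is an illustration only: the dictionaries are random surrogates rather than the output of the authors' dictionary learning step, and all sizes and labels are arbitrary.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier

      def singular_value_features(dictionaries):
          """One feature vector of singular values per learned dictionary matrix."""
          return np.vstack([np.linalg.svd(D, compute_uv=False) for D in dictionaries])

      rng = np.random.default_rng(2)
      dicts = [rng.standard_normal((20, 50)) for _ in range(40)]   # surrogate dictionaries
      labels = rng.integers(0, 3, size=40)                         # surrogate fault labels

      X = PCA(n_components=5).fit_transform(singular_value_features(dicts))
      clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
      print(clf.score(X, labels))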

  10. Statistical Analysis of the Ionosphere based on Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Arikan, Feza; Necat Deviren, M.; Toker, Cenk

    2016-07-01

    The ionosphere is made up of a spatio-temporally varying trend structure and secondary variations due to solar, geomagnetic, gravitational and seismic activities. Hence, it is important to monitor the ionosphere and acquire up-to-date information about its state in order both to better understand the physical phenomena that cause the variability and also to predict the effect of the ionosphere on HF and satellite communications, and satellite-based positioning systems. To characterise the behaviour of the ionosphere, we propose to apply Singular Value Decomposition (SVD) to Total Electron Content (TEC) maps obtained from the TNPGN-Active (Turkish National Permanent GPS Network) CORS network. The TNPGN-Active network consists of 146 GNSS receivers spread over Turkey. IONOLAB-TEC values estimated from each station are spatio-temporally interpolated using a Universal Kriging based algorithm with linear trend, namely IONOLAB-MAP, with very high spatial resolution. It is observed that the dominant singular value of TEC maps is an indicator of the trend structure of the ionosphere. The diurnal, seasonal and annual variability of the most dominant value represents the solar effect on the ionosphere in the midlatitude range. Secondary and smaller singular values are indicators of secondary variations, which can be significant especially during geomagnetic storms or seismic disturbances. The dominant singular values are related to the physical basis vectors from which the ionosphere can be fully reconstructed. Therefore, the proposed method can be used both for monitoring the current state of a region and also for the prediction and tracking of future states of the ionosphere using singular values and singular basis vectors. This study is supported by TUBITAK 115E915 and Joint TUBITAK 114E092 and AS CR14/001 projects.

  11. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    NASA Astrophysics Data System (ADS)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

    For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. For the SVD-based method, the image to be inspected was projected onto its first left and right singular vectors respectively. If there were defects in the image, there would be sharp changes in the projections. The defects may then be determined and located according to sharp changes in the projections of each image to be inspected. This method was simple and practical, but the SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in terms of time consumption over image segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image is considered as the reference image, which is acquired under the same environment as the image to be inspected. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and SVD is performed only once for the reference image, off-line, before detection of the defects, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
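
    A compact sketch of the improved scheme described above (illustrative only; the image sizes are arbitrary and random arrays stand in for real images): the reference image is decomposed once offline, and each inspected image is merely projected onto the stored singular vectors.

      import numpy as np

      rng = np.random.default_rng(3)
      reference = rng.random((256, 512))            # defect-free reference image (offline)
      U, _, Vt = np.linalg.svd(reference, full_matrices=False)
      u1, v1 = U[:, 0], Vt[0]                       # first left/right singular vectors

      def inspection_profiles(image):
          """Projections of an inspected image onto the reference singular vectors;
          sharp local changes in either profile flag a candidate defect."""
          return image.T @ u1, image @ v1

      col_profile, row_profile = inspection_profiles(rng.random((256, 512)))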

  12. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition

    PubMed Central

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-01-01

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to extract the feature vector. Generally, since the vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted for automatic identification and classification of fault patterns. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385

  13. Normal forms of Hopf-zero singularity

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to a conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied on the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.

  14. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa

    2012-02-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g. sweeping procedures), which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of n_svd small subcluster vectors using singular value decomposition. For low entanglement entropy S_ee (satisfied by short range Hamiltonians), the truncation error is bounded by exp(-n_svd^(1/S_ee)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007
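
    The compression idea can be illustrated with a plain truncated SVD of a bipartitioned state vector (a toy sketch, not the Lanczos-SVD algorithm itself; the partition sizes and n_svd are arbitrary):

      import numpy as np

      def compress_state(psi, dim_left, dim_right, n_svd):
          """Reshape a state vector over a two-subcluster lattice into a
          dim_left x dim_right matrix and keep its n_svd leading singular triplets."""
          U, s, Vt = np.linalg.svd(psi.reshape(dim_left, dim_right), full_matrices=False)
          return U[:, :n_svd], s[:n_svd], Vt[:n_svd]

      rng = np.random.default_rng(4)
      psi = rng.standard_normal(2**20)
      A, s, B = compress_state(psi, 2**10, 2**10, 16)
      print(np.linalg.norm(psi.reshape(2**10, 2**10) - (A * s) @ B))   # truncation error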

  15. Ultra-Dense Quantum Communication Using Integrated Photonic Architecture

    DTIC Science & Technology

    2012-02-03

    ... t_AB and t_AE have the same right singular vectors, and their singular-value decompositions can be written as t_AB = U_AB S_AB V† (30), t_AE = U_AE S_AE V† (31) ... freedom such as polarization or spatial modes), making its implementation ideal for fiber optics networks. (iii) The protocol promises unprecedented ... as well as temporal correlations. In particular, using 8 wavelength channels for an additional 3 bpp and two polarization states for one additional bpp

  16. Glove-based approach to online signature verification.

    PubMed

    Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A

    2008-06-01

    Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel on-line signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors that capture the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove is presented as an effective high-bandwidth data entry device for signature verification. This SVD-based signature verification technique is tested and its performance is shown to be able to recognize forged signatures with a false acceptance rate of less than 1.2%.
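
    The subspace-angle comparison described above can be sketched with NumPy and SciPy (an illustration under assumed sizes, not the authors' system; random arrays stand in for glove recordings):

      import numpy as np
      from scipy.linalg import subspace_angles

      def principal_subspace(A, r):
          """Left singular vectors spanning the r-dimensional principal subspace of A."""
          return np.linalg.svd(A, full_matrices=False)[0][:, :r]

      rng = np.random.default_rng(5)
      enrolled = rng.standard_normal((14, 200))                    # 14 glove channels x 200 samples
      attempt = enrolled + 0.05 * rng.standard_normal((14, 200))   # a noisy repeat signature
      angles = subspace_angles(principal_subspace(enrolled, 3), principal_subspace(attempt, 3))
      print(np.degrees(angles))                                    # small angles suggest a genuine match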

  17. Unitary Operators on the Document Space.

    ERIC Educational Resources Information Center

    Hoenkamp, Eduard

    2003-01-01

    Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)
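
    A minimal sketch of the LSI mapping discussed above (a generic rank-k SVD projection, not taken from the article; the fold-in convention shown is one common choice):

      import numpy as np

      def lsi(term_doc, k):
          """Rank-k LSI of a terms x documents matrix: returns the term map U_k,
          the singular values, and the documents' k-dimensional concept coordinates."""
          U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
          return U[:, :k], s[:k], s[:k, None] * Vt[:k]

      def fold_in(query_vec, Uk, sk):
          """Map a query term vector into the same k-dimensional concept space."""
          return (query_vec @ Uk) / sk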

  18. A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.

    PubMed

    Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei

    2015-11-01

    A random submatrix method (RSM) is proposed to calculate the low-rank decomposition U_{m×r} V_{n×r}^T (r < m, n) of the matrix Y ∈ R^{m×n} (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr^2 ρ^r) or O(n^3 ρ^{3r}) floating-point operations (flops) are required, which compares favorably with the O(mnr + r^2(m+n)) flops required by the state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only max(n^2, mr+nr) real values need to be saved. Under the assumption that known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k×nρ^k or mρ^l×l, where k or l usually takes the value r+1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value turns into the space associated with any of the r largest singular values is smaller. Based on the theorem, the nρ^k − k null vectors or the l−r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. The experimental results on random synthetic matrices with sizes such as 131072 × 1024 and on real data sets such as dinosaur indicate that RSM is 4.30 ∼ 197.95 times faster than the state-of-the-art algorithms. It meanwhile attains considerably high precision, achieving or approximating the best.

  19. Invariant object recognition based on the generalized discrete radon transform

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2004-04-01

    We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.

  20. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

    This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, and this error vector e is often called "noise". Because of the ill-posedness of the inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to noise perturbations in the presence of ill-posedness. The illustrated results show that the TGSVD has many advantages, such as higher precision, better adaptability and noise immunity, compared with the TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and solving the ill-posed problems that arise when identifying moving forces on a bridge.
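
    For reference, the plain truncated-SVD special case (regularization matrix L equal to the identity) of the regularized solution discussed above looks like this; it is a simplified sketch, not the paper's TGSVD implementation:

      import numpy as np

      def tsvd_solve(A, b, k):
          """Truncated-SVD solution of A x ~= b keeping only the k largest
          singular values, which filters the noise-dominated small-singular-value
          components of an ill-posed problem."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])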

  1. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    Li, Huamin; Linderman, George C.; Szlam, Arthur; Stanton, Kelly P.; Kluger, Yuval; Tygert, Mark

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
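
    The flavor of such randomized methods can be conveyed by a short NumPy sketch of a randomized truncated SVD (random range finder plus re-orthonormalized subspace iterations); this is a generic illustration, not the Algorithm 971 MATLAB code, and the oversampling and iteration counts are placeholders:

      import numpy as np

      def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
          """Approximate rank-k SVD of A via a random range finder."""
          rng = np.random.default_rng(seed)
          Omega = rng.standard_normal((A.shape[1], k + oversample))
          Q, _ = np.linalg.qr(A @ Omega)
          for _ in range(n_iter):                       # power/subspace iterations
              Q, _ = np.linalg.qr(A.T @ Q)
              Q, _ = np.linalg.qr(A @ Q)
          Uhat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
          return (Q @ Uhat)[:, :k], s[:k], Vt[:k]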

  2. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.

  3. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noises and harmonic interferences generated by other components (e.g. axles and gears). In order to extract the bearing fault feature information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimize the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains bearing fault information. However, the high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed by using the RSSD method. The obtained low resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments, as described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.

  4. Protein sequence comparison based on K-string dictionary.

    PubMed

    Yu, Chenglong; He, Rong L; Yau, Stephen S-T

    2013-10-25

    The current K-string-based protein sequence comparisons require large amounts of computer memory because the dimension of the protein vector representation grows exponentially with K. In this paper, we propose a novel concept, the "K-string dictionary", to solve this high-dimensional problem. It allows us to use a much lower dimensional K-string-based frequency or probability vector to represent a protein, and thus significantly reduce the computer memory requirements for their implementation. Furthermore, based on this new concept, we use Singular Value Decomposition to analyze real protein datasets, and the improved protein vector representation allows us to obtain accurate gene trees.

  5. Segmentation of discrete vector fields.

    PubMed

    Li, Hongyu; Chen, Wenbin; Shen, I-Fan

    2006-01-01

    In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.

  6. Singular value decomposition based feature extraction technique for physiological signal analysis.

    PubMed

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques to calculate and describe the complexity of the physiological signal. Many studies use this approach to detect changes in the physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted to replace MSE to extract the features of physiological signals, and adopt the support vector machine (SVM) to classify the different physiological states. A test data set based on the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy rate of 89.157%, which is higher than that using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in diagnosis of congestive heart failure (CHF) disease.

  7. Predicting domain-domain interaction based on domain profiles with feature selection and support vector machines

    PubMed Central

    2010-01-01

    Background Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. Datasets and source code are freely available on the web at http://liao.cis.udel.edu/pub/svdsvm. Implemented in Matlab and supported on Linux and MS Windows. PMID:21034480

  8. On the use of the singular value decomposition for text retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Husbands, P.; Simon, H.D.; Ding, C.

    2000-12-04

    The use of the Singular Value Decomposition (SVD) has been proposed for text retrieval in several recent works. This technique uses the SVD to project very high dimensional document and query vectors into a low dimensional space. In this new space it is hoped that the underlying structure of the collection is revealed thus enhancing retrieval performance. Theoretical results have provided some evidence for this claim and to some extent experiments have confirmed this. However, these studies have mostly used small test collections and simplified document models. In this work we investigate the use of the SVD on large document collections. We show that, if interpreted as a mechanism for representing the terms of the collection, this technique alone is insufficient for dealing with the variability in term occurrence. Section 2 introduces the text retrieval concepts necessary for our work. A short description of our experimental architecture is presented in Section 3. Section 4 describes how term occurrence variability affects the SVD and then shows how the decomposition influences retrieval performance. A possible way of improving SVD-based techniques is presented in Section 5 and concluded in Section 6.

  9. Supervised neural network classification of pre-sliced cooked pork ham images using quaternionic singular values.

    PubMed

    Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2010-03-01

    The quaternionic singular value decomposition is a technique to decompose a quaternion matrix (representation of a colour image) into quaternion singular vector and singular value component matrices exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values, as robust features for the classification of sliced pork ham images, using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performance for the training, validation and test set were 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values.

  10. Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination

    NASA Technical Reports Server (NTRS)

    Ryne, Mark S.; Wang, Tseng-Chan

    1991-01-01

    An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.

  11. Understanding Singular Vectors

    ERIC Educational Resources Information Center

    James, David; Botteron, Cynthia

    2013-01-01

    …matrix yields a surprisingly simple, heuristical approximation to its singular vectors. There are correspondingly good approximations to the singular values. Such rules of thumb provide an intuitive interpretation of the singular vectors that helps explain why the SVD is so…

  12. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
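
    The construction described above amounts to taking the right singular vectors of the constraint matrix that correspond to its zero singular values; a small generic sketch (not the paper's formulation, with an arbitrary tolerance):

      import numpy as np

      def constraint_transformation(C, tol=1e-12):
          """Orthonormal basis T for the null space of the constraint matrix C,
          so every admissible coordinate vector can be written q = T @ p."""
          _, s, Vt = np.linalg.svd(C)
          rank = int(np.sum(s > tol * s[0]))
          return Vt[rank:].T

      C = np.array([[1.0, -1.0, 0.0],      # one homogeneous constraint: q1 = q2
                    [0.0, 0.0, 0.0]])
      T = constraint_transformation(C)
      print(np.allclose(C @ T, 0.0))       # True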

  13. [Surface electromyography signal classification using gray system theory].

    PubMed

    Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai

    2004-12-01

    A new method based on gray correlation was introduced to improve the identification rate for artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by wavelet transform. Singular value decomposition (SVD) was then used to extract a feature vector from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computation costs and requires fewer training samples.

  14. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier.

    PubMed

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-11-10

    Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost of power systems. Because training samples and the types of machine faults available for HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types, for which no training samples exist, as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, a MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF₆ HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods.

  15. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier

    PubMed Central

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-01-01

    Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost of power systems. Because training samples and the types of machine faults available for HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types, for which no training samples exist, as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, a MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF6 HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods. PMID:27834902

  16. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular vector decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) of single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.

  17. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular vector decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.

  18. Operational modal analysis using SVD of power spectral density transmissibility matrices

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Laier, Jose Elias

    2014-05-01

    This paper proposes the singular value decomposition of power spectrum density transmissibility matrices with different references (PSDTM-SVD) as an identification method for the natural frequencies and mode shapes of a dynamic system subjected to excitations under operational conditions. At the system poles, the rows of the proposed transmissibility matrix converge to the same ratio of amplitudes of vibration modes. As a result, the matrices are linearly dependent on the columns, and their singular values converge to zero. Singular values are used to determine the natural frequencies, and the first left singular vectors are used to estimate mode shapes. A numerical example of the finite element model of a beam subjected to colored noise excitation is analyzed to illustrate the accuracy of the proposed method. Results of the PSDTM-SVD method in the numerical example are compared with those obtained using frequency domain decomposition (FDD) and power spectrum density transmissibility (PSDT). It is demonstrated that the proposed method does not depend on the excitation characteristics, in contrast to the FDD method, which assumes white noise excitation, and it further reduces the risk of identifying extra non-physical poles in comparison to the PSDT method. Furthermore, a case study is performed using data from an operational vibration test of a bridge with a simply supported beam system. This real application to a full-sized bridge has shown that the proposed PSDTM-SVD method is able to identify the operational modal parameters. The operational modal parameters identified by the PSDTM-SVD in the real application agree well with those identified by the FDD and PSDT methods.

  19. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.

  20. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).

    PubMed

    Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-05-16

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the output of that process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. The sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating the left singular vectors of the TZ using SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can then be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect from retaining the useful signal and filtering out noise by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain.

  1. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)

    PubMed Central

    Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-01-01

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the output of that process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. The sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating the left singular vectors of the TZ using SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can then be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect from retaining the useful signal and filtering out noise by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain. PMID:29772731

  2. A Higher-Order Generalized Singular Value Decomposition for Comparison of Global mRNA Expression from Multiple Organisms

    PubMed Central

    Ponnapalli, Sri Priya; Saunders, Michael A.; Van Loan, Charles F.; Alter, Orly

    2011-01-01

    The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define a higher-order GSVD (HO GSVD) for N ≥ 2 matrices D_i, each with full column rank. Each matrix is exactly factored as D_i = U_i Σ_i V^T, where V, identical in all factorizations, is obtained from the eigensystem SV = VΛ of the arithmetic mean S of all pairwise quotients A_i A_j^{-1} of the matrices A_i = D_i^T D_i, i ≠ j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The matrix S is nondefective with V and Λ real. Its eigenvalues satisfy λ_k ≥ 1. Equality holds if and only if the corresponding eigenvector v_k is a right basis vector of equal significance in all matrices D_i and D_j, that is σ_{i,k}/σ_{j,k} = 1 for all i and j, and the corresponding left basis vector u_{i,k} is orthogonal to all other vectors in U_i for all i. The eigenvalues λ_k = 1, therefore, define the "common HO GSVD subspace." We illustrate the HO GSVD with a comparison of genome-scale cell-cycle mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly classified. PMID:22216090
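
    A small numerical sketch of the shared factor V defined above (illustrative only; random full-column-rank matrices with a common column dimension stand in for the datasets):

      import numpy as np
      from itertools import permutations

      def ho_gsvd_shared_factor(matrices):
          """Eigensystem S V = V Lambda of the arithmetic mean S of all pairwise
          quotients A_i A_j^{-1}, with A_i = D_i^T D_i, as in the abstract above."""
          A = [D.T @ D for D in matrices]
          quotients = [Ai @ np.linalg.inv(Aj) for Ai, Aj in permutations(A, 2)]
          S = sum(quotients) / len(quotients)
          lam, V = np.linalg.eig(S)
          return lam, V

      rng = np.random.default_rng(6)
      Ds = [rng.standard_normal((m, 4)) for m in (10, 12, 15)]   # shared column dimension n = 4
      lam, V = ho_gsvd_shared_factor(Ds)
      print(np.sort(lam.real))    # eigenvalues satisfy lambda_k >= 1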

  3. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The standard SVD basis vectors are applied and then compared with a transformed SVD, or TSVD, which condenses the features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.

  4. How long the singular value decomposed entropy predicts the stock market? - Evidence from the Dow Jones Industrial Average Index

    NASA Astrophysics Data System (ADS)

    Gu, Rongbao; Shao, Yanmin

    2016-07-01

    In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index for periods of less than one month, but not for longer periods. This shows how long the singular value decomposition entropy can predict the stock market, which extends the result obtained in Caraiani (2014). The result also reveals an essential characteristic of the stock market as a chaotic dynamic system.

  5. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled to form a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is conducted to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
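
    The core of this approach, forming orthonormal image bases from a reference block of frames and projecting later frames onto them, fits in a few lines of numpy. The sketch below uses random data in place of a real high-speed video region of interest and is only meant to illustrate the projection step.

    ```python
    # Illustrative SVD-based subtle-motion extraction on synthetic "video" data.
    import numpy as np

    rng = np.random.default_rng(2)
    T, h, w = 500, 16, 16                     # frames, ROI height, ROI width
    frames = rng.random((T, h, w))            # placeholder for real subimages

    # Reshape a reference block of subimages into column vectors and form a matrix
    K = 100
    A = frames[:K].reshape(K, h * w).T        # (h*w) x K

    # Orthonormal image bases (OIBs) from the SVD
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Project every subimage onto a chosen OIB to obtain a 1-D vibration signal
    oib = U[:, 1]                             # e.g. the second basis image
    signal = frames.reshape(T, h * w) @ oib   # one sample per frame
    ```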

  6. Amino acid "little Big Bang": representing amino acid substitution matrices as dot products of Euclidian vectors.

    PubMed

    Zimmermann, Karel; Gibrat, Jean-François

    2010-01-04

    Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot products of the amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series of substitution matrices. This vector encoding introduces a Euclidian metric in the amino acid space, consistent with the substitution matrices. Such a numerical description of the amino acids is useful when their intrinsic properties are needed, for instance for building sequence profiles or finding consensus sequences, or when using machine learning algorithms such as Support Vector Machines and Neural Networks.
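
    The idea of turning a symmetric scoring matrix into per-amino-acid vectors whose dot products reproduce the matrix can be sketched with a spectral decomposition. The snippet below uses a toy positive semidefinite matrix instead of a real BLOSUM/PAM matrix; real substitution matrices may be indefinite, in which case the dot products only approximate the entries.

    ```python
    # Toy example: amino acid vectors whose dot products reproduce a symmetric matrix.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((20, 4))
    B = X @ X.T                      # stand-in symmetric "substitution" matrix (20 x 20)

    # Eigendecomposition (equivalent to the SVD for a symmetric PSD matrix)
    w, V = np.linalg.eigh(B)
    w = np.clip(w, 0.0, None)        # guard against tiny negative round-off eigenvalues

    # One Euclidean vector per amino acid: row i of V scaled by sqrt(eigenvalues)
    vectors = V * np.sqrt(w)

    # Dot products of the vectors recover the matrix entries
    assert np.allclose(vectors @ vectors.T, B, atol=1e-8)
    ```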

  7. MGRA: Motion Gesture Recognition via Accelerometer.

    PubMed

    Hong, Feng; You, Shujuan; Wei, Meiyu; Zhang, Yongtuo; Guo, Zhongwen

    2016-04-13

    Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA) based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through the time domain, the frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.

  8. Managing focal fields of vector beams with multiple polarization singularities.

    PubMed

    Han, Lei; Liu, Sheng; Li, Peng; Zhang, Yi; Cheng, Huachao; Gan, Xuetao; Zhao, Jianlin

    2016-11-10

    We explore the tight focusing behavior of vector beams with multiple polarization singularities, and analyze the influences of the number, position, and topological charge of the singularities on the focal fields. It is found that the ellipticity of the local polarization states at the focal plane can be determined by the spatial distribution of the polarization singularities of the vector beam. When the spatial locations and topological charges of the singularities have even-fold rotation symmetry, the transverse fields at the focal plane are locally linearly polarized. Otherwise, the polarization state becomes locally hybrid. By appropriately arranging the distribution of the polarization singularities in the vector beam, the polarization distributions of the focal fields can be altered while the intensity remains unchanged.

  9. Singular value description of a digital radiographic detector: Theory and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyprianou, Iacovos S.; Badano, Aldo; Gallas, Brandon D.

    The H operator represents the deterministic performance of any imaging system. For a linear, digital imaging system, this system operator can be written in terms of a matrix, H, that describes the deterministic response of the system to a set of point objects. A singular value decomposition of this matrix results in a set of orthogonal functions (singular vectors) that form the system basis. A linear combination of these vectors completely describes the transfer of objects through the linear system, where the respective singular values associated with each singular vector describe the magnitude with which that contribution to the object is transferred through the system. This paper is focused on the measurement, analysis, and interpretation of the H matrix for digital x-ray detectors. A key ingredient in the measurement of the H matrix is the detector response to a single x ray (or infinitesimal x-ray beam). The authors have developed a method to estimate the 2D detector shift-variant, asymmetric ray response function (RRF) from multiple measured line response functions (LRFs) using a modified edge technique. The RRF measurements cover a range of x-ray incident angles from 0 deg. (equivalent location at the detector center) to 30 deg. (equivalent location at the detector edge) for a standard radiographic or cone-beam CT geometric setup. To demonstrate the method, three beam qualities were tested using the inherent, Lu/Er, and Yb beam filtration. The authors show that measures using the LRF, derived from an edge measurement, underestimate the system's performance when compared with the H matrix derived using the RRF. Furthermore, the authors show that edge measurements must be performed at multiple directions in order to capture rotational asymmetries of the RRF. The authors interpret the results of the H matrix SVD and provide correlations with the familiar MTF methodology. Discussion is made about the benefits of the H matrix technique with regards to signal detection theory, and the characterization of shift-variant imaging systems.

  10. Binary black hole spacetimes with a helical Killing vector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Christian

    Binary black hole spacetimes with a helical Killing vector, which are discussed as an approximation for the early stage of a binary system, are studied in a projection formalism. In this setting the four-dimensional Einstein equations are equivalent to a three-dimensional gravitational theory with a SL(2,R)/SO(1,1) sigma model as the material source. The sigma model is determined by a complex Ernst equation. 2+1 decompositions of the three-metric are used to establish the field equations on the orbit space of the Killing vector. The two Killing horizons of spherical topology which characterize the black holes, the cylinder of light where the Killing vector changes from timelike to spacelike, and infinity are singular points of the equations. The horizon and the light cylinder are shown to be regular singularities, i.e., the metric functions can be expanded in a formal power series in their vicinity. The behavior of the metric at spatial infinity is studied in terms of formal series solutions to the linearized Einstein equations. It is shown that the spacetime is not asymptotically flat in the strong sense of having a smooth null infinity, under the assumption that the metric tends asymptotically to the Minkowski metric. In this case the metric functions have an oscillatory behavior in the radial coordinate in a nonaxisymmetric setting, and the asymptotic multipoles are not defined. The asymptotic behavior of the Weyl tensor near infinity shows that there is no smooth null infinity.

  11. Investigations on the hierarchy of reference frames in geodesy and geodynamics

    NASA Technical Reports Server (NTRS)

    Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.

    1979-01-01

    Problems related to reference directions were investigated. Space- and time-variant angular parameters are illustrated in hierarchic structures or towers. Using least squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed along with the notion of length and scale degrees of freedom. According to the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy was constructed from the set of three base vectors (the gravity, earth-rotation and ecliptic normal vectors). Space and time variations are given with respect to a polar and singular value decomposition or in terms of changes in translation, rotation, and deformation (shear, dilatation or angular and scale distortions).

  12. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Items such as the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.

  13. Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy

    NASA Astrophysics Data System (ADS)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed railway and heavy-haul transport, the locomotive and traction power system has become the main harmonic source of China's power grid. In response to this phenomenon, the system's power quality issues need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of wavelet transform, singular value decomposition and information entropy theory, which combines the unique advantages of the three in signal processing: the wavelet transform captures local time-frequency characteristics, singular value decomposition reveals the basic modal characteristics of the data, and information entropy quantifies the resulting features. Based on the theory of singular value decomposition, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. Then the statistical properties of information entropy are used to analyze the uncertainty of the singular value set, so as to give a definite measure of the complexity of the original signal. Wavelet singular entropy therefore has good application prospects in fault detection, classification and protection. A MATLAB simulation shows that wavelet singular entropy is effective for harmonic analysis of the locomotive and traction power system.
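
    Once a wavelet coefficient matrix is available, the singular entropy itself is a short computation: normalize the singular values and take the Shannon entropy. The sketch below assumes the wavelet step has already produced a coefficient matrix (here random placeholder data) and only shows the SVD and entropy part.

    ```python
    # Wavelet singular entropy of a (placeholder) coefficient matrix.
    import numpy as np

    rng = np.random.default_rng(4)
    W = rng.standard_normal((8, 512))        # stand-in for a wavelet coefficient matrix
                                             # (rows = scales, columns = time samples)

    sigma = np.linalg.svd(W, compute_uv=False)
    p = sigma / sigma.sum()                  # normalized singular values
    wavelet_singular_entropy = -np.sum(p * np.log(p))
    print(wavelet_singular_entropy)
    ```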

  14. The incorrect usage of singular spectral analysis and discrete wavelet transform in hybrid models to predict hydrological time series

    NASA Astrophysics Data System (ADS)

    Du, Kongchang; Zhao, Ying; Lei, Jiaqiang

    2017-09-01

    In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then feed the resulting set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report misleadingly 'high' prediction performance and may cause large errors in practice.

  15. Decomposition of the Multistatic Response Matrix and Target Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chambers, D H

    2008-02-14

    Decomposition of the time-reversal operator for an array, or equivalently the singular value decomposition of the multistatic response matrix, has been used to improve imaging and localization of targets in complicated media. Typically, each singular value is associated with one scatterer even though it has been shown in several cases that a single scatterer can generate several singular values. In this paper we review the analysis of the time-reversal operator (TRO), or equivalently the multistatic response matrix (MRM), of an array system and a small target. We begin with two-dimensional scattering from a small cylinder and then show the results for a small non-spherical target in three dimensions. We show that the number and magnitudes of the singular values contain information about target composition, shape, and orientation.

  16. Polar and singular value decomposition of 3×3 magic squares

    NASA Astrophysics Data System (ADS)

    Trenkler, Götz; Schmidt, Karsten; Trenkler, Dietrich

    2013-07-01

    In this note, we find polar as well as singular value decompositions of a 3×3 magic square, i.e. a 3×3 matrix M with real elements where each row, column and diagonal adds up to the magic sum s of the magic square.

  17. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.

  18. Correlation between topological structure and its properties in dynamic singular vector fields.

    PubMed

    Vasilev, Vasyl; Soskin, Marat

    2016-04-20

    A new technique for establishing topology measurements of static and dynamic singular vector fields is elaborated. It is based on precise measurement of the 3D landscape of the ellipticity distribution of the singular optical field under study, with C points at the tops of the ellipticity hills. Vector fields possess a three-component topology: areas with right-hand (RH) and left-hand (LH) ellipses, and the L lines delimiting them as the singularities of handedness. The azimuth map of polarization ellipses is common to both the RH and LH ellipses of vector fields and does not feel the L lines. The strict rules that define the connection between the sign of the underlying optical vortices and the morphological parameters of the overlying C points were confirmed experimentally. Percolation phenomena explain their realization in singular vector fields and the long duration of their chains, of the order of 10³ s.

  19. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.

  20. Object detection with a multistatic array using singular value decomposition

    DOEpatents

    Hallquist, Aaron T.; Chambers, David H.

    2014-07-01

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
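
    Though the patent abstract does not give an implementation, the detection step it describes, a per-frequency SVD of the multistatic return matrix followed by a comparison of the singular values against background expectations, can be sketched in a few lines. The data cube, threshold, and background model below are all invented placeholders.

    ```python
    # Illustrative per-frequency SVD detection on a synthetic multistatic data cube.
    import numpy as np

    rng = np.random.default_rng(5)
    F, n_tx, n_rx = 64, 8, 8
    returns = rng.standard_normal((F, n_tx, n_rx))          # frequency-domain return matrices
    expected = np.ones(F)                                    # expected top singular value, no object

    detections = []
    for f in range(F):
        s = np.linalg.svd(returns[f], compute_uv=False)      # singular values at this frequency
        detections.append(s[0] > 3.0 * expected[f])          # flag a strong departure from background

    object_present = any(detections)
    print(object_present)
    ```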

  1. A robust watermarking scheme using lifting wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anuj; Verma, Deval; Verma, Vivek Singh

    2017-01-01

    The present paper proposes a robust image watermarking scheme using lifting wavelet transform (LWT) and singular value decomposition (SVD). Second level LWT is applied on host/cover image to decompose into different subbands. SVD is used to obtain singular values of watermark image and then these singular values are updated with the singular values of LH2 subband. The algorithm is tested on a number of benchmark images and it is found that the present algorithm is robust against different geometric and image processing operations. A comparison of the proposed scheme is performed with other existing schemes and observed that the present scheme is better not only in terms of robustness but also in terms of imperceptibility.

  2. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  3. Systematic Constraint Selection Strategy for Rate-Controlled Constrained-Equilibrium Modeling of Complex Nonequilibrium Chemical Kinetics

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad

    2018-04-01

    Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracy with far fewer differential equations than the fully Detailed Kinetic Model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification procedure has recently been proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call the overall degree of disequilibrium (ODoD), because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields its degree of disequilibrium (DoD). The ODoD vector evolves in the same (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the first C largest singular values of the SVD and setting all the others to zero we obtain the best rank-C approximation of the matrix of ODoD traces, whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
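
    The rank-C truncation at the heart of this constraint selection is a truncated SVD of the ODoD trace matrix. The numpy sketch below, with arbitrary values of S, E, and C and a random placeholder trace matrix, shows the truncation and the resulting C-dimensional constraint subspace.

    ```python
    # Best rank-C approximation of a (placeholder) ODoD trace matrix.
    import numpy as np

    rng = np.random.default_rng(6)
    S_species, E_elements, T_steps, C = 30, 4, 200, 6
    rank = S_species - E_elements
    odod = rng.standard_normal((S_species, rank)) @ rng.standard_normal((rank, T_steps))

    U, s, Vt = np.linalg.svd(odod, full_matrices=False)

    # Keep only the C largest singular values
    odod_C = U[:, :C] @ np.diag(s[:C]) @ Vt[:C]

    # The columns of U[:, :C] span the C-dimensional constraint subspace
    constraints = U[:, :C]
    print(np.linalg.matrix_rank(odod_C))      # C
    ```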

  4. Singular vectors for the WN algebras

    NASA Astrophysics Data System (ADS)

    Ridout, David; Siu, Steve; Wood, Simon

    2018-03-01

    In this paper, we use free field realisations of the A-type principal, or Casimir, WN algebras to derive explicit formulae for singular vectors in Fock modules. These singular vectors are constructed by applying screening operators to Fock module highest weight vectors. The action of the screening operators is then explicitly evaluated in terms of Jack symmetric functions and their skew analogues. The resulting formulae depend on sequences of pairs of integers that completely determine the Fock module as well as the Jack symmetric functions.

  5. Using Singular Value Decomposition to Investigate Degraded Chinese Character Recognition: Evidence from Eye Movements during Reading

    ERIC Educational Resources Information Center

    Wang, Hsueh-Cheng; Schotter, Elizabeth R.; Angele, Bernhard; Yang, Jinmian; Simovici, Dan; Pomplun, Marc; Rayner, Keith

    2013-01-01

    Previous research indicates that removing initial strokes from Chinese characters makes them harder to read than removing final or internal ones. In the present study, we examined the contribution of important components to character configuration via singular value decomposition. The results indicated that when the least important segments, which…

  6. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  7. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
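
    A hedged sketch of the underlying idea, reducing a large vector of health parameters to a low-dimensional tuning vector via the SVD of their influence matrix, is given below (the two following records share the same title and describe the same technique). The influence matrix, dimensions, and degradation vector are random placeholders, not the engine model of the report.

    ```python
    # SVD-based selection of a low-dimensional tuning vector (illustrative only).
    import numpy as np

    rng = np.random.default_rng(7)
    n_outputs, n_health, k = 6, 10, 3        # k = dimension a Kalman filter can estimate
    G = rng.standard_normal((n_outputs, n_health))   # toy health-parameter influence matrix

    U, s, Vt = np.linalg.svd(G, full_matrices=False)

    # Tuning parameters q = V_k^T h approximate the effect of the full health vector h
    V_k = Vt[:k].T                           # n_health x k
    h = rng.standard_normal(n_health)        # true (unmeasurable) degradation
    q = V_k.T @ h                            # low-dimensional tuner for the Kalman filter

    # Reconstructed effect on the outputs uses only the k strongest directions of G,
    # i.e. the least-squares-best rank-k representation
    approx_outputs = G @ V_k @ q
    ```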

  8. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  9. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  10. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

    A semi-blind watermarking scheme based on singular value decomposition (SVD) and the homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray scale image by inserting an invisible eight-bit gray scale watermark into it. The key approach of the scheme is to apply the homomorphic transform to the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition to the reflectance component. Peak-signal-to-noise-ratio (PSNR), normalized-correlation-coefficient (NCC) and mean-structural-similarity-index-measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of the watermarked images. Presence of the watermark is ensured by visual inspection and the high NCC and MSSIM values of the extracted watermarks. Robustness of the scheme is verified by the high NCC and MSSIM values for attacked watermarked images.

  11. The effect of receiver coil orientations on the imaging performance of magnetic induction tomography

    NASA Astrophysics Data System (ADS)

    Gürsoy, D.; Scharfetter, H.

    2009-10-01

    Magnetic induction tomography is an imaging modality which aims to reconstruct the conductivity distribution of the human body. It uses magnetic induction to excite the body and an array of sensor coils to detect the perturbations in the magnetic field. Up to now, much effort has been expended with the aim of finding an efficient coil configuration to extend the dynamic range of the measured signal. However, the merits of different sensor orientations for the imaging performance have not been studied in great detail so far. The aim of this study is therefore to fill that void with a systematic investigation of the effect of coil orientations on the reconstruction quality of the designs. To this end, a number of alternative receiver array designs with different coil orientations were suggested, and the designs were evaluated based on the singular value decomposition. A generalized class of quality measures, whose subclasses are linked to both spatial resolution and uncertainty measures, was used to assess the performance along the radial and axial axes of a cylindrical phantom. The detectability of local conductivity perturbations in the phantom was explored using the reconstructed images. It can be concluded that the proper choice of coil orientations significantly influences the number of usable singular vectors and accordingly the stability of image reconstruction, although the effect of increased stability on the quality of the reconstructed images was not of paramount importance due to the reduced independent information content of the associated singular vectors.

  12. A new feature constituting approach to detection of vocal fold pathology

    NASA Astrophysics Data System (ADS)

    Hariharan, M.; Polat, Kemal; Yaacob, Sazali

    2014-08-01

    In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be an excellent and reliable tool for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing performance of the proposed features. In this work, two databases, the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database, are used. Four different supervised classifiers, namely k-nearest neighbour (k-NN), least-square support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% for both the MEEI database and the MAPACI speech pathology database.

  13. Spatial Distribution of Phase Singularities in Optical Random Vector Waves.

    PubMed

    De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L

    2016-08-26

    Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.

  14. Characterization of agricultural land using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Herries, Graham M.; Danaher, Sean; Selige, Thomas

    1995-11-01

    A method is defined and tested for the characterization of agricultural land from multi-spectral imagery, based on singular value decomposition (SVD) and key vector analysis. The SVD technique, which bears a close resemblance to multivariate statistical techniques, has previously been successfully applied to problems of signal extraction for marine data and forestry species classification. In this study the SVD technique is used as a classifier for agricultural regions, using airborne Daedalus ATM data with 1 m resolution. The specific region chosen is an experimental research farm in Bavaria, Germany. This farm has a large number of crops within a very small region and hence is not amenable to existing techniques. There are a number of other significant factors which render existing techniques such as the maximum likelihood algorithm less suitable for this area. These include a very dynamic terrain and a tessellated pattern of soil differences, which together cause large variations in the growth characteristics of the crops. The SVD technique is applied to this data set using a multi-stage classification approach, removing unwanted land-cover classes one step at a time. Typical classification accuracies for SVD are of the order of 85-100%. Preliminary results indicate that it is a fast and efficient classifier with the ability to differentiate between crop types such as wheat, rye, potatoes and clover. The results of characterizing 3 sub-classes of Winter Wheat are also shown.

  15. Analysis of Self-Associating Proteins by Singular Value Decomposition of Solution Scattering Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Tim E.; Craig, Bruce A.; Kondrashkina, Elena

    2008-07-08

    We describe a method by which a single experiment can reveal both the association model (pathway and constants) and low-resolution structures of a self-associating system. Small-angle scattering data are collected from solutions at a range of concentrations. These scattering data curves are mass-weighted linear combinations of the scattering from each oligomer. Singular value decomposition of the data yields a set of basis vectors from which the scattering curve for each oligomer is reconstructed using coefficients that depend on the association model. A search identifies the association pathway and constants that provide the best agreement between reconstructed and observed data. Using simulated data with realistic noise, our method finds the correct pathway and association constants. Depending on the simulation parameters, reconstructed curves for each oligomer differ from the ideal by 0.05-0.99% in median absolute relative deviation. The reconstructed scattering curves are fundamental to further analysis, including interatomic distance distribution calculation and low-resolution ab initio shape reconstruction of each oligomer in solution. This method can be applied to x-ray or neutron scattering data from small angles to moderate (or higher) resolution. Data can be taken under physiological conditions, or particular conditions (e.g., temperature) can be varied to extract fundamental association parameters (ΔH_ass, ΔS_ass).

  16. Classical stability of sudden and big rip singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrow, John D.; Lip, Sean Z. W.

    2009-08-15

    We introduce a general characterization of sudden cosmological singularities and investigate the classical stability of homogeneous and isotropic cosmological solutions of all curvatures containing these singularities to small scalar, vector, and tensor perturbations using gauge-invariant perturbation theory. We establish that sudden singularities at which the scale factor, expansion rate, and density are finite are stable except for a set of special parameter values. We also apply our analysis to the stability of Big Rip singularities and find the conditions for their stability against small scalar, vector, and tensor perturbations.

  17. Time-varying singular value decomposition for periodic transient identification in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang

    2016-09-01

    For rotating machines, bearing defects are generally represented as periodic transient impulses in the acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. However, background noise reduces the identification performance for periodic faults in practice. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we obtain a time-varying singular value matrix (TSVM). Each column in the TSVM is occupied by the singular values of the corresponding sliding window, and each row represents the intrinsic structure of the raw signal, namely the time-singular-value-sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance the fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to the diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
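
    A minimal numpy sketch of the TSVM construction is shown below: slide a window along the signal, build a small Hankel-style matrix inside each window, and stack the singular values column by column. The signal model, window length, and matrix shape are invented for illustration and may differ from the exact construction used in the paper.

    ```python
    # Sliding-window SVD producing a time-varying singular value matrix (TSVM).
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.linspace(0, 1, 4096)
    x = np.sin(2 * np.pi * 13 * t) + 0.5 * rng.standard_normal(t.size)   # toy vibration signal

    win, hop, rows = 64, 8, 8              # window length, step, Hankel rows
    columns = []
    for start in range(0, x.size - win, hop):
        seg = x[start:start + win]
        H = np.lib.stride_tricks.sliding_window_view(seg, rows).T        # Hankel-style matrix
        columns.append(np.linalg.svd(H, compute_uv=False))
    tsvm = np.array(columns).T             # each row is a time-singular-value-sequence (TSVS)
    print(tsvm.shape)
    ```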

  18. Through-wall image enhancement using fuzzy and QR decomposition.

    PubMed

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. Fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.

  19. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Direction of arrival of plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.

  20. Exploring the Common Dynamics of Homologous Proteins. Application to the Globin Family

    PubMed Central

    Maguid, Sandra; Fernandez-Alberti, Sebastian; Ferrelli, Leticia; Echave, Julian

    2005-01-01

    We present a procedure to explore the global dynamics shared between members of the same protein family. The method allows the comparison of patterns of vibrational motion obtained by Gaussian network model analysis. After the identification of collective coordinates that were conserved during evolution, we quantify the common dynamics within a family. Representative vectors that describe these dynamics are defined using a singular value decomposition approach. As a test case, the globin heme-binding family is considered. The two lowest normal modes are shown to be conserved within this family. Our results encourage the development of models for protein evolution that take into account the conservation of dynamical features. PMID:15749782

  1. Evaluation of constraint stabilization procedures for multibody dynamical systems

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.

    1987-01-01

    Comparative numerical studies of four constraint treatment techniques for the simulation of general multibody dynamic systems are presented, with results given for the example of a classical crank mechanism and for a simplified version of the seven-link manipulator deployment problem. The staggered stabilization technique (Park, 1986) is found to yield improved accuracy and robustness over Baumgarte's (1972) technique, the singular decomposition technique (Walton and Steeves, 1969), and the penalty technique (Lotstedt, 1979). Furthermore, the staggered stabilization technique offers software modularity, and the only data each solution module needs to exchange with the others is a set of vectors plus a common module to generate the gradient matrix of the constraints, B.

  2. Unimodularity criteria for Poisson structures on foliated manifolds

    NASA Astrophysics Data System (ADS)

    Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury

    2018-03-01

    We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.

  3. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed ℓ0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which the joint smoothed ℓ0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the ℓ1-norm minimization based methods, such as ℓ1-SVD (singular value decomposition), RV (real-valued) ℓ1-SVD and RV ℓ1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.

  4. Visualization of x-ray computer tomography using computer-generated holography

    NASA Astrophysics Data System (ADS)

    Daibo, Masahiro; Tayama, Norio

    1998-09-01

    A theory for converting x-ray projection data directly into a hologram, obtained by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to provide the theoretical basis for an all-electronic, high-speed, see-through 3D visualization system intended for application to medical diagnosis and non-destructive testing. First, the CT is expressed using the pseudo-inverse matrix obtained by the singular value decomposition. The CGH is expressed in matrix form. Next, the 'projection to hologram conversion' (PTHC) matrix is calculated by multiplying the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted to the hologram vector directly by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of computation is drastically reduced. Using our direct conversion technique, we demonstrate a CT cross section reconstructed by a He-Ne laser in 3D space from real x-ray projection data acquired with x-ray television equipment.

  5. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
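
    The paper's randomized generalized SVD is specific to the regularized inversion, but the core idea of a randomized decomposition, sketching the range of a large matrix with a random projection before taking a small exact SVD, can be illustrated in a few lines. The sketch below is a generic Halko-style randomized SVD on a random matrix, not the authors' randomized GSVD.

    ```python
    # Generic randomized SVD via random range-finding (illustrative, not the paper's RGSVD).
    import numpy as np

    def randomized_svd(A, rank, oversample=10, seed=9):
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], rank + oversample))  # random test matrix
        Q, _ = np.linalg.qr(A @ Omega)              # orthonormal basis for the range of A
        B = Q.T @ A                                 # small projected matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

    A = np.random.default_rng(10).standard_normal((2000, 300))
    U, s, Vt = randomized_svd(A, rank=20)
    print(U.shape, s.shape, Vt.shape)
    ```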

  6. Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform

    PubMed Central

    Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart

    2014-01-01

    Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieves reconstruction accuracy comparable to that of low-rank matrix recovery methods and outperforms conventional sparse recovery methods. PMID:24901331
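
    For reference, a plain-numpy HOSVD of a small 3-D array can be computed from the SVDs of its mode unfoldings, as sketched below. This is a generic HOSVD illustration on random data, not the paper's CS-MRI reconstruction pipeline.

    ```python
    # HOSVD of a 3-D array via mode unfoldings (generic illustration).
    import numpy as np

    def unfold(tensor, mode):
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    rng = np.random.default_rng(11)
    X = rng.standard_normal((32, 32, 20))          # e.g. x, y, time

    # Factor matrices: left singular vectors of each mode unfolding
    U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0] for m in range(X.ndim)]

    # Core tensor: multiply each mode by the transposed factor matrix
    G = X.copy()
    for m, Um in enumerate(U):
        G = np.moveaxis(np.tensordot(Um.T, np.moveaxis(G, m, 0), axes=1), 0, m)

    # Multiplying the core back by the factors recovers X (up to round-off)
    R = G.copy()
    for m, Um in enumerate(U):
        R = np.moveaxis(np.tensordot(Um, np.moveaxis(R, m, 0), axes=1), 0, m)
    assert np.allclose(R, X)
    ```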

  7. Nonnormal operators in physics, a singular-vectors approach: illustration in polarization optics.

    PubMed

    Tudor, Tiberiu

    2016-04-20

    The singular-vectors analysis of a general nonnormal operator defined on a finite-dimensional complex vector space is given in the frame of a pure operatorial ("nonmatrix," "coordinate-free") approach, performed in a Dirac language. The general results are applied in the field of polarization optics, where nonnormal operators are widespread as operators of various polarization devices. Two nonnormal polarization devices representative of the class of nonnormal and even pathological operators, the standard two-layer elliptical ideal polarizer (a singular operator) and the three-layer ambidextrous ideal polarizer (a singular and defective operator), are analyzed in detail. It is pointed out that the unitary polar component of the operator exists and preserves, in such pathological cases too, its role of converting the input singular basis of the operator into its output singular basis. It is shown that for any nonnormal ideal polarizer a complementary one exists, such that the tandem of their operators uniquely determines their (common) unitary polar component.

  8. Singular-value decomposition of a tomosynthesis system

    PubMed Central

    Burvall, Anna; Barrett, Harrison H.; Myers, Kyle J.; Dainty, Christopher

    2010-01-01

    Tomosynthesis is an emerging technique with potential to replace mammography, since it gives 3D information at a relatively small increase in dose and cost. We present an analytical singular-value decomposition of a tomosynthesis system, which provides the measurement component of any given object. The method is demonstrated on an example object. The measurement component can be used as a reconstruction of the object, and can also be utilized in future observer studies of tomosynthesis image quality. PMID:20940966
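
    The analytical SVD of the tomosynthesis operator is specific to the paper and is not reproduced here; the sketch below only illustrates, with a toy random system matrix, how an SVD separates an object into its measurement component (which fully determines the data) and a null component that the system cannot see. All dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((40, 100))          # toy system matrix: 40 measurements, 100 voxels
f = rng.standard_normal(100)                # toy object

U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = int(np.sum(s > s.max() * 1e-12))        # numerical rank
V_meas = Vt[:r].T                           # right singular vectors spanning the measurement space

f_meas = V_meas @ (V_meas.T @ f)            # measurement component (visible to the system)
f_null = f - f_meas                         # null component (invisible to the system)
print(np.linalg.norm(H @ f_null))           # ~0: the null component produces no data
print(np.allclose(H @ f, H @ f_meas))       # True: the data depend only on the measurement component
```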

  9. Application of reiteration of Hankel singular value decomposition in quality control

    NASA Astrophysics Data System (ADS)

    Staniszewski, Michał; Skorupa, Agnieszka; Boguszewicz, Łukasz; Michalczuk, Agnieszka; Wereszczyński, Kamil; Wicher, Magdalena; Konopka, Marek; Sokół, Maria; Polański, Andrzej

    2017-07-01

    Medical centres are obliged to store past medical records, including the results of quality assurance (QA) tests of the medical equipment, which is especially useful in checking the reproducibility of medical devices and procedures. Analysis of multivariate time series is an important part of the quality control of NMR data. In this work we propose an anomaly detection tool based on the Reiteration of Hankel Singular Value Decomposition method. The presented method was compared with external software, and the authors obtained comparable results.

  10. Experimental Modal Analysis and Dynamic Component Synthesis. Volume 3. Modal Parameter Estimation

    DTIC Science & Technology

    1987-12-01

    residues as well as poles is achieved. A singular value decomposition method has been used to develop a complex mode indicator function (CMIF) [70]... which can be used to help determine the number of poles before the analysis. The CMIF is formed by performing a singular value decomposition of all of... servo systems which can include both low and high damping modes. CMIF can be used to indicate close or repeated eigenvalues before the parameter...

  11. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)

    2002-01-01

    Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.

  12. The semantic representation of prejudice and stereotypes.

    PubMed

    Bhatia, Sudeep

    2017-07-01

    We use a theory of semantic representation to study prejudice and stereotyping. Particularly, we consider large datasets of newspaper articles published in the United States, and apply latent semantic analysis (LSA), a prominent model of human semantic memory, to these datasets to learn representations for common male and female, White, African American, and Latino names. LSA performs a singular value decomposition on word distribution statistics in order to recover word vector representations, and we find that our recovered representations display the types of biases observed in human participants using tasks such as the implicit association test. Importantly, these biases are strongest for vector representations with moderate dimensionality, and weaken or disappear for representations with very high or very low dimensionality. Moderate dimensional LSA models are also the best at learning race, ethnicity, and gender-based categories, suggesting that social category knowledge, acquired through dimensionality reduction on word distribution statistics, can facilitate prejudiced and stereotyped associations.
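
    As a hedged illustration of the LSA mechanism described above (a truncated SVD of a word-by-document count matrix yielding low-dimensional word vectors), a toy NumPy sketch follows; the four miniature "documents" and the chosen dimensionality are invented and bear no relation to the newspaper corpora used in the study.

```python
import numpy as np

docs = ["nurse cares patient hospital",
        "doctor treats patient hospital",
        "engineer builds bridge steel",
        "engineer designs machine steel"]
vocab = sorted({w for d in docs for w in d.split()})
# Word-by-document count matrix.
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                   # dimensionality of the latent semantic space
word_vecs = U[:, :k] * s[:k]            # k-dimensional word representations

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

idx = {w: n for n, w in enumerate(vocab)}
print(cos(word_vecs[idx["doctor"]], word_vecs[idx["nurse"]]))     # high: shared contexts
print(cos(word_vecs[idx["doctor"]], word_vecs[idx["engineer"]]))  # low: disjoint contexts
```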

  13. Non-coaxial superposition of vector vortex beams.

    PubMed

    Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P

    2016-02-10

    Vector vortex beams are classified into four types depending upon the spatial variation in their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with the same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance for the ultrahigh security of polarization-encrypted data that utilizes vector vortex beams and for multiple optical trapping with the non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.

  14. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  15. Singular Vectors' Subtle Secrets

    ERIC Educational Resources Information Center

    James, David; Lachance, Michael; Remski, Joan

    2011-01-01

    Social scientists use adjacency tables to discover influence networks within and among groups. Building on work by Moler and Morrison, we use ordered pairs from the components of the first and second singular vectors of adjacency matrices as tools to distinguish these groups and to identify particularly strong or weak individuals.

  16. Singular value decomposition metrics show limitations of detector design in diffuse fluorescence tomography

    PubMed Central

    Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.

    2010-01-01

    The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566

  17. The predictive power of singular value decomposition entropy for stock market dynamics

    NASA Astrophysics Data System (ADS)

    Caraiani, Petre

    2014-01-01

    We use a correlation-based approach to analyze financial data from the US stock market, both daily and monthly observations from the Dow Jones. We compute the entropy based on the singular value decomposition of the correlation matrix for the components of the Dow Jones Industrial Index. Based on a moving window, we derive time varying measures of entropy for both daily and monthly data. We find that the entropy has a predictive ability with respect to stock market dynamics as indicated by the Granger causality tests.
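
    A minimal sketch of the quantity described above, assuming the usual definition of SVD entropy as the Shannon entropy of the normalized singular values of the correlation matrix; the random return series below merely stand in for Dow Jones constituents.

```python
import numpy as np

def svd_entropy(returns):
    """returns: T x N array (time x assets). Shannon entropy of the normalized
    singular values of the asset correlation matrix."""
    C = np.corrcoef(returns, rowvar=False)
    s = np.linalg.svd(C, compute_uv=False)
    p = s / s.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
T, N = 250, 30
independent = rng.standard_normal((T, N))                      # weakly correlated market
common = rng.standard_normal((T, 1))
correlated = 0.8 * common + 0.2 * rng.standard_normal((T, N))  # strong common factor

print(svd_entropy(independent))   # close to log(N): spread-out singular value spectrum
print(svd_entropy(correlated))    # much lower: one dominant singular value
```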

  18. Multifractality in Cardiac Dynamics

    NASA Astrophysics Data System (ADS)

    Ivanov, Plamen Ch.; Rosenblum, Misha; Stanley, H. Eugene; Havlin, Shlomo; Goldberger, Ary

    1997-03-01

    Wavelet decomposition is used to analyze the fractal scaling properties of heart beat time series. The singularity spectrum D(h) of the variations in the beat-to-beat intervals is obtained from the wavelet transform modulus maxima which contain information on the hierarchical distribution of the singularities in the signal. Multifractal behavior is observed for healthy cardiac dynamics while pathologies are associated with loss of support in the singularity spectrum.

  19. Recurrence quantity analysis based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bian, Songhan; Shang, Pengjian

    2017-05-01

    The recurrence plot (RP) has turned into a powerful tool in many different sciences over the last three decades. To quantify the complexity and structure of an RP, recurrence quantification analysis (RQA) has been developed based on measures of recurrence density, diagonal lines, vertical lines and horizontal lines. This paper studies the RP based on singular value decomposition, which is a new perspective on RP study. The principal singular value proportion (PSVP) is proposed as a new RQA measure: a larger PSVP indicates higher complexity of a system, whereas a smaller PSVP reflects a regular and stable system. Considering the advantage of this method in detecting the complexity and periodicity of systems, several simulation and real-data experiments are chosen to examine the performance of this new RQA measure.
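
    A minimal sketch of the proposed measure under the stated definition (the largest singular value of the recurrence plot divided by the sum of all singular values); the recurrence threshold, signals and lengths are illustrative assumptions.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R[i, j] = 1 when |x_i - x_j| < eps (scalar time series)."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(float)

def psvp(R):
    """Principal singular value proportion: largest singular value over the sum of all."""
    s = np.linalg.svd(R, compute_uv=False)
    return s[0] / s.sum()

t = np.linspace(0, 20 * np.pi, 500)
periodic = np.sin(t)                                        # regular, stable dynamics
noise = np.random.default_rng(4).standard_normal(500)       # irregular dynamics

# Per the paper, a smaller PSVP corresponds to a regular, stable system and a larger PSVP
# to a more complex one; here we simply compute the measure for two toy signals.
print("periodic:", psvp(recurrence_plot(periodic, 0.2)))
print("noise:   ", psvp(recurrence_plot(noise, 0.2)))
```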

  20. Kac determinant and singular vector of the level N representation of Ding-Iohara-Miki algebra

    NASA Astrophysics Data System (ADS)

    Ohkubo, Yusuke

    2018-05-01

    In this paper, we obtain the formula for the Kac determinant of the algebra arising from the level N representation of the Ding-Iohara-Miki algebra. It is also discovered that its singular vectors correspond to generalized Macdonald functions (the q-deformed version of the AFLT basis).

  1. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖ subject to ‖Ax - b‖ ≤ ε, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
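
    The MTRSVD algorithms themselves (randomized SVDs, general-form regularization, inner LSQR solves) are not reproduced here; as a hedged illustration of why truncating an SVD regularizes a discrete ill-posed problem, a standard-form (L = I) TSVD toy example follows, with an invented smoothing operator, noise level and truncation levels.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0, 1, n)
# Toy ill-posed operator: a Gaussian blurring matrix with rapidly decaying singular values.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)       # noisy, blurred data

U, s, Vt = np.linalg.svd(A)

def tsvd(k):
    """Regularized solution keeping only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

r = int(np.sum(s > 1e-12 * s[0]))                    # numerical rank
for k in (10, r):
    err = np.linalg.norm(tsvd(k) - x_true) / np.linalg.norm(x_true)
    print(f"k={k:3d}  relative error = {err:.2e}")
# A modest truncation level recovers x_true well; inverting every component down to the
# numerical rank amplifies the data noise catastrophically.
```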

  2. Singular reduction of resonant Hamiltonians

    NASA Astrophysics Data System (ADS)

    Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2018-06-01

    We investigate the dynamics of resonant Hamiltonians with n degrees of freedom to which we attach a small perturbation. Our study is based on the geometric interpretation of singular reduction theory. The flow of the Hamiltonian vector field is reconstructed from the cross sections corresponding to an approximation of this vector field in an energy surface. This approximate system is also built using normal forms and applying reduction theory obtaining the reduced Hamiltonian that is defined on the orbit space. Generically, the reduction is of singular character and we classify the singularities in the orbit space, getting three different types of singular points. A critical point of the reduced Hamiltonian corresponds to a family of periodic solutions in the full system whose characteristic multipliers are approximated accordingly to the nature of the critical point.

  3. Predictability of a Coupled Model of ENSO Using Singular Vector Analysis: Optimal Growth and Forecast Skill.

    NASA Astrophysics Data System (ADS)

    Xue, Yan

    The optimal growth and its relationship with the forecast skill of the Zebiak and Cane model are studied using a simple statistical model best fit to the original nonlinear model and local linear tangent models about idealized climatic states (the mean background and ENSO cycles in a long model run), and the actual forecast states, including two sets of runs using two different initialization procedures. The seasonally varying Markov model best fit to a suite of 3-year forecasts in a reduced EOF space (18 EOFs) fits the original nonlinear model reasonably well and has comparable or better forecast skill. The initial error growth in a linear evolution operator A is governed by the eigenvalues of A^T A; the square roots of the eigenvalues of A^T A are the singular values, and its eigenvectors are the singular vectors. One dominant growing singular vector is found, and the optimal 6 month growth rate is largest for a (boreal) spring start and smallest for a fall start. Most of the variation in the optimal growth rate of the two forecasts is seasonal, attributable to the seasonal variations in the mean background, except that in the cold events it is substantially suppressed. It is found that the mean background (zero anomaly) is the most unstable state, and the "forecast IC states" are more unstable than the "coupled model states". The dominant growing singular vector is characterized by north-south and east-west dipoles, convergent winds on the equator in the eastern Pacific and a deepened thermocline in the whole equatorial belt. This singular vector is insensitive to initial time and optimization time, but its final pattern is a strong function of the initial state. The ENSO system is inherently unpredictable, for the dominant singular vector can amplify 5-fold to 24-fold in 6 months and evolve into the large scales characteristic of ENSO. However, the inherent ENSO predictability is only a secondary factor, while the mismatches between the model and data are a primary factor controlling the current forecast skill.
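
    A short numerical check of the relationship stated above, namely that the singular values of A are the square roots of the eigenvalues of A^T A and that the eigenvectors of A^T A are the right singular vectors; the 18 x 18 random matrix below is only a stand-in for the Markov operator in the reduced EOF space.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((18, 18))            # toy linear evolution operator (18 modes)

evals, evecs = np.linalg.eigh(A.T @ A)       # eigenpairs of the normal matrix A^T A
order = np.argsort(evals)[::-1]
sing_from_eig = np.sqrt(evals[order])

U, s, Vt = np.linalg.svd(A)
print(np.allclose(sing_from_eig, s))                        # True: singular values = sqrt(eigenvalues)
print(np.allclose(np.abs(Vt), np.abs(evecs[:, order].T)))   # True up to sign: right singular vectors
# The optimal amplification of an initial error e is max ||A e|| / ||e|| = s[0],
# attained when e is aligned with the leading right singular vector Vt[0].
```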

  4. Non-Cooperative Target Recognition by Means of Singular Value Decomposition Applied to Radar High Resolution Range Profiles †

    PubMed Central

    López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio

    2015-01-01

    Radar high resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and evaluate the performance, simulated range profiles are used as the recognition database while the test samples comprise data of actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities found in the shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising. PMID:25551484
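
    A hedged sketch of the two ingredients named above: modelling each class by an SVD subspace of its range-profile matrix and scoring a test set by the principal angle between subspaces. The synthetic "aircraft", dimensions and noise level are invented; the paper's transformed domain and database details are not reproduced.

```python
import numpy as np

def svd_basis(X, k):
    """Columns of X are range profiles of one class; return its top-k left singular vectors."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def subspace_angle(U1, U2):
    """Largest principal angle (radians) between the subspaces spanned by U1 and U2."""
    sigma = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(sigma.min(), -1.0, 1.0))

rng = np.random.default_rng(7)
n_bins, k = 128, 5
# Two synthetic "aircraft": each generated from its own low-dimensional pattern plus noise.
basis_a, basis_b = rng.standard_normal((n_bins, k)), rng.standard_normal((n_bins, k))
train_a = basis_a @ rng.standard_normal((k, 60)) + 0.1 * rng.standard_normal((n_bins, 60))
train_b = basis_b @ rng.standard_normal((k, 60)) + 0.1 * rng.standard_normal((n_bins, 60))
test = basis_a @ rng.standard_normal((k, 10)) + 0.1 * rng.standard_normal((n_bins, 10))

Ua, Ub, Ut = svd_basis(train_a, k), svd_basis(train_b, k), svd_basis(test, k)
print(subspace_angle(Ut, Ua), subspace_angle(Ut, Ub))  # smaller angle -> assign to class A
```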

  5. Note: Sound recovery from video using SVD-based information extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz is used to film the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and then reconstructed into a new matrix. The SVD of the new matrix produces orthonormal image bases (OIBs), and the image projections onto a specific OIB can be recovered as understandable acoustical signals. The standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online from a piece of paper that is stimulated by sound waves within 1 min.
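
    A minimal sketch of the extraction idea described above: vectorized sub-images stacked as columns, an SVD producing orthonormal image bases, and the time course of the dominant basis read off the corresponding row of S V^T. Synthetic frames with a 256 Hz modulation stand in for real high-speed video, and the frame rate and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
fs, duration = 4000, 1.0                       # frame rate (Hz) and clip length (s)
t = np.arange(int(fs * duration)) / fs
vibration = 0.5 * np.sin(2 * np.pi * 256 * t)  # a 256 Hz "tuning fork" motion

pattern = rng.standard_normal((16, 16))        # spatial pattern that moves with the vibration
frames = pattern[None] * (1 + vibration)[:, None, None] \
         + 0.05 * rng.standard_normal((t.size, 16, 16))

M = frames.reshape(t.size, -1).T               # pixels x frames matrix of vectorized sub-images
M = M - M.mean(axis=1, keepdims=True)          # remove the static background
U, s, Vt = np.linalg.svd(M, full_matrices=False)
signal = s[0] * Vt[0]                          # projection onto the dominant orthonormal image basis

spectrum = np.abs(np.fft.rfft(signal))
print(np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spectrum)])   # ~256 Hz recovered
```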

  6. Detecting chaos, determining the dimensions of tori and predicting slow diffusion in Fermi-Pasta-Ulam lattices by the Generalized Alignment Index method

    NASA Astrophysics Data System (ADS)

    Skokos, C.; Bountis, T.; Antonopoulos, C.

    2008-12-01

    The recently introduced GALI method is used for rapidly detecting chaos, determining the dimensionality of regular motion and predicting slow diffusion in multi-dimensional Hamiltonian systems. We propose an efficient computation of the GALIk indices, which represent volume elements of k randomly chosen deviation vectors from a given orbit, based on the Singular Value Decomposition (SVD) algorithm. We obtain theoretically and verify numerically asymptotic estimates of the GALIs' long-time behavior in the case of regular orbits lying on low-dimensional tori. The GALIk indices are applied to rapidly detect chaotic oscillations, identify low-dimensional tori of Fermi-Pasta-Ulam (FPU) lattices at low energies and predict weak diffusion away from quasiperiodic motion, long before it is actually observed in the oscillations.
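
    A minimal sketch of the SVD-based evaluation of a GALIk-type volume element: normalize k deviation vectors and take the product of the singular values of the matrix they form. The deviation vectors below are random stand-ins rather than solutions of the variational equations of an FPU lattice, and the dimensions are assumptions.

```python
import numpy as np

def gali_k(deviation_vectors):
    """deviation_vectors: k x d array (k deviation vectors in a d-dimensional phase space).
    Returns the volume element spanned by the normalized vectors."""
    W = deviation_vectors / np.linalg.norm(deviation_vectors, axis=1, keepdims=True)
    return np.prod(np.linalg.svd(W, compute_uv=False))

rng = np.random.default_rng(9)
d, k = 8, 4
generic = rng.standard_normal((k, d))                                        # generic directions
aligned = rng.standard_normal((1, d)) + 1e-6 * rng.standard_normal((k, d))   # nearly parallel

print(gali_k(generic))   # order 1: the vectors span a k-dimensional volume
print(gali_k(aligned))   # ~0: the vectors have collapsed onto a common direction (chaotic case)
```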

  7. Topological features of vector vortex beams perturbed with uniformly polarized light

    PubMed Central

    D’Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-01

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams. PMID:28079134

  8. Topological features of vector vortex beams perturbed with uniformly polarized light

    NASA Astrophysics Data System (ADS)

    D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-01

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.

  9. Topological features of vector vortex beams perturbed with uniformly polarized light.

    PubMed

    D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-12

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell's equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.

  10. Signal evaluations using singular value decomposition for Thomson scattering diagnostics.

    PubMed

    Tojo, H; Yamada, I; Yasuhara, R; Yatsuka, E; Funaba, H; Hatae, T; Hayashi, H; Itami, K

    2014-11-01

    This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.

  11. Signal evaluations using singular value decomposition for Thomson scattering diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tojo, H., E-mail: tojo.hiroshi@jaea.go.jp; Yatsuka, E.; Hatae, T.

    2014-11-15

    This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (T_e) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.

  12. WE-AB-207A-04: Random Undersampled Cone Beam CT: Theoretical Analysis and a Novel Reconstruction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, C; Chen, L; Jia, X

    2016-06-15

    Purpose: Reducing x-ray exposure and speeding up data acquisition have motivated studies on projection data undersampling. For a given undersampling ratio, an important question is what the optimal undersampling approach is. In this study, we propose a new undersampling scheme: random-ray undersampling. We mathematically analyze the properties of its projection matrix and demonstrate its advantages. We also propose a new reconstruction method that simultaneously performs CT image reconstruction and projection-domain data restoration. Methods: By representing the projection operator in the basis of singular vectors of the full projection operator, matrix representations for an undersampling case can be generated and numerical singular value decomposition can be performed. We compared the properties of the matrices among three undersampling approaches: regular-view undersampling, regular-ray undersampling, and the proposed random-ray undersampling. To accomplish CT reconstruction for random undersampling, we developed a novel method that iteratively performs CT reconstruction and missing projection data restoration via regularization approaches. Results: For a given undersampling ratio, random-ray undersampling preserved the mathematical properties of the full projection operator better than the other two approaches. This translates into the advantage of reconstructing CT images with lower errors. Different types of image artifacts were observed depending on the undersampling strategy, which were ascribed to the unique singular vectors of the sampling operators in the image domain. We tested the proposed reconstruction algorithm on a FORBILD phantom with only 30% of the projection data randomly acquired. The reconstructed image error was reduced from 9.4% in a TV method to 7.6% in the proposed method. Conclusion: The proposed random-ray undersampling is mathematically advantageous over other typical undersampling approaches. It may permit better image reconstruction at the same undersampling ratio. The novel algorithm suitable for this random-ray undersampling was able to reconstruct high-quality images.

  13. Excluding Noise from Short Krylov Subspace Approximations to the Truncated Singular Value Decomposition (SVD)

    DTIC Science & Technology

    2017-09-27

    ARL-TR-8161, September 2017. US Army Research Laboratory, ATTN: RDRL-CIH-C, Aberdeen Proving Ground, MD 21005-5066. Technical report on excluding noise from short Krylov subspace approximations to the truncated singular value decomposition (SVD); reporting period October 2015-January 2016.

  14. Complex mode indication function and its applications to spatial domain parameter estimation

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of the CMIF is developed by performing a singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues (the squares of the singular values) of the normal matrix [H(jω)]^H [H(jω)] formed from the FRF matrix at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are used in the CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since the CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
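
    A hedged NumPy sketch of the CMIF computation described above: build a synthetic two-mode FRF matrix, take the SVD at each spectral line, and locate the peaks of the leading squared-singular-value curve. The modal parameters, matrix dimensions and peak-picking rule are illustrative assumptions.

```python
import numpy as np

# Synthetic 6-output, 2-input FRF matrix with two modes; the CMIF values are the squared
# singular values of H(jw) at each spectral line, i.e. the eigenvalues of H^H H.
f = np.linspace(1.0, 50.0, 2000)                 # frequency axis (Hz)
w = 2 * np.pi * f
modes = [(2 * np.pi * 12.0, 0.01), (2 * np.pi * 31.0, 0.02)]   # (natural frequency, damping ratio)
rng = np.random.default_rng(10)
shapes = rng.standard_normal((len(modes), 6))                  # mode shapes (outputs)
partic = rng.standard_normal((len(modes), 2))                  # modal participation (inputs)

H = np.zeros((f.size, 6, 2), dtype=complex)
for (wn, zeta), phi, L in zip(modes, shapes, partic):
    H += np.outer(phi, L)[None, :, :] / (wn**2 - w**2 + 2j * zeta * wn * w)[:, None, None]

cmif = np.linalg.svd(H, compute_uv=False) ** 2   # shape (n_freq, 2), sorted per spectral line
c = cmif[:, 0]
peaks = np.where((c[1:-1] > c[:-2]) & (c[1:-1] > c[2:]))[0] + 1
print(f[peaks])   # local maxima of the leading CMIF curve fall near 12 Hz and 31 Hz
```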

  15. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

    Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown that there is a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.

  16. Incorporation of perceptually adaptive QIM with singular value decomposition for blind audio watermarking

    NASA Astrophysics Data System (ADS)

    Hu, Hwai-Tsu; Chou, Hsien-Hsin; Yu, Chu; Hsu, Ling-Yuan

    2014-12-01

    This paper presents a novel approach for blind audio watermarking. The proposed scheme utilizes the flexibility of the discrete wavelet packet transformation (DWPT) to approximate the critical bands and adaptively determines suitable embedding strengths for carrying out quantization index modulation (QIM). The singular value decomposition (SVD) is employed to analyze the matrix formed by the DWPT coefficients and to embed watermark bits by manipulating singular values subject to perceptual criteria. To achieve even better performance, two auxiliary enhancement measures are attached to the developed scheme. Performance evaluation and comparison are demonstrated in the presence of common digital signal processing attacks. Experimental results confirm that the combination of the DWPT, SVD, and adaptive QIM achieves imperceptible data hiding with satisfying robustness and payload capacity. Moreover, the inclusion of a self-synchronization capability allows the developed watermarking system to withstand time-shifting and cropping attacks.

  17. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage. So we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to the SVD, which refactors the image into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, hence reducing the storage space required by the image. The goal here is to achieve image compression while preserving the important features which describe the original image. The SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
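
    A minimal sketch of rank-k image compression with the SVD, reporting the compression ratio and mean square error as metrics; the synthetic image and the choice of k values are illustrative assumptions.

```python
import numpy as np

def compress(img, k):
    """Rank-k approximation of a grayscale image from its SVD."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

m, n = 256, 256
y, x = np.mgrid[0:m, 0:n].astype(float)
img = np.cos(x * y / 6000.0) + 0.5 * np.sin(x / 25.0)   # synthetic stand-in for a photograph

for k in (5, 20, 50):
    approx = compress(img, k)
    mse = np.mean((img - approx) ** 2)
    ratio = (m * n) / (k * (m + n + 1))                 # full pixel count vs k singular triplets
    print(f"k={k:2d}  compression ratio={ratio:5.1f}  MSE={mse:.3e}")
```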

  18. On the singular perturbations for fractional differential equation.

    PubMed

    Atangana, Abdon

    2014-01-01

    The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of the fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We use three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation. These methods include the regular perturbation method, the new development of the variational iteration method, and the homotopy decomposition method.

  19. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    NASA Astrophysics Data System (ADS)

    Cunningham, Laura J.; Mulholland, Anthony J.; Tant, Katherine M. M.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-04-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm.

  20. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    PubMed Central

    Cunningham, Laura J.; Mulholland, Anthony J.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-01-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm. PMID:27274683

  1. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. Thus, if it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition" (TFM-SVD). In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. The BCG data of the test subjects were recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  2. Conformal Galilei algebras, symmetric polynomials and singular vectors

    NASA Astrophysics Data System (ADS)

    Křižka, Libor; Somberg, Petr

    2018-01-01

    We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d,C) with d=1 for any integer value ℓ ∈ N. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.

  3. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.

    PubMed

    Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-08-02

    It is well known that the sparsity of a vector and the low-rankness of a matrix can be rationally measured by the number of nonzero entries (the l_0 norm) and the number of nonzero singular values (the rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high order tensor is expected to provide a more faithful representation that delivers the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by the Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. We then study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods over the state of the art.

  4. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.

  5. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.

  6. Conjunct rotation: Codman's paradox revisited.

    PubMed

    Wolf, Sebastian I; Fradet, Laetitia; Rettig, Oliver

    2009-05-01

    This contribution mathematically formalizes Codman's idea of conjunct rotation, a term he used in 1934 to describe a paradoxical phenomenon arising from a closed-loop arm movement. Real (axial) rotation is distinguished from conjunct rotation. For characterizing the latter, the idea of reference vector fields is developed to define the neutral axial position of the humerus for any given orientation of its long axis. This concept largely avoids typical coordinate singularities arising from decomposition of 3D joint motion and therefore can be used for postural (axial) assessment of the shoulder joint both clinically and in sports science in almost the complete accessible range of motion. The concept, even though algebraic rather complex, might help to get an easier and more intuitive understanding of axial rotation of the shoulder in complex movements present in daily life and in sports.

  7. Using linear algebra for protein structural comparison and classification

    PubMed Central

    2009-01-01

    In this article, we describe a novel methodology to extract semantic characteristics from protein structures using linear algebra in order to compose structural signature vectors which may be used efficiently to compare and classify protein structures into fold families. These signatures are built from the pattern of hydrophobic intrachain interactions using Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI) techniques. Considering proteins as documents and contacts as terms, we have built a retrieval system which is able to find conserved contacts in samples of myoglobin fold family and to retrieve these proteins among proteins of varied folds with precision of up to 80%. The classifier is a web tool available at our laboratory website. Users can search for similar chains from a specific PDB, view and compare their contact maps and browse their structures using a JMol plug-in. PMID:21637532

  8. Using linear algebra for protein structural comparison and classification.

    PubMed

    Gomide, Janaína; Melo-Minardi, Raquel; Dos Santos, Marcos Augusto; Neshich, Goran; Meira, Wagner; Lopes, Júlio César; Santoro, Marcelo

    2009-07-01

    In this article, we describe a novel methodology to extract semantic characteristics from protein structures using linear algebra in order to compose structural signature vectors which may be used efficiently to compare and classify protein structures into fold families. These signatures are built from the pattern of hydrophobic intrachain interactions using Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI) techniques. Considering proteins as documents and contacts as terms, we have built a retrieval system which is able to find conserved contacts in samples of myoglobin fold family and to retrieve these proteins among proteins of varied folds with precision of up to 80%. The classifier is a web tool available at our laboratory website. Users can search for similar chains from a specific PDB, view and compare their contact maps and browse their structures using a JMol plug-in.

  9. Generalized decompositions of dynamic systems and vector Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  10. A singular value decomposition approach for improved taxonomic classification of biological sequences

    PubMed Central

    2011-01-01

    Background Singular value decomposition (SVD) is a powerful technique for information retrieval; it helps uncover relationships between elements that are not prima facie related. SVD was initially developed to reduce the time needed for information retrieval and analysis of very large data sets in the complex internet environment. Since information retrieval from large-scale genome and proteome data sets has a similar level of complexity, SVD-based methods could also facilitate data analysis in this research area. Results We found that SVD applied to amino acid sequences demonstrates relationships and provides a basis for producing clusters and cladograms, demonstrating evolutionary relatedness of species that correlates well with Linnaean taxonomy. The choice of a reasonable number of singular values is crucial for SVD-based studies. We found that fewer singular values are needed to produce biologically significant clusters when SVD is employed. Subsequently, we developed a method to determine the lowest number of singular values and fewest clusters needed to guarantee biological significance; this system was developed and validated by comparison with Linnaean taxonomic classification. Conclusions By using SVD, we can reduce uncertainty concerning the appropriate rank value necessary to perform accurate information retrieval analyses. In tests, clusters that we developed with SVD perfectly matched what was expected based on Linnaean taxonomy. PMID:22369633

  11. Characteristic classes, singular embeddings, and intersection homology.

    PubMed

    Cappell, S E; Shaneson, J L

    1987-06-01

    This note announces some results on the relationship between global invariants and local topological structure. The first section gives a local-global formula for Pontrjagin classes or L-classes. The second section describes a corresponding decomposition theorem on the level of complexes of sheaves. A final section mentions some related aspects of "singular knot theory" and the study of nonisolated singularities. Equivariant analogues, with local-global formulas for Atiyah-Singer classes and their relations to G-signatures, will be presented in a future paper.

  12. New solution decomposition and minimization schemes for Poisson-Boltzmann equation in calculation of biomolecular electrostatics

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan

    2014-10-01

    The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model in the calculation of electrostatic potential energy for biomolecules in ionic solvent, but its numerical solution remains a challenge due to its strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To effectively deal with such a challenge, in this paper, new solution decomposition and minimization schemes are proposed, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.

  13. Tailored vectorial light fields: flower, spider web and hybrid structures

    NASA Astrophysics Data System (ADS)

    Otte, Eileen; Alpmann, Christina; Denz, Cornelia

    2017-04-01

    We present the realization and analysis of tailored vector fields including polarization singularities. The fields are generated by a holographic method based on an advanced system including a spatial light modulator. We demonstrate our system's capabilities by realizing specifically customized vector fields including stationary points of defined polarization in their transverse plane. Subsequently, vectorial flowers and spider webs, as well as unique hybrid structures of these, are introduced, and the embedded singular points are characterized. These sophisticated light fields reveal attractive properties that pave the way to advanced applications in, e.g., optical micromanipulation. Beyond particle manipulation, they contribute to current questions in singular optics.

  14. High-order optical vortex position detection using a Shack-Hartmann wavefront sensor.

    PubMed

    Luo, Jia; Huang, Hongxin; Matsui, Yoshinori; Toyoda, Haruyoshi; Inoue, Takashi; Bai, Jian

    2015-04-06

    Optical vortex (OV) beams have null-intensity singular points, and the intensities in the region surrounding the singular point are quite low. This low-intensity region degrades the position detection accuracy of the phase singular point, especially for high-order OV beams. In this paper, we propose a new method for solving this problem, called the phase-slope-combining correlation matching method. A Shack-Hartmann wavefront sensor (SH-WFS) is used to measure phase slope vectors at the lenslet positions of the SH-WFS. Several phase slope vectors are combined into one to reduce the influence of low-intensity regions around the singular point, and the combined phase slope vectors are used to determine the OV position with the aid of correlation matching against a pre-calculated database. Experimental results showed that the proposed method works with high accuracy, even when detecting an OV beam with a topological charge larger than six. The estimated precision was about 0.15 in units of lenslet size when detecting an OV beam with a topological charge of up to 20.

  15. High quality high spatial resolution functional classification in low dose dynamic CT perfusion using singular value decomposition (SVD) and k-means clustering

    NASA Astrophysics Data System (ADS)

    Pisana, Francesco; Henzler, Thomas; Schönberg, Stefan; Klotz, Ernst; Schmidt, Bernhard; Kachelrieß, Marc

    2017-03-01

    Dynamic CT perfusion acquisitions are intrinsically high-dose examinations due to repeated scanning. To keep the radiation dose under control, relatively noisy images are acquired. Noise is then further enhanced during the extraction of functional parameters from the post-processing of the time attenuation curves (TACs) of the voxels, and normally some smoothing filter needs to be employed to better visualize any perfusion abnormality, at the cost of spatial resolution. In this study we propose a new method to detect perfusion abnormalities while keeping both high spatial resolution and high CNR. To do this, we first perform the singular value decomposition (SVD) of the original noisy spatio-temporal data matrix to extract basis functions of the TACs. We then iteratively cluster the voxels based on a smoothed version of the three most significant singular vectors. Finally, we create high-spatial-resolution 3D volumes in which each voxel is assigned a distance from the centroid of each cluster, showing how functionally similar each voxel is to the others. The method was tested on three noisy clinical datasets: one brain perfusion case with an occlusion in the left internal carotid artery, one healthy brain perfusion case, and one liver case with an enhancing lesion. Our method successfully detected all perfusion abnormalities with higher spatial precision when compared to the functional maps obtained with a commercially available software package. We conclude that this method might be employed to obtain a rapid qualitative indication of functional abnormalities in low-dose dynamic CT perfusion datasets. The method seems to be very robust with respect to both spatial and temporal noise and does not require any special a priori assumption. While more robust with respect to noise and offering higher spatial resolution and CNR than the functional maps, our method is not quantitative; a potential use in clinical routine could be as a second reader to assist in map evaluation, or to guide dataset smoothing before the modeling step.
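
    A hedged sketch of the two building blocks named above: an SVD of the voxel-by-time matrix of TACs followed by k-means clustering of the voxels on their loadings along the three most significant singular vectors. The synthetic TACs, noise level and plain k-means below are stand-ins; the paper's iterative smoothing and distance volumes are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(12)
n_time = 40
t = np.arange(n_time, dtype=float)
normal_tac = 100 * np.exp(-0.5 * ((t - 12) / 4.0) ** 2)    # sharp, early contrast enhancement
abnormal_tac = 60 * np.exp(-0.5 * ((t - 20) / 8.0) ** 2)   # delayed, reduced enhancement

n_normal, n_abnormal = 800, 200
tacs = np.vstack([normal_tac + 10 * rng.standard_normal((n_normal, n_time)),
                  abnormal_tac + 10 * rng.standard_normal((n_abnormal, n_time))])

# SVD of the (voxel x time) matrix; each voxel is described by its loadings on the
# three most significant singular vectors.
U, s, Vt = np.linalg.svd(tacs - tacs.mean(axis=0), full_matrices=False)
features = U[:, :3] * s[:3]

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means with random initial centroids (no spatial smoothing, unlike the paper)."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(features, k=2)
print(np.bincount(labels[:n_normal], minlength=2))     # normal voxels fall into one cluster
print(np.bincount(labels[-n_abnormal:], minlength=2))  # abnormal voxels into the other
```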

  16. On the Singular Perturbations for Fractional Differential Equation

    PubMed Central

    Atangana, Abdon

    2014-01-01

    The goal of this paper is to examine a possible extension of singular perturbation differential equations to the concept of the fractional-order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We then use three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation: the regular perturbation method, a recent development of the variational iteration method, and the homotopy decomposition method. PMID:24683357

  17. Boosting brain connectome classification accuracy in Alzheimer's disease using higher-order singular value decomposition

    PubMed Central

    Zhan, Liang; Liu, Yashu; Wang, Yalin; Zhou, Jiayu; Jahanshad, Neda; Ye, Jieping; Thompson, Paul M.

    2015-01-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and its prodromal stage, mild cognitive impairment (MCI), is crucial. There is also a growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on brain structural networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease. PMID:26257601

  18. Primer Vector Optimization: Survey of Theory, New Analysis and Applications

    NASA Technical Reports Server (NTRS)

    Guzman, J. J.; Mailhe, L. M.; Schiff, C.; Hughes, S. P.; Folta, D. C.

    2002-01-01

    In this paper, a summary of primer vector theory is presented. The applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. For example, since the Calculus of Variations is based on "small" variations, singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ small variations. Two examples, the initialization of an orbit and a line-of-apsides rotation, are presented. Recommendations, future work, and the possible addition of other optimization techniques are also discussed.

  19. Non-singular spherical harmonic expressions of geomagnetic vector and gradient tensor fields in the local north-oriented reference frame

    NASA Astrophysics Data System (ADS)

    Du, J.; Chen, C.; Lesur, V.; Wang, L.

    2014-12-01

    General expressions of the magnetic vector (MV) and magnetic gradient tensor (MGT) in terms of the first- and second-order derivatives of spherical harmonics at different degrees and orders are relatively complicated and singular at the poles. In this paper, we derive alternative non-singular expressions for the MV, the MGT and also the higher-order partial derivatives of the magnetic field in the local north-oriented reference frame. Using our newly derived formulae, the magnetic potential, vector and gradient tensor fields at an altitude of 300 km are calculated based on the global lithospheric magnetic field model GRIMM_L120 (version 0.0) and the main magnetic field model IGRF11. The corresponding results at the poles are discussed and the validity of the derived formulas is verified using the Laplace equation of the potential field.

  20. A novel image watermarking method based on singular value decomposition and digital holography

    NASA Astrophysics Data System (ADS)

    Cai, Zhishan

    2016-10-01

    Based on information optics theory, a novel watermarking method using Fourier-transform digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted into a digital hologram using the Fourier transform. The original image is then divided into non-overlapping blocks, and all the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are embedded into the singular value components of each block using an addition principle. Finally, the inverse SVD is applied to the blocks and the hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is recovered first, and the extracted information is then averaged to generate the final watermark. Finally, the algorithm is simulated and, to test the watermarked image's resistance to attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cropping, compression, brightness stretching, etc. In particular, even when the image is rotated by a large angle, the watermark information can still be extracted correctly.
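
    As a rough illustration of the additive SVD embedding step, the sketch below embeds the hologram's singular values into a single image block and recovers them again. It is a minimal sketch, not the paper's full pipeline (the holographic encoding and block handling are omitted); the function names embed_block and extract_block and the strength parameter alpha are assumptions made for illustration.

        import numpy as np

        def embed_block(block, holo_sv, alpha=0.05):
            """Additively embed hologram singular values into one image block.

            block   : (B, B) image block
            holo_sv : singular values of the watermark hologram, truncated or
                      padded to at least B entries
            alpha   : embedding strength (assumed parameter, not from the paper)
            """
            U, s, Vt = np.linalg.svd(block, full_matrices=False)
            s_marked = s + alpha * holo_sv[: s.size]
            return U @ np.diag(s_marked) @ Vt

        def extract_block(marked_block, original_block, alpha=0.05):
            """Recover an estimate of the embedded singular values from one block."""
            s_marked = np.linalg.svd(marked_block, compute_uv=False)
            s_orig = np.linalg.svd(original_block, compute_uv=False)
            return (s_marked - s_orig) / alpha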

  1. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the arbitrary manner in which effective singular values are selected weakens the robustness of this technique: improper selection of effective singular values leads to poor de-noising performance. Moreover, the computational complexity of SVD is too high for real-time applications. In this paper, to eliminate this uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI) and based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments so that the feature information of a transient flaw echo is localized, and the MSI is then obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time flaw detection from noisy data.

  2. Theory of bright-field scanning transmission electron microscopy for tomography

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.

    2005-02-01

    Radiation transport theory is applied to electron microscopy of samples composed of one or more materials. The theory, originally due to Goudsmit and Saunderson, assumes only elastic scattering and an amorphous medium dominated by atomic interactions. For samples composed of a single material, the theory yields reasonable parameter-free agreement with experimental data taken from the literature for the multiple scattering of 300-keV electrons through aluminum foils up to 25 μm thick. For thin films, the theory gives a validity condition for Beer's law. For thick films, a variant of Molière's theory [V. G. Molière, Z. Naturforschg. 3a, 78 (1948)] of multiple scattering leads to a form for the bright-field signal for foils in the multiple-scattering regime. The signal varies as [t ln(e^(1-2γ) t/τ)]^(-1), where t is the path length of the beam, τ is the mean free path for elastic scattering, and γ is Euler's constant. The Goudsmit-Saunderson solution interpolates numerically between these two limits. For samples with multiple materials, elemental sensitivity is developed through the angular dependence of the scattering. From the elastic scattering cross sections of the first 92 elements, a singular-value decomposition of a vector space spanned by the elastic scattering cross sections minus a delta function shows that there is a dominant common mode, with composition-dependent corrections of about 2%. A mathematically correct reconstruction procedure beyond 2% accuracy requires the acquisition of the bright-field signal as a function of the scattering angle. Tomographic reconstructions are carried out for three singular vectors of a sample problem with four elements: Cr, Cu, Zr, and Te. The three reconstructions are presented jointly as a color image; all four elements are clearly identifiable throughout the image.

  3. Complex numbers in chemometrics: examples from multivariate impedance measurements on lipid monolayers.

    PubMed

    Geladi, Paul; Nelson, Andrew; Lindholm-Sethson, Britta

    2007-07-09

    Electrical impedance yields multivariate complex-number data. Two examples of multivariate electrical impedance data measured on lipid monolayers in different solutions give rise to matrices (16×50 and 38×50) of complex numbers. Multivariate data analysis by principal component analysis (PCA) or singular value decomposition (SVD) can be applied to complex data, and the necessary equations are given. The scores and loadings obtained are vectors of complex numbers. It is shown that complex-number PCA and SVD are better at concentrating information in a few components than the naïve juxtaposition method, and that Argand diagrams can replace score and loading plots. Different concentrations of Magainin and Gramicidin A give different responses, and the role of the electrolyte medium can also be studied. An interaction of Gramicidin A in the solution with the monolayer over time can be observed.
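
    NumPy's SVD handles complex matrices directly, so a complex-number PCA of the kind described above can be sketched as follows. This is a minimal sketch, assuming the impedance spectra are stored row-wise in a complex matrix Z; the function name complex_pca is illustrative, and any preprocessing used in the paper is not reproduced.

        import numpy as np

        def complex_pca(Z, k=2):
            """PCA of complex-valued impedance data via the (complex) SVD.

            Z : (n_samples, n_frequencies) complex matrix. Mean-centering uses
            the complex mean; the returned scores and loadings are complex
            vectors suitable for Argand-diagram plots.
            """
            Zc = Z - Z.mean(axis=0, keepdims=True)
            U, s, Vh = np.linalg.svd(Zc, full_matrices=False)  # Vh = conj. transpose of V
            scores = U[:, :k] * s[:k]          # complex scores
            loadings = Vh[:k].conj().T         # complex loadings
            return scores, loadings, s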

  4. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine-transformation-based automatic image registration algorithm is introduced. The whole process is as follows. First, rectangular feature templates centered on Harris corners extracted from the mask image are constructed, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. The optimal parameters of the affine transformation are then calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the estimated affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; motion artifacts were removed with sub-pixel precision, and the computation time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
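
    The SVD-based estimation of affine parameters from matched feature points can be illustrated with a least-squares solve via the SVD pseudo-inverse. This is only a sketch under assumed conventions (2-D points, coordinates stored row-wise); the function name estimate_affine is hypothetical and the paper's specific matrix formulation may differ.

        import numpy as np

        def estimate_affine(src, dst):
            """Least-squares 2-D affine transform mapping src -> dst point sets.

            src, dst : (N, 2) arrays of matched feature-point coordinates.
            Solves A @ params = b with the SVD-based pseudo-inverse.
            """
            N = src.shape[0]
            A = np.zeros((2 * N, 6))
            b = dst.reshape(-1)
            A[0::2, 0:2] = src        # x' = a11*x + a12*y + tx
            A[0::2, 2] = 1.0
            A[1::2, 3:5] = src        # y' = a21*x + a22*y + ty
            A[1::2, 5] = 1.0
            params = np.linalg.pinv(A) @ b     # pinv uses the SVD internally
            return params.reshape(2, 3)        # [[a11, a12, tx], [a21, a22, ty]]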

  5. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves computing a singular value decomposition (SVD). On a large feature-space matrix, the computation of the SVD is very expensive in terms of time and memory requirements. Thus, in this clustering approach, the dimension of the document space of the term-document matrix is reduced before performing the SVD. The document space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has been shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.
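
    For orientation, a generic latent-semantic-kernel construction from a term-document matrix looks like the sketch below. It is a minimal, assumed formulation (a plain rank-k SVD with a cosine-style similarity); the structure-based document-space reduction that the paper performs before the SVD is not shown, and the function name latent_semantic_kernel is illustrative.

        import numpy as np

        def latent_semantic_kernel(T, k=100):
            """Build a rank-k latent semantic kernel from a term-document matrix T.

            T : (n_terms, n_docs). Returns an (n_docs, n_docs) similarity matrix
            computed in the reduced k-dimensional document space.
            """
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            docs = (np.diag(s[:k]) @ Vt[:k]).T           # documents in latent space
            docs /= np.linalg.norm(docs, axis=1, keepdims=True) + 1e-12
            return docs @ docs.T                         # cosine-like kernel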

  6. Observer-dependent sign inversions of polarization singularities.

    PubMed

    Freund, Isaac

    2014-10-15

    We describe observer-dependent sign inversions of the topological charges of vector field polarization singularities: C points (points of circular polarization), L points (points of linear polarization), and two virtually unknown singularities we call γ(C) and α(L) points. In all cases, the sign of the charge seen by an observer can change as she changes the direction from which she views the singularity. Analytic formulas are given for all C and all L point sign inversions.

  7. Black hole and cosmos with multiple horizons and multiple singularities in vector-tensor theories

    NASA Astrophysics Data System (ADS)

    Gao, Changjun; Lu, Youjun; Yu, Shuang; Shen, You-Gen

    2018-05-01

    A stationary and spherically symmetric black hole (e.g., Reissner-Nordström black hole or Kerr-Newman black hole) has, at most, one singularity and two horizons. One horizon is the outer event horizon and the other is the inner Cauchy horizon. Can we construct static and spherically symmetric black hole solutions with N horizons and M singularities? The de Sitter cosmos has only one apparent horizon. Can we construct cosmos solutions with N horizons? In this article, we present the static and spherically symmetric black hole and cosmos solutions with N horizons and M singularities in the vector-tensor theories. Following these motivations, we also construct the black hole solutions with a firewall. The deviation of these black hole solutions from the usual ones can be potentially tested by future measurements of gravitational waves or the black hole continuum spectrum.

  8. Correlation singularities in a partially coherent electromagnetic beam with initially radial polarization.

    PubMed

    Zhang, Yongtao; Cui, Yan; Wang, Fei; Cai, Yangjian

    2015-05-04

    We have investigated correlation singularities, i.e., coherence vortices of the two-point correlation function, in a partially coherent vector beam with initially radial polarization, namely a partially coherent radially polarized (PCRP) beam. It is found that these singularities generally occur during free-space propagation. Analytical formulae for characterizing the dynamics of the correlation singularities on propagation are derived. The influence of the spatial coherence length of the beam on the evolution of the correlation singularities, and the conditions for their creation and annihilation during propagation, have been studied in detail based on the derived formulae. Some interesting results are illustrated. These correlation singularities have implications for interference experiments with a PCRP beam.

  9. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back-propagation neural network that greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. The novelty lies in the method of extracting from the input data a set of features that can represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.

  10. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed method comprises three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by principal component analysis. Then a feature descriptor for each key point is proposed, consisting of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source and target point clouds. Correspondences are optimized using a random sample consensus algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better robustness to noise.
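
    The final SVD step, recovering a rigid transformation from matched point pairs, is the classical Kabsch-style solution sketched below. This is a minimal sketch assuming the correspondences are already established and the points are stored row-wise; the helper name rigid_transform_svd is illustrative.

        import numpy as np

        def rigid_transform_svd(P, Q):
            """Best-fit rotation R and translation t with R @ P_i + t ≈ Q_i.

            P, Q : (N, 3) arrays of corresponding 3-D points (already matched
            key points, as in the final step of the method described above).
            """
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cQ - R @ cP
            return R, t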

  11. The Singular Set of Solutions to Non-Differentiable Elliptic Systems

    NASA Astrophysics Data System (ADS)

    Mingione, Giuseppe

    We estimate the Hausdorff dimension of the singular set of solutions to non-differentiable elliptic systems: if the vector fields a and b are Hölder continuous with respect to the variable x with exponent α, then the Hausdorff dimension of the singular set of any weak solution is at most n - 2α.

  12. Spatio-temporal evolution of perturbations in ensembles initialized by bred, Lyapunov and singular vectors

    NASA Astrophysics Data System (ADS)

    Pazó, Diego; Rodríguez, Miguel A.; López, Juan M.

    2010-05-01

    We study the evolution of finite perturbations in the Lorenz ‘96 model, a meteorological toy model of the atmosphere. The initial perturbations are chosen to be aligned along different dynamic vectors: bred, Lyapunov, and singular vectors. Using a particular vector determines not only the amplification rate of the perturbation but also the spatial structure of the perturbation and its stability under the evolution of the flow. The evolution of perturbations is systematically studied by means of the so-called mean-variance of logarithms diagram that provides in a very compact way the basic information to analyse the spatial structure. We discuss the corresponding advantages of using those different vectors for preparing initial perturbations to be used in ensemble prediction systems, focusing on key properties: dynamic adaptation to the flow, robustness, equivalence between members of the ensemble, etc. Among all the vectors considered here, the so-called characteristic Lyapunov vectors are possibly optimal, in the sense that they are both perfectly adapted to the flow and extremely robust.

  13. Spatio-temporal evolution of perturbations in ensembles initialized by bred, Lyapunov and singular vectors

    NASA Astrophysics Data System (ADS)

    Pazó, Diego; Rodríguez, Miguel A.; López, Juan M.

    2010-01-01

    We study the evolution of finite perturbations in the Lorenz `96 model, a meteorological toy model of the atmosphere. The initial perturbations are chosen to be aligned along different dynamic vectors: bred, Lyapunov, and singular vectors. Using a particular vector determines not only the amplification rate of the perturbation but also the spatial structure of the perturbation and its stability under the evolution of the flow. The evolution of perturbations is systematically studied by means of the so-called mean-variance of logarithms diagram that provides in a very compact way the basic information to analyse the spatial structure. We discuss the corresponding advantages of using those different vectors for preparing initial perturbations to be used in ensemble prediction systems, focusing on key properties: dynamic adaptation to the flow, robustness, equivalence between members of the ensemble, etc. Among all the vectors considered here, the so-called characteristic Lyapunov vectors are possibly optimal, in the sense that they are both perfectly adapted to the flow and extremely robust.

  14. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar

    DOE PAGES

    Sen, Satyabrata

    2015-08-04

    We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees of freedom, and also as the sparse interference spectrum in the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response vector that has a sparse support on the spatio-temporal plane. We use convex-relaxation-based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches in both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires as many secondary measurements as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.

  15. Rortex—A new vortex vector definition and vorticity tensor and vector decompositions

    NASA Astrophysics Data System (ADS)

    Liu, Chaoqun; Gao, Yisheng; Tian, Shuling; Dong, Xiangrui

    2018-03-01

    A vortex is intuitively recognized as the rotational/swirling motion of a fluid. However, an unambiguous and universally accepted definition of a vortex is yet to be achieved in fluid mechanics, which is probably one of the major obstacles causing considerable confusion and misunderstanding in turbulence research. In our previous work, a new vector quantity, called the vortex vector, was proposed to accurately describe local fluid rotation and clearly display vortical structures. In this paper, the definition of the vortex vector, named Rortex here, is revisited from a mathematical perspective. The existence of a possible rotational axis is proved through the real Schur decomposition, and a fast algorithm for calculating Rortex based on the real Schur decomposition is also presented. In addition, new vorticity tensor and vector decompositions are introduced: the vorticity tensor is decomposed into a rigidly rotational part and a non-rotational anti-symmetric part, and the vorticity vector is decomposed into a rigidly rotational vector, called the Rortex vector, and a non-rotational vector, called the shear vector. Several cases, including 2D Couette flow, 2D rigid rotational flow, and 3D boundary layer transition on a flat plate, are studied to justify the definition of Rortex. It can be observed that Rortex identifies both the precise swirling strength and the rotational axis, and thus it can reasonably represent local fluid rotation and provide a new powerful tool for vortex dynamics and turbulence research.

  16. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    PubMed

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  17. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    PubMed Central

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D.; Joel, Suresh; Pekar, James J.; Mostofsky, Stewart H.; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD. PMID:22969709

  18. Nonlinear QR code based optical image encryption using spiral phase transform, equal modulus decomposition and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.

    2018-01-01

    In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and multiplied by a spiral phase mask (SPM). The product is then spiral phase transformed with a particular spiral phase function, and EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Z1 is further Fresnel propagated over a distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel-propagated output to obtain three matrices, i.e., one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs, and the inverse SVD is then performed using the diagonal matrix and the modulated unitary matrices to obtain the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique, which is robust against noise attacks, specific attacks, and brute-force attacks. Simulation results are presented in support of the proposed idea.

  19. Non-singular spherical harmonic expressions of geomagnetic vector and gradient tensor fields in the local north-oriented reference frame

    NASA Astrophysics Data System (ADS)

    Du, J.; Chen, C.; Lesur, V.; Wang, L.

    2015-07-01

    General expressions of magnetic vector (MV) and magnetic gradient tensor (MGT) in terms of the first- and second-order derivatives of spherical harmonics at different degrees/orders are relatively complicated and singular at the poles. In this paper, we derived alternative non-singular expressions for the MV, the MGT and also the third-order partial derivatives of the magnetic potential field in the local north-oriented reference frame. Using our newly derived formulae, the magnetic potential, vector and gradient tensor fields and also the third-order partial derivatives of the magnetic potential field at an altitude of 300 km are calculated based on a global lithospheric magnetic field model GRIMM_L120 (GFZ Reference Internal Magnetic Model, version 0.0) with spherical harmonic degrees 16-90. The corresponding results at the poles are discussed and the validity of the derived formulas is verified using the Laplace equation of the magnetic potential field.

  20. An examination of the concept of driving point receptance

    NASA Astrophysics Data System (ADS)

    Sheng, X.; He, Y.; Zhong, T.

    2018-04-01

    In the field of vibration, driving point receptance is a well-established and widely applied concept. However, as demonstrated in this paper, when a driving point receptance is calculated using the finite element (FE) method with solid elements, it does not converge as the FE mesh becomes finer, suggesting that there is a singularity. Hence, the concept of driving point receptance deserves a rigorous examination. In this paper, it is first shown that, for a point harmonic force applied on the surface of an elastic half-space, the Boussinesq formula can be applied to calculate the displacement amplitude of the surface if the response point is sufficiently close to the load. Secondly, by applying the Betti reciprocal theorem, it is shown that the displacement of an elastic body near a point harmonic force can be decomposed into two parts, the first being the displacement of an elastic half-space. This decomposition is useful, since it provides a solid basis for the introduction of a contact spring between a wheel and a rail in interaction. However, according to the Boussinesq formula, this decomposition also leads to the conclusion that a driving point receptance is infinite (singular) and would be undefinable. Nevertheless, driving point receptances have been calculated using different methods. Since the singularity identified in this paper was not appreciated, no account was taken of it in these calculations, and the validity of these calculation methods must therefore be examined. This constitutes the third part of the paper. As the final development, the above decomposition is utilised to define and determine the driving point receptances required for dealing with wheel/rail interactions.

  1. Characterization of cancer and normal tissue fluorescence through wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2008-02-01

    The statistical and characteristic features of the polarized fluorescence spectra from cancer, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high and low frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. Continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuation is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.

  2. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low-rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory required to calculate a low-rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF fast imaging with steady-state precession sequence and more than 15 times for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low-rank approximation methods, which can benefit the use of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
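
    As background, the randomized-SVD idea referred to above can be sketched generically as follows. This is a minimal sketch of the standard range-finder scheme, assuming the dictionary fits in memory as a NumPy array; the oversampling and power-iteration parameters are illustrative, and the polynomial dictionary-fitting stage of the paper is not shown.

        import numpy as np

        def randomized_svd(D, k, n_oversample=10, n_iter=2, seed=None):
            """Approximate rank-k SVD of a large dictionary matrix D (m x n)."""
            rng = np.random.default_rng(seed)
            m, n = D.shape
            Omega = rng.standard_normal((n, k + n_oversample))
            Y = D @ Omega                       # random projection of the range
            for _ in range(n_iter):             # power iterations sharpen the range
                Y = D @ (D.T @ Y)
            Q, _ = np.linalg.qr(Y)              # orthonormal basis for range(D)
            B = Q.T @ D                         # small (k + p) x n matrix
            Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
            U = Q @ Ub
            return U[:, :k], s[:k], Vt[:k]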

  3. Wiimote Experiments: 3-D Inclined Plane Problem for Reinforcing the Vector Concept

    ERIC Educational Resources Information Center

    Kawam, Alae; Kouh, Minjoon

    2011-01-01

    In an introductory physics course where students first learn about vectors, they oftentimes struggle with the concept of vector addition and decomposition. For example, the classic physics problem involving a mass on an inclined plane requires the decomposition of the force of gravity into two directions that are parallel and perpendicular to the…

  4. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has attracted considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, the singular values mainly reflect the energy of the decomposed SCs; traditional SVD denoising approaches are therefore essentially energy-based, and tend to highlight the high-energy regular components in the measured signal while ignoring the weak features caused by early faults. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called the periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels rather than their energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC to the reconstruction of the denoised signal. In this way, some weak but informative SCs can be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated on both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract weak fault features even in the presence of heavy noise and ambient interference.
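
    The generic SVD-denoising skeleton that the RSVD strategy builds on can be sketched as follows. This is a minimal sketch only: the trajectory-matrix embedding, per-component weighting and anti-diagonal averaging are shown, but the crude energy threshold used here stands in for the paper's periodic-modulation-intensity weighting, and the function name svd_denoise and the embedding length L are illustrative.

        import numpy as np

        def svd_denoise(x, L=None, weight_fn=None):
            """Denoise a 1-D signal by weighted SVD of its Hankel (trajectory) matrix.

            weight_fn maps the singular-value vector to per-component weights;
            by default a simple hard threshold on the largest singular value is used.
            """
            x = np.asarray(x, float)
            N = x.size
            L = L or N // 2
            K = N - L + 1
            H = np.lib.stride_tricks.sliding_window_view(x, L).T    # (L, K) Hankel matrix
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            w = (s > 0.1 * s[0]).astype(float) if weight_fn is None else weight_fn(s)
            Hd = (U * (w * s)) @ Vt                                  # weighted reconstruction
            # Anti-diagonal averaging back to a 1-D signal.
            y = np.zeros(N)
            cnt = np.zeros(N)
            for i in range(L):
                y[i:i + K] += Hd[i]
                cnt[i:i + K] += 1
            return y / cnt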

  5. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known preferences. In collaborative filtering computations, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a rough approximate factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) presented an effective algorithm for computing this SVD as part of a solution to the open competition called the "Netflix Prize". The algorithm uses an iterative method in which the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown to be efficient by experiment.
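
    For reference, a plain CPU/NumPy rendering of the iterative low-rank factorization popularized by Webb (Simon Funk) might look like the sketch below; the CUDA parallelization that is the subject of the paper is not reproduced, and the learning rate, regularization and epoch count are illustrative assumptions.

        import numpy as np

        def funk_svd(ratings, k=20, lr=0.005, reg=0.02, n_epochs=30, seed=0):
            """Iterative rank-k factorization of a sparse rating matrix by SGD.

            ratings : iterable of (user, item, value) triples with 0-based indices.
            Returns factor matrices P (n_users x k) and Q (n_items x k) such that
            P[u] @ Q[i] approximates the rating of item i by user u.
            """
            rng = np.random.default_rng(seed)
            ratings = list(ratings)
            n_users = 1 + max(u for u, _, _ in ratings)
            n_items = 1 + max(i for _, i, _ in ratings)
            P = 0.1 * rng.standard_normal((n_users, k))
            Q = 0.1 * rng.standard_normal((n_items, k))
            for _ in range(n_epochs):
                for u, i, r in ratings:
                    err = r - P[u] @ Q[i]
                    pu = P[u].copy()                      # keep old value for the Q update
                    P[u] += lr * (err * Q[i] - reg * P[u])
                    Q[i] += lr * (err * pu - reg * Q[i])
            return P, Q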

  6. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact, and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory than the method of singular value decomposition; it is therefore very suitable for fitting large amounts of data. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.

  7. A non-orthogonal decomposition of flows into discrete events

    NASA Astrophysics Data System (ADS)

    Boxx, Isaac; Lewalle, Jacques

    1998-11-01

    This work is based on the formula for the inverse Hermitian wavelet transform. A signal can be interpreted as a (non-unique) superposition of near-singular, partially overlapping events arising from Dirac functions and/or their derivatives combined with diffusion. (No dynamics is implied: the dimensionless diffusion is related to the definition of the analyzing wavelets.) These events correspond to local maxima of the spectral energy density. We successfully fitted model events of various orders on a succession of fields, ranging from elementary signals to one-dimensional hot-wire traces. We document edge effects, event overlap and its implications for the algorithm. The interpretation of the discrete singularities as flow events (such as coherent structures) and the fundamental non-uniqueness of the decomposition are discussed. The dynamics of these events will be examined in the companion paper.

  8. Boosting Classification Accuracy of Diffusion MRI Derived Brain Networks for the Subtypes of Mild Cognitive Impairment Using Higher Order Singular Value Decomposition

    PubMed Central

    Zhan, L.; Liu, Y.; Zhou, J.; Ye, J.; Thompson, P.M.

    2015-01-01

    Mild cognitive impairment (MCI) is an intermediate stage between normal aging and Alzheimer's disease (AD), and around 10-15% of people with MCI develop AD each year. More recently, MCI has been further subdivided into early and late stages, and there is interest in identifying sensitive brain imaging biomarkers that help to differentiate stages of MCI. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying early versus late MCI. PMID:26413202

  9. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.

  10. The relationship between two fast/slow analysis techniques for bursting oscillations

    PubMed Central

    Teka, Wondimu; Tabak, Joël; Bertram, Richard

    2012-01-01

    Bursting oscillations in excitable systems reflect multi-timescale dynamics. These oscillations have often been studied in mathematical models by splitting the equations into fast and slow subsystems. Typically, one treats the slow variables as parameters of the fast subsystem and studies the bifurcation structure of this subsystem. This has key features such as a z-curve (stationary branch) and a Hopf bifurcation that gives rise to a branch of periodic spiking solutions. In models of bursting in pituitary cells, we have recently used a different approach that focuses on the dynamics of the slow subsystem. Characteristic features of this approach are folded node singularities and a critical manifold. In this article, we investigate the relationships between the key structures of the two analysis techniques. We find that the z-curve and Hopf bifurcation of the two-fast/one-slow decomposition are closely related to the voltage nullcline and folded node singularity of the one-fast/two-slow decomposition, respectively. They become identical in the double singular limit in which voltage is infinitely fast and calcium is infinitely slow. PMID:23278052

  11. Polarization singularity indices in Gaussian laser beams

    NASA Astrophysics Data System (ADS)

    Freund, Isaac

    2002-01-01

    Two types of point singularities in the polarization of a paraxial Gaussian laser beam are discussed in detail. V-points, which are vector point singularities where the direction of the electric vector of a linearly polarized field becomes undefined, and C-points, which are elliptic point singularities where the ellipse orientations of elliptically polarized fields become undefined. Conventionally, V-points are characterized by the conserved integer valued Poincaré-Hopf index η, with generic value η=±1, while C-points are characterized by the conserved half-integer singularity index IC, with generic value IC=±1/2. Simple algorithms are given for generating V-points with arbitrary positive or negative integer indices, including zero, at arbitrary locations, and C-points with arbitrary positive or negative half-integer or integer indices, including zero, at arbitrary locations. Algorithms are also given for generating continuous lines of these singularities in the plane, V-lines and C-lines. V-points and C-points may be transformed one into another. A topological index based on directly measurable Stokes parameters is used to discuss this transformation. The evolution under propagation of V-points and C-points initially embedded in the beam waist is studied, as is the evolution of V-dipoles and C-dipoles.

  12. Singular spectrum and singular entropy used in signal processing of NC table

    NASA Astrophysics Data System (ADS)

    Wang, Linhong; He, Yiwen

    2011-12-01

    An NC (numerical control) table is a complex dynamic system. The dynamic characteristics caused by backlash, friction and elastic deformation among the components are so complex that they have become the bottleneck for improving the positioning accuracy, tracking accuracy and dynamic behavior of NC tables. This paper collects vibration acceleration signals from an NC table, analyzes them with the singular value decomposition (SVD) method, obtains the singular spectrum, and calculates the singular entropy of the signals. The signal characteristics of the NC table and their regularities are revealed via characteristic quantities such as the singular spectrum and singular entropy. The steepness of the singular spectrum can be used to discriminate the complexity of the signals; the results show that the signals along the driving axes are the simplest and the signals in the perpendicular direction are the most complex. The singular entropy values can be used to study the uncertainty of the signals. The results show that the NC table signals are neither simple signals nor white noise; the entropy values along the driving axis are lower, increase with driving speed, and decrease markedly under abnormal working conditions such as resonance or creep.
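
    One common way to obtain a singular spectrum and a singular entropy from a vibration signal is via the SVD of a trajectory (Hankel) matrix, as sketched below. This is a generic, assumed formulation (definitions of singular entropy vary in the literature); the embedding length L and the function name singular_entropy are illustrative.

        import numpy as np

        def singular_entropy(x, L=64, order=None):
            """Singular spectrum and a Shannon-type singular entropy of a signal.

            Embeds x in an L-row trajectory (Hankel) matrix, takes its singular
            values, and computes an entropy over the normalized singular spectrum;
            'order' optionally truncates the spectrum.
            """
            H = np.lib.stride_tricks.sliding_window_view(np.asarray(x, float), L).T
            s = np.linalg.svd(H, compute_uv=False)
            if order is not None:
                s = s[:order]
            p = s / s.sum()                            # normalized singular spectrum
            entropy = -np.sum(p * np.log(p + 1e-15))   # singular entropy
            return s, entropy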

  13. Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.

    DTIC Science & Technology

    1980-11-01

    The available record is a fragment of the report's table of contents, covering eigenstructure analysis, the ordering of state variables, and an example based on an 8th-order power system model. Chapter 3 considers the time-scale decomposition of singularly perturbed systems.

  14. Magnetic charge and photon mass: Physical string singularities, Dirac condition, and magnetic confinement

    NASA Astrophysics Data System (ADS)

    Evans, Timothy J.; Singleton, Douglas

    2018-04-01

    We find exact, simple solutions to the Proca version of Maxwell’s equations with magnetic sources. Several properties of these solutions differ from the usual case of magnetic charge with a massless photon: (i) the string singularities of the usual 3-vector potentials become real singularities in the magnetic fields; (ii) the different 3-vector potentials become gauge inequivalent and physically distinct solutions; (iii) the magnetic field depends on r and 𝜃 and thus is no longer rotationally symmetric; (iv) a combined system of electric and magnetic charge carries a field angular momentum even when the electric and magnetic charges are located at the same place (i.e. for dyons); (v) for these dyons, one recovers the standard Dirac condition despite the photon being massive. We discuss the reason for this. We conclude by proposing that the string singularity in the magnetic field of an isolated magnetic charge suggests a confinement mechanism for magnetic charge, similar to the flux tube confinement of quarks in QCD.

  15. Optical character recognition with feature extraction and associative memory matrix

    NASA Astrophysics Data System (ADS)

    Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa

    1998-06-01

    A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly show the effectiveness of the method.

  16. Singular-value demodulation of phase-shifted holograms.

    PubMed

    Lopes, Fernando; Atlan, Michael

    2015-06-01

    We report on phase-shifted holographic interferogram demodulation by singular-value decomposition. Numerical processing of optically acquired interferograms over several modulation periods was performed in two steps: (1) rendering of off-axis complex-valued holograms by Fresnel transformation of the interferograms; and (2) eigenvalue spectrum assessment of the lag-covariance matrix of hologram pixels. Experimental results in low-light recording conditions were compared with demodulation by Fourier analysis, in the presence of random phase drifts.

  17. Statistical analysis of effective singular values in matrix rank determination

    NASA Technical Reports Server (NTRS)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using the SVD (singular value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbations of singular values and statistical significance testing. Threshold bounds for the perturbation due to finite-precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
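
    A simple, hedged illustration of thresholding singular values against a noise-dependent bound is given below. The rule-of-thumb threshold alpha * noise_std * sqrt(max(m, n)) is a generic choice, not the specific statistical test derived in the paper, and the function name effective_rank is illustrative.

        import numpy as np

        def effective_rank(A, noise_std, alpha=3.0):
            """Estimate the effective rank of a noisy matrix from its singular values.

            Singular values below a perturbation threshold are treated as noise.
            """
            m, n = A.shape
            s = np.linalg.svd(A, compute_uv=False)
            threshold = alpha * noise_std * np.sqrt(max(m, n))
            return int(np.sum(s > threshold)), s, threshold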

  18. Dual Vector Spaces and Physical Singularities

    NASA Astrophysics Data System (ADS)

    Rowlands, Peter

    Though we often refer to 3-D vector space as constructed from points, there is no mechanism from within its definition for doing this. In particular, space, on its own, cannot accommodate the singularities that we call fundamental particles. This requires a commutative combination of space as we know it with another 3-D vector space, which is dual to the first (in a physical sense). The combination of the two spaces generates a nilpotent quantum mechanics/quantum field theory, which incorporates exact supersymmetry and ultimately removes the anomalies due to self-interaction. Among the many natural consequences of the dual space formalism are half-integral spin for fermions, zitterbewegung, Berry phase and a zero norm Berwald-Moor metric for fermionic states.

  19. Singularly Perturbed Lie Bracket Approximation

    DOE PAGES

    Durr, Hans-Bernd; Krstic, Miroslav; Scheinker, Alexander; ...

    2015-03-27

    Here, we consider the interconnection of two dynamical systems where one has an input-affine vector field. We show that by employing a singular perturbation analysis and the Lie bracket approximation technique, the stability of the overall system can be analyzed by regarding the stability properties of two reduced, uncoupled systems.

  20. Singular value decomposition approach to the yttrium occurrence in mineral maps of rare earth element ores using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Romppanen, Sari; Häkkänen, Heikki; Kaski, Saara

    2017-08-01

    Laser-induced breakdown spectroscopy (LIBS) has been used in analysis of rare earth element (REE) ores from the geological formation of Norra Kärr Alkaline Complex in southern Sweden. Yttrium has been detected in eudialyte (Na15 Ca6(Fe,Mn)3 Zr3Si(Si25O73)(O,OH,H2O)3 (OH,Cl)2) and catapleiite (Ca/Na2ZrSi3O9·2H2O). Singular value decomposition (SVD) has been employed in classification of the minerals in the rock samples and maps representing the mineralogy in the sampled area have been constructed. Based on the SVD classification the percentage of the yttrium-bearing ore minerals can be calculated even in fine-grained rock samples.

  1. Using Rényi parameter to improve the predictive power of singular value decomposition entropy on stock market

    NASA Astrophysics Data System (ADS)

    Jiang, Jiaqi; Gu, Rongbao

    2016-04-01

    This paper generalizes the traditional singular value decomposition entropy by incorporating the order q of the Rényi entropy. We analyze the predictive power of the entropy based on the trajectory matrix using Shanghai Composite Index (SCI) and Dow Jones Index (DJI) data in both a static test and a dynamic test. In the static test on the SCI, the global Granger causality tests all turn out to be significant regardless of the order selected, but the entropy fails to show much predictability for the American stock market. In the dynamic test, we find that the predictive power can be significantly improved for the SCI by our generalized method but not for the DJI. This suggests that noise and errors affect the SCI more frequently than the DJI. Finally, results obtained using different sliding-window lengths also corroborate this finding.
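
    The order-q generalization of SVD entropy on a trajectory matrix can be sketched as below. This is a minimal sketch under assumed conventions (windows of length L stacked into the trajectory matrix, singular values normalized to a probability vector); the window length and the function name renyi_svd_entropy are illustrative, and q -> 1 recovers the usual Shannon SVD entropy.

        import numpy as np

        def renyi_svd_entropy(returns, L=20, q=2.0):
            """Rényi-generalized SVD entropy of a price/return series."""
            X = np.lib.stride_tricks.sliding_window_view(np.asarray(returns, float), L)
            s = np.linalg.svd(X, compute_uv=False)   # singular values of the trajectory matrix
            p = s / s.sum()                          # normalized singular spectrum
            if np.isclose(q, 1.0):
                return -np.sum(p * np.log(p + 1e-15))       # Shannon limit
            return np.log(np.sum(p ** q)) / (1.0 - q)       # Rényi entropy of order q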

  2. Singular value decomposition of received ultrasound signal to separate tissue, blood flow, and cavitation signals

    NASA Astrophysics Data System (ADS)

    Ikeda, Hayato; Nagaoka, Ryo; Lafond, Maxime; Yoshizawa, Shin; Iwasaki, Ryosuke; Maeda, Moe; Umemura, Shin-ichiro; Saijo, Yoshifumi

    2018-07-01

    High-intensity focused ultrasound is a noninvasive treatment applied by externally irradiating ultrasound to the body to coagulate the target tissue thermally. Recently, it has been proposed as a noninvasive treatment for vascular occlusion to replace conventional invasive treatments. Cavitation bubbles generated by the focused ultrasound can accelerate the effect of thermal coagulation. However, the tissues surrounding the target may be damaged by cavitation bubbles generated outside the treatment area. Conventional methods based on Doppler analysis only in the time domain are not suitable for monitoring blood flow in the presence of cavitation. In this study, we proposed a novel filtering method based on the differences in spatiotemporal characteristics, to separate tissue, blood flow, and cavitation by employing singular value decomposition. Signals from cavitation and blood flow were extracted automatically using spatial and temporal covariance matrices.
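
    A hedged sketch of the core separation idea follows (not the authors' spatiotemporal-covariance pipeline): the frames are stacked into a space-time Casorati matrix and singular-value bands are assigned to tissue, blood, and the cavitation/noise residual. The band boundaries n_tissue and n_blood are illustrative assumptions.

```python
# Hedged sketch (not the authors' covariance-based pipeline): SVD filtering of a
# space-time (Casorati) matrix, assigning singular-value bands to tissue, blood
# flow, and the cavitation/noise residual. Band sizes are illustrative.
import numpy as np

def svd_separate(frames, n_tissue=2, n_blood=10):
    """frames: (n_frames, nz, nx) block of beamformed frames."""
    nt, nz, nx = frames.shape
    casorati = frames.reshape(nt, nz * nx).T           # pixels x time
    U, s, Vh = np.linalg.svd(casorati, full_matrices=False)

    def band(lo, hi):                                   # partial reconstruction
        return (U[:, lo:hi] * s[lo:hi]) @ Vh[lo:hi, :]

    tissue = band(0, n_tissue)                          # slowly varying clutter
    blood = band(n_tissue, n_tissue + n_blood)          # intermediate components
    rest = casorati - tissue - blood                    # cavitation + noise
    reshape = lambda m: m.T.reshape(nt, nz, nx)
    return reshape(tissue), reshape(blood), reshape(rest)

# toy usage with random data standing in for IQ frames
rng = np.random.default_rng(1)
t, b, c = svd_separate(rng.standard_normal((64, 32, 32)))
print(t.shape, b.shape, c.shape)
```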

  3. A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive DUAL-PCNN in NSST domain

    NASA Astrophysics Data System (ADS)

    Cheng, Boyang; Jin, Longxu; Li, Guoning

    2018-06-01

    The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are input to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), which comes from local-area singular value decomposition in each source image, is used as the adaptive linking strength to enhance fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes are used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluation show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.

  4. Separable decompositions of bipartite mixed states

    NASA Astrophysics Data System (ADS)

    Li, Jun-Li; Qiao, Cong-Feng

    2018-04-01

    We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of Bloch vectors are consistent with that of the correlation matrix, and the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples for the separable decompositions of bipartite mixed states are presented for illustration.

  5. Three-dimensional modelling and geothermal process simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, K.L.

    1990-01-01

    The subsurface geological model or 3-D GIS is constructed from three kinds of objects, which are a lithotope (in boundary representation), a number of fault systems, and volumetric textures (vector fields). The chief task of the model is to yield an estimate of the conductance tensors (fluid permeability and thermal conductivity) throughout an array of voxels. This is input as material properties to a FEHM numerical physical process model. The main task of the FEHM process model is to distinguish regions of convective from regions of conductive heat flow, and to estimate the fluid phase, pressure and flow paths. The temperature, geochemical, and seismic data provide the physical constraints on the process. The conductance tensors in the Franciscan Complex are to be derived by the addition of two components. The isotropic component is a stochastic spatial variable due to disruption of lithologies in melange. The deviatoric component is deterministic, due to smoothness and continuity in the textural vector fields. This decomposition probably also applies to the engineering hydrogeological properties of shallow terrestrial fluvial systems. However there are differences in quantity. The isotropic component is much more variable in the Franciscan, to the point where volumetric averages are misleading, and it may be necessary to select that component from several, discrete possible states. The deviatoric component is interpolated using a textural vector field. The Franciscan field is much more complicated, and contains internal singularities. 27 refs., 10 figs.

  6. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    NASA Astrophysics Data System (ADS)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    The new method of decomposition of the Earth's climate system into well separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) a generalization of MSSA (Multichannel Singular Spectrum Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expansion of both real SST data and SST data generated numerically (several times longer) in the STEOF basis; (iii) use of the numerically produced STEOF basis for the exclusion of 'too slow' (and thus not correctly represented) processes from the real data. Applying the method, with vector time series generated numerically by the INM RAS Coupled Climate Model [2], allows two climatic modes with noticeably different time scales (3-5 and 9-11 years) to be separated from real SST anomaly data [3]. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/

  7. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
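
    To make the final step concrete, here is a minimal sketch (my own, under the assumption that the knots t and data parameters u are already fixed; in the paper they come from the coarse knot scheme and the firefly-optimized parameterization): the remaining linear least-squares problem for the control points is solved through the SVD pseudo-inverse.

```python
# Minimal sketch (my own, assuming the knots t and data parameters u are already
# fixed; in the paper they come from the coarse knot scheme and the firefly
# optimisation): the control points solve a linear least-squares problem via the
# SVD pseudo-inverse.
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis_matrix(u, t, k):
    """Collocation matrix with entry [i, j] = B_j,k(u_i)."""
    n = len(t) - k - 1
    B = np.empty((len(u), n))
    for j in range(n):
        c = np.zeros(n)
        c[j] = 1.0
        B[:, j] = BSpline(t, c, k)(u)
    return B

def fit_control_points(points, u, t, k=3):
    """points: (m, d) data; returns (n, d) control points."""
    B = bspline_basis_matrix(u, t, k)
    U, s, Vh = np.linalg.svd(B, full_matrices=False)
    s_inv = np.zeros_like(s)
    mask = s > 1e-10 * s[0]
    s_inv[mask] = 1.0 / s[mask]
    return Vh.T @ (s_inv[:, None] * (U.T @ points))

# toy usage: noisy samples of a closed curve, uniform parameters, clamped knots
rng = np.random.default_rng(2)
m, k = 60, 3
u = np.linspace(0.0, 1.0, m)
pts = np.c_[np.cos(2 * np.pi * u), np.sin(2 * np.pi * u)]
pts += 0.01 * rng.standard_normal(pts.shape)
t = np.r_[[0.0] * (k + 1), np.linspace(0.2, 0.8, 4), [1.0] * (k + 1)]
print(fit_control_points(pts, u, t, k).shape)   # (8, 2) control points
```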

  8. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  9. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  10. Numerical Methods for Partial Differential Equations.

    DTIC Science & Technology

    1984-01-09

    ... where R is a k x k upper triangular matrix and Y is lower triangular, together with the parameters of the rotations that make up Q ... then x is a left singular vector of B and y is a right singular vector of B [5]. Thus we may attempt the rotation x ← c·x + σ·y, y ← −σ·x + c·y to find the eigendecomposition of C ... after a symmetric interchange of rows and columns corresponding to the permutation (n+1, 1, n+2, 2, ..., 2n, n), where x, y, and c ...

  11. Extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and SVD.

    PubMed

    Bullinaria, John A; Levy, Joseph P

    2012-09-01

    In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.
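
    A toy sketch of the kind of SVD-based pipeline investigated (tiny corpus, illustrative window size, not the authors' corpora or evaluation tasks): positive pointwise-mutual-information vectors built from co-occurrence counts, reduced with an SVD, and compared with a cosine measure.

```python
# Toy sketch (tiny corpus and window, not the authors' data or evaluation):
# positive pointwise mutual information vectors reduced with an SVD, compared
# with a cosine measure.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window = 2

counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            counts[idx[w], idx[corpus[j]]] += 1

total = counts.sum()
pw = counts.sum(axis=1, keepdims=True) / total
pc = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts / total) / (pw * pc))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)   # positive PMI

U, s, _ = np.linalg.svd(ppmi, full_matrices=False)
k = 4
vectors = U[:, :k] * s[:k]                                # reduced word vectors

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

print(cosine(vectors[idx["cat"]], vectors[idx["dog"]]))
```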

  12. Face recognition using tridiagonal matrix enhanced multivariance products representation

    NASA Astrophysics Data System (ADS)

    Özay, Evrim Korkmaz

    2017-01-01

    This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm, since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR, because the algorithm is constructed from recursions of certain arithmetic operations without requiring any iteration. The algorithm has been trained and tested with the ORL face image database, comprising 400 grayscale images of 40 different people. Finally, TMEMPR's performance has been compared with that of SVD.

  13. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with a spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with a spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
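
    For reference, the ideal SVD-based precoding assumed in the PEP analysis can be sketched as below (a generic NumPy illustration, not the paper's simulator): precoding with V and combining with U^H turn the channel into parallel streams with gains given by the singular values.

```python
# Generic NumPy illustration (not the paper's simulator) of ideal SVD precoding:
# precoding with V and combining with U^H diagonalize the MIMO channel into
# parallel streams whose gains are the singular values.
import numpy as np

rng = np.random.default_rng(3)
nt = nr = 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
U, s, Vh = np.linalg.svd(H)

x = (rng.integers(0, 2, nt) * 2 - 1).astype(complex)    # toy BPSK symbol vector
x_pre = Vh.conj().T @ x                                  # transmit precoding with V
noise = 0.05 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x_pre + noise
r = U.conj().T @ y                                       # receive combining with U^H

print(np.round(r / s, 2))   # per-stream estimates, close to the sent symbols
print(x)
```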

  14. AGT relations for abelian quiver gauge theories on ALE spaces

    NASA Astrophysics Data System (ADS)

    Pedrini, Mattia; Sala, Francesco; Szabo, Richard J.

    2016-05-01

    We construct level one dominant representations of the affine Kac-Moody algebra gl̂k on the equivariant cohomology groups of moduli spaces of rank one framed sheaves on the orbifold compactification of the minimal resolution Xk of the Ak-1 toric singularity C2 /Zk. We show that the direct sum of the fundamental classes of these moduli spaces is a Whittaker vector for gl̂k, which proves the AGT correspondence for pure N = 2 U(1) gauge theory on Xk. We consider Carlsson-Okounkov type Ext-bundles over products of the moduli spaces and use their Euler classes to define vertex operators. Under the decomposition gl̂k ≃ h ⊕sl̂k, these vertex operators decompose as products of bosonic exponentials associated to the Heisenberg algebra h and primary fields of sl̂k. We use these operators to prove the AGT correspondence for N = 2 superconformal abelian quiver gauge theories on Xk.

  15. Analysis of protein circular dichroism spectra for secondary structure using a simple matrix multiplication.

    PubMed

    Compton, L A; Johnson, W C

    1986-05-15

    Inverse circular dichroism (CD) spectra are presented for each of the five major secondary structures of proteins: alpha-helix, antiparallel and parallel beta-sheet, beta-turn, and other (random) structures. The fraction of each secondary structure in a protein is predicted by forming the dot product of the corresponding inverse CD spectrum, expressed as a vector, with the CD spectrum of the protein digitized in the same way. We show how this method is based on the construction of the generalized inverse from the singular value decomposition of a set of CD spectra corresponding to proteins whose secondary structures are known from X-ray crystallography. These inverse spectra compute secondary structure directly from protein CD spectra without resorting to least-squares fitting and standard matrix inversion techniques. In addition, spectra corresponding to the individual secondary structures, analogous to the CD spectra of synthetic polypeptides, are generated from the five most significant CD eigenvectors.
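
    A minimal numerical sketch of the construction follows (synthetic spectra and fractions; the real basis set comes from proteins of known crystal structure): the truncated SVD of the reference CD matrix gives a generalized inverse, and multiplying it by the known structure fractions yields the "inverse CD spectra" that map a new spectrum to fractions by a dot product.

```python
# Minimal numerical sketch (synthetic spectra and fractions, not the measured
# basis set): a generalized inverse built from the truncated SVD of reference CD
# spectra maps a new spectrum to secondary-structure fractions by a dot product.
import numpy as np

rng = np.random.default_rng(4)
n_wl, n_ref, n_struct = 45, 16, 5          # wavelengths, reference proteins, structures
C = rng.standard_normal((n_wl, n_ref))     # columns: reference CD spectra (synthetic)
F = rng.dirichlet(np.ones(n_struct), size=n_ref).T   # known fractions, 5 x n_ref

U, s, Vh = np.linalg.svd(C, full_matrices=False)
k = 5                                      # keep the most significant components
C_pinv = Vh[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
X = F @ C_pinv                             # rows: "inverse CD spectra"

new_spectrum = C[:, 0]                     # pretend this is a new measurement
print(np.round(X @ new_spectrum, 3))       # predicted structure fractions
```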

  16. A reduced-order model from high-dimensional frictional hysteresis

    PubMed Central

    Biswas, Saurabh; Chatterjee, Anindya

    2014-01-01

    Hysteresis in material behaviour includes both signum nonlinearities as well as high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
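
    The basis-selection step can be illustrated with a short sketch (synthetic snapshots standing in for the frictional system's states): the reduced basis is taken as the leading left singular vectors of a snapshot matrix, with the order chosen from the singular-value energy.

```python
# Short sketch of SVD-based basis selection (synthetic snapshots stand in for
# the frictional system's states): the reduced basis is formed from the leading
# left singular vectors of a snapshot matrix.
import numpy as np

rng = np.random.default_rng(5)
n_dof, n_snap = 200, 80
modes = rng.standard_normal((n_dof, 3))                       # hidden low-rank structure
snapshots = modes @ rng.standard_normal((3, n_snap))
snapshots += 1e-3 * rng.standard_normal((n_dof, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1                   # keep 99.9% of the energy
basis = U[:, :r]

q = basis.T @ snapshots                                       # reduced coordinates
err = np.linalg.norm(snapshots - basis @ q) / np.linalg.norm(snapshots)
print(r, err)
```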

  17. Elastic and acoustic wavefield decompositions and application to reverse time migrations

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong

    P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migrations (RTMs). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield vector component information, we propose and compare two vector decomposition methods, which preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields and generate PP and PS images, we create a new 2D migration context for isotropic, elastic RTM which includes PS vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle velocity definitions of the decomposed P- and S-wave Poynting vectors. Then an excitation-amplitude image condition that scales the receiver wavelet by the source vector magnitude produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. It thus simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs); it is less effort to generate ADCIGs from vector data than from scalar data. Besides P- and S-wave decomposition, separation of up- and down-going waves is also a part of the processing of multi-component recorded data and propagating wavefields. A complex-trace-based up/down separation approach is extended from acoustic to elastic, and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency. Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield snapshots. Poynting vectors obtained from the particle-velocity and stress fields after P/S and up/down decompositions are much more accurate than those without. The up/down separation algorithm is also applicable in acoustic RTMs, where both (forward-time extrapolated) source and (reverse-time extrapolated) receiver wavefields are decomposed into up-going and down-going parts. Together with the crosscorrelation imaging condition, four images (down-up, up-down, up-up and down-down) are generated, which facilitate the analysis of artifacts and the imaging ability of the four images. Artifacts may exist in all the decomposed images, but their positions and types are different. The causes of artifacts in different images are explained and illustrated with sketches and numerical tests.

  18. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
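
    An illustrative sketch of the disclosed idea follows (variable names and the toy video are mine): a single SVD plus one small eigen-solve give DMD modes, the near-zero-frequency modes reconstruct the low-rank background, and the residual is the sparse foreground.

```python
# Illustrative sketch (variable names and toy video are mine): one SVD and one
# small eigen-solve give DMD modes; modes with near-zero frequency reconstruct
# the background, and the residual is the sparse foreground.
import numpy as np

def dmd_background_foreground(frames, rank=10, tol=1e-2):
    nt, h, w = frames.shape
    X = frames.reshape(nt, h * w).T.astype(float)        # pixels x time
    X1, X2 = X[:, :-1], X[:, 1:]

    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W        # DMD modes

    omega = np.log(eigvals.astype(complex))              # continuous-time frequencies
    bg = np.abs(omega) < tol                             # ~zero-frequency modes
    b = np.linalg.lstsq(Phi, X[:, 0].astype(complex), rcond=None)[0]
    dynamics = b[bg, None] * np.exp(np.outer(omega[bg], np.arange(nt)))
    background = (Phi[:, bg] @ dynamics).real
    foreground = X - background
    return background.T.reshape(nt, h, w), foreground.T.reshape(nt, h, w)

# toy usage: a static gradient background plus a moving bright blob
frames = np.tile(np.linspace(0, 1, 32), (16, 32, 1))
for k in range(16):
    frames[k, 5:9, 2 * k: 2 * k + 4] += 2.0
bg, fg = dmd_background_foreground(frames, rank=5)
print(bg.shape, fg.shape)
```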

  19. Cost Prediction via Quantitative Analysis of Complexity in U.S. Navy Shipbuilding

    DTIC Science & Technology

    2014-06-01

    in regards to the analysis of advanced sensors and weaponry, the summation of singular values via a singular value decomposition will be used in the...In the DDG 51 class, the Main Reduction Gear (MRG) reduces the 3600-RPM produced by the LM-2500 gas turbines to approximately 168-RPM (at full...RDT&E efforts are currently underway to reduce complexity of the MCS by developing a wireless approach that will concurrently boost the host ship’s

  20. Interior sound field control using generalized singular value decomposition in the frequency domain.

    PubMed

    Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane

    2017-01-01

    The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such a control and avoids the modification of the external sound field by the control sources by the approximation of the sources as monopole and radial dipole transducers. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control sources array while leaving the exterior sound field almost unchanged. Proofs of concept are provided through simulations achieved for interior problems by simulations in a free field scenario with circular arrays and in a reflective environment with square arrays.

  1. Forward Looking Radar Imaging by Truncated Singular Value Decomposition and Its Application for Adverse Weather Aircraft Landing.

    PubMed

    Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu

    2015-06-18

    The forward looking radar imaging task is a practical and challenging problem for the adverse-weather aircraft landing industry. Deconvolution methods can realize forward looking imaging, but they often lead to noise amplification in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for adverse-weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. Then, we convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition (TSVD) method. The key issue of selecting the truncation parameter is addressed using the generalized cross validation approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.
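
    A hedged sketch of the reconstruction step on a generic 1-D deconvolution (not the radar geometry or a real antenna pattern): truncated-SVD inversion, with the truncation order chosen by generalized cross validation.

```python
# Hedged sketch on a generic 1-D deconvolution (not the radar geometry or a real
# antenna pattern): truncated-SVD inversion with the truncation order chosen by
# generalized cross validation (GCV).
import numpy as np

def tsvd_solve(A, y, k):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return Vh[:k].T @ ((U.T @ y)[:k] / s[:k])

def gcv_choose_k(A, y):
    m = len(y)
    scores = [np.sum((A @ tsvd_solve(A, y, k) - y) ** 2) / (m - k) ** 2
              for k in range(1, min(A.shape))]
    return int(np.argmin(scores)) + 1

# toy problem: Gaussian beam-pattern blur of a sparse scene plus noise
n = 120
x_true = np.zeros(n)
x_true[[30, 60, 61, 90]] = [1.0, 0.8, 0.9, 0.5]
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 4.0) ** 2)
rng = np.random.default_rng(6)
y = A @ x_true + 0.01 * rng.standard_normal(n)

k = gcv_choose_k(A, y)
x_hat = tsvd_solve(A, y, k)
print("truncation order:", k, " error:", round(np.linalg.norm(x_hat - x_true), 3))
```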

  2. Multiset singular value decomposition for joint analysis of multi-modal data: application to fingerprint analysis

    NASA Astrophysics Data System (ADS)

    Emge, Darren K.; Adalı, Tülay

    2014-06-01

    As the availability and use of imaging methodologies continues to increase, there is a fundamental need to jointly analyze data that are collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differ, implying that the observation lengths of each modality can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Through simulations, the intermodal relationships across the different modalities revealed by the MSVD are shown. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.

  3. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    NASA Astrophysics Data System (ADS)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.

  4. Singular Value Decomposition Method to Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Freed, Jack H

    2017-11-16

    Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude in improvement in SNR obviates the need for regularization, which achieves a compromise between canceling effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r) between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.

  5. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.

  6. Incoherent averaging of phase singularities in speckle-shearing interferometry.

    PubMed

    Mantel, Klaus; Nercissian, Vanusch; Lindlein, Norbert

    2014-08-01

    Interferometric speckle techniques are plagued by the omnipresence of phase singularities, impairing the phase unwrapping process. To reduce the number of phase singularities by physical means, an incoherent averaging of multiple speckle fields may be applied. It turns out, however, that the results may strongly deviate from the expected √N behavior. Using speckle-shearing interferometry as an example, we investigate the mechanism behind the reduction of phase singularities, both by calculations and by computer simulations. Key to an understanding of the reduction mechanism during incoherent averaging is the representation of the physical averaging process in terms of certain vector fields associated with each speckle field.

  7. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition

    PubMed Central

    Lv, Yong; Song, Gangbing

    2018-01-01

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine the effective IMFs; the characteristic frequencies of the multiple faults can then be extracted. Numerical simulations and an application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510

  8. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    PubMed

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third-order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine the effective IMFs; the characteristic frequencies of the multiple faults can then be extracted. Numerical simulations and an application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.
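
    A hedged NumPy sketch of the HOSVD step follows (the significance rule is my own illustrative choice, not the paper's criterion): mode unfoldings of a channels x IMFs x samples tensor give per-mode singular values, and the count of significant values along the IMF mode serves as a fault-number estimate.

```python
# Hedged NumPy sketch of HOSVD on a channels x IMFs x samples tensor; the
# significance threshold below is an illustrative rule, not the paper's.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    factors, spectra = [], []
    for mode in range(T.ndim):
        U, s, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U)
        spectra.append(s)
    core = T
    for mode, U in enumerate(factors):                    # core = T x_n U_n^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors, spectra

# toy IMF tensor: 3 channels x 4 IMFs x 2048 samples, two synthetic fault tones
rng = np.random.default_rng(7)
t = np.arange(2048) / 2048.0
imfs = np.stack([np.sin(2 * np.pi * 37 * t), np.sin(2 * np.pi * 89 * t),
                 0.1 * rng.standard_normal(2048), 0.1 * rng.standard_normal(2048)])
tensor = np.stack([imfs + 0.05 * rng.standard_normal(imfs.shape) for _ in range(3)])

core, factors, spectra = hosvd(tensor)
s_imf = spectra[1]                                        # IMF-mode singular values
print("significant components:", int(np.sum(s_imf > 0.25 * s_imf[0])))
```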

  9. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, for example by enhancing, adding or estimating it, so as to improve the signal-to-noise ratio (SNR) and extract fault signatures. Among such methods, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs as well as a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for the low-SNR case, which is particularly effective for signature enhancement. For approximating weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Therein, the sliding window for projecting the phase space is optimally designed by correlation minimization, while the appropriate singular order for the local reconfiguration used to estimate the noise is determined by the inflection point of the increment trend of the normalized singular entropy. Furthermore, the noise estimation strategy, i.e. how to select between the two estimation techniques and handle the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm the capability of noise estimation. Finally, the method is applied to detect a local wear fault on a dual-axis stabilized platform and a gear crack on an operating electric locomotive, verifying its effectiveness and feasibility.

  10. Ghost instabilities of cosmological models with vector fields nonminimally coupled to the curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himmetoglu, Burak; Peloso, Marco; Contaldi, Carlo R.

    2009-12-15

    We prove that many cosmological models characterized by vectors nonminimally coupled to the curvature (such as the Turner-Widrow mechanism for the production of magnetic fields during inflation, and models of vector inflation or vector curvaton) contain ghosts. The ghosts are associated with the longitudinal vector polarization present in these models and are found from studying the sign of the eigenvalues of the kinetic matrix for the physical perturbations. Ghosts introduce two main problems: (1) they make the theories ill defined at the quantum level in the high energy/subhorizon regime (and create serious problems for finding a well-behaved UV completion), and (2) they create an instability already at the linearized level. This happens because the eigenvalue corresponding to the ghost crosses zero during the cosmological evolution. At this point the linearized equations for the perturbations become singular (we show that this happens for all the models mentioned above). We explicitly solve the equations in the simplest cases of a vector without a vacuum expectation value in a Friedmann-Robertson-Walker geometry, and of a vector with a vacuum expectation value plus a cosmological constant, and we show that indeed the solutions of the linearized equations diverge when these equations become singular.

  11. Computing singularities of perturbation series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kvaal, Simen; Jarlebring, Elias; Michiels, Wim

    2011-03-15

    Many properties of current ab initio approaches to the quantum many-body problem, both perturbational and otherwise, are related to the singularity structure of the Rayleigh-Schroedinger perturbation series. A numerical procedure is presented that in principle computes the complete set of singularities, including the dominant singularity which limits the radius of convergence. The method approximates the singularities as eigenvalues of a certain generalized eigenvalue equation which is solved using iterative techniques. It relies on computation of the action of the Hamiltonian matrix on a vector and does not rely on the terms in the perturbation series. The method can be useful for studying perturbation series of typical systems of moderate size, for fundamental development of resummation schemes, and for understanding the structure of singularities for typical systems. Some illustrative model problems are studied, including a helium-like model with δ-function interactions for which Moeller-Plesset perturbation theory is considered and the radius of convergence found.

  12. The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons

    NASA Astrophysics Data System (ADS)

    Kweon, Jae Ryong

    2017-03-01

    In this paper we study the compressible Stokes equations with no-slip boundary condition on non-convex polygons and show a best regularity result that the solution can have without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition u = w + ∇φ_R, with div w = 0 and a potential φ_R. Here w is the solution for the incompressible Stokes problem and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at non-convex vertices.

  13. Geostrophic balance with a full Coriolis Force: implications for low latitude studies

    NASA Technical Reports Server (NTRS)

    Juarez, M. de la Torre

    2002-01-01

    In its standard form, geostrophic balance uses a partial representation of the Coriolis force. The resulting formulation has a singularity at the equator, and violates mass and momentum conservation. When the horizontal projection of the planetary rotation vector is considered, the singularity at the equator disappears, continuity can be preserved, and quasigeostrophy can be formulated at planetary scale.

  14. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    PubMed

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized first into a vector pattern by some technique like concatenation. However, some implicit structural or local contextual information may be lost in this transformation. According to the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns, but also efficiently reduce computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0% ± 0.59%, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff Distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results, compared with three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM.

  15. DUALITY IN MULTIVARIATE RECEPTOR MODEL. (R831078)

    EPA Science Inventory

    Multivariate receptor models are used for source apportionment of multiple observations of compositional data of air pollutants that obey mass conservation. Singular value decomposition of the data leads to two sets of eigenvectors. One set of eigenvectors spans a space in whi...

  16. svdPPCS: an effective singular value decomposition-based method for conserved and divergent co-expression gene module identification.

    PubMed

    Zhang, Wensheng; Edwards, Andrea; Fan, Wei; Zhu, Dongxiao; Zhang, Kun

    2010-06-22

    Comparative analysis of gene expression profiling of multiple biological categories, such as different species of organisms or different kinds of tissue, promises to enhance the fundamental understanding of the universality as well as the specialization of mechanisms and related biological themes. Grouping genes with a similar expression pattern or exhibiting co-expression together is a starting point in understanding and analyzing gene expression data. In recent literature, gene module level analysis is advocated in order to understand biological network design and system behaviors in disease and life processes; however, practical difficulties often lie in the implementation of existing methods. Using the singular value decomposition (SVD) technique, we developed a new computational tool, named svdPPCS (SVD-based Pattern Pairing and Chart Splitting), to identify conserved and divergent co-expression modules of two sets of microarray experiments. In the proposed method, gene modules are identified by splitting the two-way chart coordinated with a pair of left singular vectors factorized from the gene expression matrices of the two biological categories. Importantly, the cutoffs are determined by a data-driven algorithm using the well-defined statistic, SVD-p. The implementation was illustrated on two time series microarray data sets generated from samples of accessory gland (ACG) and malpighian tubule (MT) tissues of the W118 line of Drosophila. Two conserved modules and six divergent modules, each of which has a unique characteristic profile across tissue kinds and aging processes, were identified. The number of genes contained in these modules ranged from five to a few hundred. Three to over a hundred GO terms were over-represented in individual modules with FDR < 0.1. One divergent module suggested a tissue-specific relationship between the expression of mitochondrion-related genes and the aging process. This finding, together with others, may be of biological significance. The validity of the proposed SVD-based method was further verified by a simulation study, as well as by comparisons with regression analysis and cubic spline regression analysis plus PAM-based clustering. svdPPCS is a novel computational tool for the comparative analysis of transcriptional profiling. It especially fits the comparison of time series data of related organisms or different tissues of the same organism under equivalent or similar experimental conditions. The general scheme can be directly extended to the comparison of multiple data sets. It can also be applied to the integration of data sets from different platforms and of different sources.

  17. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  18. Poisson traces, D-modules, and symplectic resolutions.

    PubMed

    Etingof, Pavel; Schedler, Travis

    2018-01-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  19. Diagnosis of Temporomandibular Disorders Using Local Binary Patterns.

    PubMed

    Haghnegahdar, A A; Kolahi, S; Khojastepour, L; Tajeripour, F

    2018-03-01

    Temporomandibular joint disorder (TMD) might be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histogram-oriented gradients on the recorded images as a diagnostic tool in TMD assessment. CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and 2 coronal cuts were prepared from each condyle; the images were limited to the head of the mandibular condyle. In order to extract features from the images, we first use LBP and then the histogram of oriented gradients. To reduce dimensionality, the linear-algebra Singular Value Decomposition (SVD) is applied to the feature-vector matrix of all images. For evaluation, we used K nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers, and Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. The K nearest neighbor classifier achieves very good accuracy (0.9242); moreover, it has desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers have lower accuracy, sensitivity and specificity. We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages.
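
    A hedged sketch of the processing chain on toy data follows (illustrative LBP/HOG parameters and random images in place of CBCT slices): LBP histograms and HOG descriptors per image, SVD-based dimensionality reduction, and a K-NN classifier.

```python
# Hedged sketch on toy data (illustrative LBP/HOG parameters, random images in
# place of CBCT slices): LBP histograms + HOG descriptors per image, SVD-based
# dimensionality reduction, and K-NN classification.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.neighbors import KNeighborsClassifier

def describe(image, P=8, R=1.0):
    lbp = local_binary_pattern(image, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hog_vec = hog(image, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])

rng = np.random.default_rng(8)
images = (rng.random((40, 64, 64)) * 255).astype(np.uint8)   # toy "slices"
labels = np.repeat([0, 1], 20)

X = np.stack([describe(im) for im in images])
Xc = X - X.mean(axis=0)
_, _, Vh = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vh[:10].T                                           # 10-D reduced features

perm = rng.permutation(len(images))
Z, y = Z[perm], labels[perm]
clf = KNeighborsClassifier(n_neighbors=3).fit(Z[:30], y[:30])
print("toy hold-out accuracy:", clf.score(Z[30:], y[30:]))
```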

  20. Chaotic attractors of relaxation oscillators

    NASA Astrophysics Data System (ADS)

    Guckenheimer, John; Wechselberger, Martin; Young, Lai-Sang

    2006-03-01

    We develop a general technique for proving the existence of chaotic attractors for three-dimensional vector fields with two time scales. Our results connect two important areas of dynamical systems: the theory of chaotic attractors for discrete two-dimensional Henon-like maps and geometric singular perturbation theory. Two-dimensional Henon-like maps are diffeomorphisms that limit on non-invertible one-dimensional maps. Wang and Young formulated hypotheses that suffice to prove the existence of chaotic attractors in these families. Three-dimensional singularly perturbed vector fields have return maps that are also two-dimensional diffeomorphisms limiting on one-dimensional maps. We describe a generic mechanism that produces folds in these return maps and demonstrate that the Wang-Young hypotheses are satisfied. Our analysis requires a careful study of the convergence of the return maps to their singular limits in the Ck topology for k >= 3. The theoretical results are illustrated with a numerical study of a variant of the forced van der Pol oscillator.

  1. Extracting time-frequency feature of single-channel vastus medialis EMG signals for knee exercise pattern recognition.

    PubMed

    Zhang, Yi; Li, Peiyang; Zhu, Xuyang; Su, Steven W; Guo, Qing; Xu, Peng; Yao, Dezhong

    2017-01-01

    The EMG signal indicates the electrophysiological response to activities of daily living, particularly to lower-limb knee exercises. Literature reports have shown numerous benefits of wavelet analysis in EMG feature extraction for pattern recognition. However, its application to typical knee exercises when using only a single EMG channel is limited. In this study, three types of knee exercises, i.e., flexion of the leg up (standing), hip extension from a sitting position (sitting) and gait (walking), are investigated in 14 healthy untrained subjects, while EMG signals from the vastus medialis muscle group and a goniometer on the knee joint of the monitored leg are synchronously recorded. Four types of lower-limb motions, including standing, sitting, stance phase of walking, and swing phase of walking, are segmented. A Wavelet Transform (WT) based Singular Value Decomposition (SVD) approach is proposed for the classification of the four lower-limb motions using a single-channel EMG signal from the vastus medialis muscle group. Based on lower-limb motions from all subjects, the combination of five-level wavelet decomposition and SVD is used to form the feature vector. A Support Vector Machine (SVM) is then configured to build a multiple-subject classifier, for which the subject-independent accuracy is reported across all subjects for the classification of the four types of lower-limb motions. In order to fully characterize the classification performance, EMG features from the time domain (e.g., Mean Absolute Value (MAV), Root-Mean-Square (RMS), integrated EMG (iEMG), Zero Crossing (ZC)) and the frequency domain (e.g., Mean Frequency (MNF) and Median Frequency (MDF)) are also used to classify lower-limb motions. Five-fold cross-validation is performed and repeated fifty times in order to obtain a robust subject-independent accuracy. Results show that the proposed WT-based SVD approach achieves a classification accuracy of 91.85% ± 0.88%, which outperforms the other feature models.
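
    A hedged sketch of the feature construction follows (the zero-padding layout and all parameters are my own simplifications, not the paper's exact scheme): a five-level wavelet decomposition of each segment, the singular values of the stacked coefficient matrix as the feature vector, and an SVM classifier.

```python
# Hedged sketch (the zero-padding layout and parameters are my simplifications,
# not the paper's exact scheme): five-level wavelet decomposition, singular
# values of the stacked coefficient matrix as features, and an SVM classifier.
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wt_svd_features(emg, wavelet="db4", level=5):
    coeffs = pywt.wavedec(emg, wavelet, level=level)
    width = max(len(c) for c in coeffs)
    M = np.zeros((len(coeffs), width))
    for i, c in enumerate(coeffs):
        M[i, :len(c)] = c                      # zero-pad sub-bands to equal length
    return np.linalg.svd(M, compute_uv=False)  # singular values as the feature vector

# toy data: two synthetic "motions" with different dominant rhythms
rng = np.random.default_rng(11)
def fake_emg(f, n=1024):
    t = np.arange(n) / n
    return np.sin(2 * np.pi * f * t) * rng.standard_normal(n)

X = np.stack([wt_svd_features(fake_emg(f)) for f in [20] * 30 + [80] * 30])
y = np.array([0] * 30 + [1] * 30)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X[::2], y[::2])
print("toy accuracy:", clf.score(X[1::2], y[1::2]))
```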

  2. Determination of Rayleigh wave ellipticity using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli Joseph

    We present a single-station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 sec the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 sec, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (~2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e., Love waves, body waves, and tilt noise) in the single-station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single-station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.
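
    A single-frequency sketch of the measurement follows (synthetic Rayleigh-like noise and illustrative segmenting; the real analysis runs over 1-hr windows and many frequency bins): the SVD of an averaged 3-by-3 spectral covariance matrix gives the dominant polarization vector, from which the H/V ratio, the horizontal-vertical phase, and the singular-value dominance follow.

```python
# Single-frequency sketch (synthetic Rayleigh-like noise, illustrative
# segmenting): SVD of an averaged 3x3 spectral covariance matrix gives the
# dominant polarization vector, from which H/V, phase, and dominance follow.
import numpy as np

def fdpa_hv(z, north, east, fs, freq, nseg=32):
    seg = len(z) // nseg
    win = np.hanning(seg)
    bin_idx = int(round(freq * seg / fs))
    C = np.zeros((3, 3), dtype=complex)
    for k in range(nseg):
        sl = slice(k * seg, (k + 1) * seg)
        spec = np.array([np.fft.rfft(win * comp[sl])[bin_idx]
                         for comp in (z, north, east)])
        C += np.outer(spec, spec.conj())
    U, s, _ = np.linalg.svd(C / nseg)
    v = U[:, 0]                                        # primary singular vector (Z, N, E)
    h = np.hypot(abs(v[1]), abs(v[2]))                 # horizontal amplitude
    major_h = v[1] if abs(v[1]) >= abs(v[2]) else v[2]
    phase = np.degrees(np.angle(major_h / v[0]))       # horizontal-vertical phase
    return h / abs(v[0]), phase, s[0] / s.sum()

# synthetic test: retrograde elliptical motion (90 deg Z-N phase) plus noise
fs, f0 = 50.0, 0.125
t = np.arange(0, 3600, 1 / fs)
rng = np.random.default_rng(9)
z = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)
north = 0.7 * np.cos(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)
east = 0.1 * rng.standard_normal(t.size)
print(fdpa_hv(z, north, east, fs, f0))
```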

  3. Determination of Rayleigh wave ellipticity across the Earthscope Transportable Array using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli; Lin, Fan-Chi; Koper, Keith D.

    2017-01-01

    We present a single station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V) using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 s the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 s, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (˜2 per cent) and significantly higher (>20 per cent), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e. Love waves, body waves, and tilt noise) in the single station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves and tilt noise.

  4. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    Summary: We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. Rank consistency and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
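
    A minimal NumPy sketch of an adaptively soft-thresholded SVD of the kind described above; the inverse-power weighting used here (weights decreasing with the singular value) and the tuning constants are illustrative choices, not the authors' exact estimator.

      import numpy as np

      def adaptive_svt(Y, lam, gamma=1.0, eps=1e-8):
          """Adaptively soft-threshold the singular values of Y: larger singular
          values receive smaller weights and are therefore shrunk less."""
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          w = (s + eps) ** (-gamma)               # weights decrease with the singular value
          s_shrunk = np.maximum(s - lam * w, 0.0)
          return (U * s_shrunk) @ Vt

      rng = np.random.default_rng(2)
      L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))   # rank-3 signal
      Y = L + 0.1 * rng.standard_normal((50, 40))                        # noisy observation
      L_hat = adaptive_svt(Y, lam=3.0)                                   # lam chosen above the noise level
      print(np.linalg.matrix_rank(L_hat, tol=1e-6))                      # recovers rank 3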

  5. Asymmetric color image encryption based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping

    2017-02-01

    A novel asymmetric color image encryption approach using singular value decomposition (SVD) is proposed. With the proposed method, the original color image is encrypted into a ciphertext shown as an indexed image. The red, green and blue components of the color image are encoded into a complex function which is then separated into U, S and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while implementing phase-truncation. Diagonal entries of the three diagonal matrices produced by the SVD are extracted, scrambled, and combined to construct the colormap of the ciphertext. Thus, the encrypted indexed image occupies less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from the phase-truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm. We also analyze the security of the proposed system.

  6. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based upon resolving joint velocities with reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven-revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify reciprocal screw quantities used in the resolution of a particular joint rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations of the manipulator, and identifying singular configurations and their characteristics. The work therefore serves two purposes: a new general method for the solution of the inverse velocity problem is presented, and complete analytical expressions are derived for the resolution of the joint rates of a seven-degree-of-freedom manipulator useful for telerobotic and industrial robotic applications.

  7. Robust and Efficient Biomolecular Clustering of Tumor Based on ${p}$ -Norm Singular Value Decomposition.

    PubMed

    Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan

    2017-07-01

    High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek a low-rank approximation matrix to the biomolecular data. To enhance robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from the Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, the experiments show that the proposed method is more efficient for processing higher-dimensional data, with good robustness, stability, and superior time performance.
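
    A small sketch of the evaluation step under simple assumptions, with NumPy and scikit-learn: a low-rank approximation of the (samples x genes) matrix followed by K-means clustering. Ordinary truncated SVD stands in here for the paper's robust p-norm SVD, and the synthetic three-group data are placeholders.

      import numpy as np
      from sklearn.cluster import KMeans

      def lowrank_then_kmeans(X, rank, n_clusters, seed=0):
          """Cluster the rows of X on its rank-r SVD approximation; plain SVD
          stands in for the robust PSVD of the paper."""
          U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
          X_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # low-rank approximation
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
          return km.fit_predict(X_r)

      rng = np.random.default_rng(3)
      # three synthetic "tumor subtypes" with shifted means, 500 genes each
      X = np.vstack([rng.normal(m, 1.0, size=(30, 500)) for m in (0.0, 1.5, 3.0)])
      labels = lowrank_then_kmeans(X, rank=5, n_clusters=3)
      print(np.bincount(labels))                               # roughly 30 samples per cluster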

  8. Application of higher order SVD to vibration-based system identification and damage detection

    NASA Astrophysics Data System (ADS)

    Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang

    2012-04-01

    Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification methods (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study three different algorithms for signal processing and system identification are proposed: SSA, SSI-COV and SSI-DATA. Based on the subspace and null space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithm is used to process the shaking table test data of a six-story steel frame. Features contained in the vibration data are extracted by the proposed method. Damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.

  9. Singular value decomposition based impulsive noise reduction in multi-frequency phase-sensitive demodulation of electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Hao, Zhenhua; Cui, Ziqiang; Yue, Shihong; Wang, Huaxiang

    2018-06-01

    As an important technique in electrical impedance tomography (EIT), multi-frequency phase-sensitive demodulation (PSD) can be viewed as a matched filter for the measurement signals and as an optimal linear filter in the case of Gaussian noise. However, the additive noise often has impulsive characteristics, so effectively reducing impulsive noise in multi-frequency PSD is a challenging task. In this paper, an approach for impulsive noise reduction in the multi-frequency PSD of EIT is presented. Instead of a linear filter, a singular value decomposition filter is employed as the pre-filtering module prior to PSD; it has the advantages of zero phase shift, little distortion, and a high signal-to-noise ratio (SNR) in digital signal processing. Simulation and experimental results demonstrate that the proposed method effectively eliminates the influence of impulsive noise in multi-frequency PSD and achieves a higher SNR and a smaller demodulation error.
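
    A NumPy-only sketch of a zero-phase SVD pre-filter of the general kind described: Hankel embedding of the measurement frame, truncation to the dominant singular components, and anti-diagonal averaging back to a 1-D signal. The embedding length, retained rank, and test signal are illustrative assumptions.

      import numpy as np

      def svd_prefilter(x, embed=64, rank=4):
          """Zero-phase SVD filter: Hankel embedding, rank truncation, and
          anti-diagonal averaging back to a 1-D signal."""
          n = len(x)
          H = np.array([x[i:i + embed] for i in range(n - embed + 1)])   # Hankel matrix
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          H_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]                     # keep dominant components
          y, counts = np.zeros(n), np.zeros(n)
          for i in range(H_r.shape[0]):                                  # average the anti-diagonals
              y[i:i + embed] += H_r[i]
              counts[i:i + embed] += 1
          return y / counts

      fs = 10_000
      t = np.arange(0, 0.05, 1 / fs)
      clean = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
      noisy = clean.copy()
      noisy[np.random.default_rng(4).integers(0, len(t), 10)] += 5.0     # impulsive outliers
      print(np.round(np.std(noisy - clean), 3),
            np.round(np.std(svd_prefilter(noisy) - clean), 3))           # residual drops after filtering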

  10. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches offer a large design space with few variables. Commonly used parametric methods are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-form Deformation method. Methods extended from two-dimensional parametric approaches show promise for aircraft modeling. Since different parametric methods have different characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
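
    A brief sketch of the SVD parametric idea for airfoil-like shapes under simple assumptions: a library of training shapes (here random smooth perturbations of a toy baseline thickness distribution) is decomposed by SVD, and a few leading modes become the design variables. The shape library and mode count are illustrative only.

      import numpy as np

      rng = np.random.default_rng(5)
      x = np.linspace(0.0, 1.0, 80)                      # chordwise stations
      baseline = 0.12 * (np.sqrt(x) - x)                 # toy thickness distribution
      # library of training shapes: smooth random perturbations of the baseline
      library = np.array([baseline + 0.01 * np.sin(np.pi * x * rng.uniform(1, 4))
                          * rng.standard_normal() for _ in range(40)])

      mean_shape = library.mean(axis=0)
      U, s, Vt = np.linalg.svd(library - mean_shape, full_matrices=False)
      modes = Vt[:5]                                     # 5 leading SVD modes = design variables

      def airfoil_from_coeffs(a):
          """New shape expressed as the mean plus a combination of SVD modes."""
          return mean_shape + a @ modes

      new_shape = airfoil_from_coeffs(np.array([0.02, -0.01, 0.0, 0.0, 0.005]))
      print(new_shape.shape)                             # (80,) shape ready for analysis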

  11. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    PubMed

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
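
    A compact sketch of the pseudoinverse deconvolution discussed above, assuming NumPy: the mixture spectrum is resolved into component contributions by a truncated-SVD pseudoinverse. The three Gaussian "pure spectra" and the noise level are synthetic placeholders, not the paper's pH-indicator data.

      import numpy as np

      # A: columns are the pure spectra of three components at m wavelengths (synthetic);
      # b: the measured mixture spectrum.
      rng = np.random.default_rng(6)
      wavelengths = np.linspace(400, 700, 120)
      A = np.column_stack([np.exp(-((wavelengths - c) / 40.0) ** 2) for c in (450, 550, 650)])
      x_true = np.array([0.2, 0.5, 0.3])
      b = A @ x_true + 0.01 * rng.standard_normal(len(wavelengths))

      # Truncated-SVD pseudoinverse: discard singular values below a relative tolerance
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      keep = s > 1e-8 * s[0]
      x_hat = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
      print(np.round(x_hat, 3))          # component concentrations recovered from the mixture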

  12. Sliding window denoising K-Singular Value Decomposition and its application on rolling bearing impact fault diagnosis

    NASA Astrophysics Data System (ADS)

    Yang, Honggang; Lin, Huibin; Ding, Kang

    2018-05-01

    The performance of sparse feature extraction by the commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected for rolling bearing diagnosis; furthermore, the calculation is relatively slow and the dictionary becomes highly redundant when the fault signal is relatively long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of the time-domain signal containing impacts to perform sliding-window dictionary learning and selects an optimal pattern carrying the oscillatory information of the rolling bearing fault according to a maximum variance principle. An inner product operation between the optimal pattern and the whole fault signal is performed to enhance the signature of the impacts' occurrence moments. Lastly, the signal is reconstructed at the peak points of the inner product to realize the extraction of the rolling bearing fault features. Both simulation and experiments verify that the method can extract the fault features effectively.

  13. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of the FXCT imaging is that of computed tomography of the first generation. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to the projections by the Monte Carlo simulation and the experiment to confirm its effectiveness.

  14. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and the combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) interpretation with CSS/PCC can be more awkward because sensitivity and interdependence are considered separately, and (2) the identifiability statistic requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods, like Markov chain Monte Carlo, given common nonlinear processes and often even more nonlinear models.

  15. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

    This paper presents a general method for calculating the inverse kinematics, with singularity and joint-limit robustness, for both redundant and non-redundant serial-link manipulators. A damped least squares inverse of the Jacobian is used with dynamic weighting matrices to approximate the solution; the weighting is used to reduce the differential motion of specific joints. The algorithm gives an exact solution away from singularities and joint limits, and an approximate solution at or near singularities and/or joint limits. The procedure is implemented here for a six-d.o.f. teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
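
    A minimal NumPy sketch of one damped-least-squares velocity step with a dynamic diagonal joint weighting that grows as a joint approaches a limit. The damping factor, weighting law, and random Jacobian are illustrative assumptions rather than the paper's implementation.

      import numpy as np

      def dls_step(J, dx, q, q_min, q_max, lam=0.05, margin=0.1):
          """One damped-least-squares inverse-velocity step; joints close to a
          limit get a larger weight and therefore move less."""
          closeness = np.maximum(0.0, margin - np.minimum(q - q_min, q_max - q))
          W_inv = np.diag(1.0 / (1.0 + 10.0 * closeness / margin))   # inverse dynamic weighting
          core = np.linalg.solve(J @ W_inv @ J.T + lam**2 * np.eye(J.shape[0]), dx)
          return W_inv @ J.T @ core    # exact away from singularities, bounded near them

      J = np.random.default_rng(7).standard_normal((6, 7))           # 6-DOF task, 7-joint arm
      q = np.zeros(7)
      dq = dls_step(J, np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0]), q, q - 1.0, q + 1.0)
      print(np.round(dq, 4))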

  16. Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations

    NASA Astrophysics Data System (ADS)

    Phan, Tuoc

    2017-12-01

    This paper studies the Sobolev regularity for weak solutions of a class of singular quasi-linear parabolic problems of the form u_t - div[A(x, t, u, ∇u)] = div[F] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case that the vector coefficients A are discontinuous and singular in the (x, t)-variables, and dependent on the solution u. Global and interior weighted W^{1,p}(Ω_T, ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are new even for linear equations, and for ω = 1, because of the singularity of the coefficients in the (x, t)-variables.

  17. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate and, at the same time, simple models of both the corresponding sub-systems and the system as a whole. Two new methods for decomposing the Earth's climate system into well separated modes have been discussed in recent works. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delayed correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents a correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for the linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls off much more sharply [5-7]. However, neglecting time-lag correlations introduces an uncontrolled mode-selection error that grows with the mode time scale. In this report we combine the two methods in such a way that the resulting algorithm allows nonlinear spatio-temporal modes to be constructed. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to yield adequate and, at the same time, simple ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7. Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.

  18. Computation of ancestry scores with mixed families and unrelated individuals.

    PubMed

    Zhou, Yi-Hui; Marron, James S; Wright, Fred A

    2018-03-01

    The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero. We consider two main novel strategies: (i) matrix substitution based on decomposition of a target family-orthogonalized covariance matrix, and (ii) using family-averaged data to obtain loadings. We illustrate the performance via simulations, including resampling from 1000 Genomes Project data, and analysis of a cystic fibrosis dataset. The matrix substitution approach has similar performance to the current standard, but is simple and uses only a genotype covariance matrix, while the family-average method shows superior performance. Our approaches are accompanied by novel ancillary approaches that provide considerable insight, including individual-specific eigenvalue scree plots. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  19. Macroscopic theory of dark sector

    NASA Astrophysics Data System (ADS)

    Meierovich, Boris

    A simple Lagrangian with the squared covariant divergence of a vector field as a kinetic term turns out to be an adequate tool for a macroscopic description of the dark sector. The zero-mass field acts as the dark energy. Its energy-momentum tensor is a simple additive contribution to the cosmological constant [1]. Space-like and time-like massive vector fields describe two different forms of dark matter. The space-like massive vector field is attractive. It is responsible for the observed plateau in galaxy rotation curves [2]. The time-like massive field displays repulsive elasticity. In balance with dark energy and ordinary matter it provides a four-parameter family of regular solutions of the Einstein equations describing different possible cosmological and oscillating non-singular scenarios of evolution of the universe [3]. In particular, the singular big bang turns into a regular inflation-like transition from contraction to expansion with accelerated expansion at late times. The fine-tuned Friedman-Robertson-Walker singular solution corresponds to the particular limiting case at the boundary of existence of regular oscillating solutions in the absence of vector fields. The simplicity of the generally covariant expression for the energy-momentum tensor allows the main properties of the dark sector to be analysed analytically and avoids unnecessary model assumptions. It opens the possibility of tracing how the additional attraction of the space-like dark matter, dominating on the galaxy scale, transforms into the elastic repulsion of the time-like dark matter, dominating on the scale of the Universe. 1. B. E. Meierovich. "Vector fields in multidimensional cosmology". Phys. Rev. D 84, 064037 (2011). 2. B. E. Meierovich. "Galaxy rotation curves driven by massive vector fields: Key to the theory of the dark sector". Phys. Rev. D 87, 103510 (2013). 3. B. E. Meierovich. "Towards the theory of the evolution of the Universe". Phys. Rev. D 85, 123544 (2012).

  20. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898

  1. Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J.C.H.; Lung, H.; Katsumata, Y.

    1995-12-01

    In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as the PE, each rated at 1.6 GFLOPS. The high-speed crossbar network, rated at 800 MB/s, provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve MG problems on the VPP500.

  2. Light focusing through a multiple scattering medium: ab initio computer simulation

    NASA Astrophysics Data System (ADS)

    Danko, Oleksandr; Danko, Volodymyr; Kovalenko, Andrey

    2018-01-01

    The present study considers an ab initio computer simulation of light focusing through a complex scattering medium. The focusing is performed by shaping the incident light beam in order to obtain a small focused spot on the opposite side of the scattering layer. MSTM software (Auburn University) is used to simulate the propagation of an arbitrary monochromatic Gaussian beam and obtain the 2D distribution of the optical field in the selected plane of the investigated volume. Based on the set of incident and scattered fields, the pair of right and left eigenbases and the corresponding singular values were calculated. A pair of right and left eigenmodes, together with the corresponding singular value, constitutes a transmittance eigenchannel of the disordered medium. Thus, the scattering process is described in three steps: 1) decomposition of the initial field in the right eigenbasis; 2) scaling of the decomposition coefficients by the corresponding singular values; 3) assembly of the scattered field as a composition of the weighted left eigenmodes. The basis fields are represented as linear combinations of the original Gaussian beams and scattered fields. It was demonstrated that 60 independent control channels provide focusing of the light into a spot with a minimal radius of approximately 0.4 μm at half maximum. The intensity enhancement in the focal plane was equal to 68, which coincided with the theoretical prediction.
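
    A NumPy sketch of the three-step eigenchannel picture above for a fictitious random transmission matrix: decompose the target field in the left eigenmodes, rescale by the inverse singular values, and assemble the incident field from the right eigenmodes, keeping 60 control channels as in the study. The matrix itself is random, so only the mechanics (not the MSTM physics) are illustrated.

      import numpy as np

      rng = np.random.default_rng(8)
      n_out, n_in = 256, 128
      T = (rng.standard_normal((n_out, n_in))
           + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2)   # fictitious transmission matrix

      U, s, Vh = np.linalg.svd(T, full_matrices=False)               # transmission eigenchannels

      def focusing_input(target_idx, n_channels=60):
          """Incident field that focuses on one output pixel using the first
          n_channels transmission eigenchannels."""
          e_out = np.zeros(n_out, dtype=complex)
          e_out[target_idx] = 1.0
          coeff = U.conj().T @ e_out               # step 1: decompose the target in the left modes
          coeff[n_channels:] = 0.0                 # keep only the controlled channels
          e_in = Vh.conj().T @ (coeff / s)         # steps 2-3: rescale and assemble the input
          return e_in / np.linalg.norm(e_in)

      out = T @ focusing_input(100)
      enhancement = np.abs(out[100])**2 / np.mean(np.abs(np.delete(out, 100))**2)
      print(round(float(enhancement), 1))          # intensity enhancement at the focus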

  3. Argand-plane vorticity singularities in complex scalar optical fields: an experimental study using optical speckle.

    PubMed

    Rothschild, Freda; Bishop, Alexis I; Kitchen, Marcus J; Paganin, David M

    2014-03-24

    The Cornu spiral is, in essence, the image resulting from an Argand-plane map associated with monochromatic complex scalar plane waves diffracting from an infinite edge. Argand-plane maps can be useful in the analysis of more general optical fields. We experimentally study particular features of Argand-plane mappings known as "vorticity singularities" that are associated with mapping continuous single-valued complex scalar speckle fields to the Argand plane. Vorticity singularities possess a hierarchy of Argand-plane catastrophes including the fold, cusp and elliptic umbilic. We also confirm their connection to vortices in two-dimensional complex scalar waves. The study of vorticity singularities may also have implications for higher-dimensional fields such as coherence functions and multi-component fields such as vector and spinor fields.

  4. Compressible Navier-Stokes Equations in a Polyhedral Cylinder with Inflow Boundary Condition

    NASA Astrophysics Data System (ADS)

    Kwon, Ohsung; Kweon, Jae Ryong

    2018-06-01

    In this paper our concern is with the singularity and regularity of compressible flows through a non-convex edge in R^3. The flows are governed by the compressible Navier-Stokes equations on an infinite cylinder that has the non-convex edge on the inflow boundary. We split the edge singularity off from the velocity vector by means of a Poisson problem and show that the remainder is twice differentiable, while the edge singularity is observed to be propagated into the interior of the cylinder by the transport character of the continuity equation. An interior surface layer starting at the edge is generated and is not Lipschitz continuous due to the singularity. The density function shows a very steep change near the interface and its normal derivative has a jump discontinuity across it.

  5. Asymptotic behavior of dynamical variables and naked singularity formation in spherically symmetric gravitational collapse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawakami, Hayato; Mitsuda, Eiji; Nambu, Yasusada

    In considering the gravitational collapse of matter, it is an important problem to clarify what kind of conditions leads to the formation of a naked singularity. For this purpose, we apply the 1+3 orthonormal frame formalism introduced by Uggla et al. to the spherically symmetric gravitational collapse of a perfect fluid. This formalism allows us to construct an autonomous system of evolution and constraint equations for scale-invariant dynamical variables normalized by the volume expansion rate of the timelike orthonormal frame vector. We investigate the asymptotic evolution of such dynamical variables towards the formation of a central singularity and present a conjecture that the steep spatial gradient for the normalized density function is a characteristic of naked singularity formation.

  6. Holographic entanglement entropy in Suzuki-Trotter decomposition of spin systems.

    PubMed

    Matsueda, Hiroaki

    2012-03-01

    In quantum spin chains at criticality, two types of scaling for the entanglement entropy exist: one comes from conformal field theory (CFT), and the other is for entanglement support of matrix product state (MPS) approximation. On the other hand, the quantum spin-chain models can be mapped onto two-dimensional (2D) classical ones by the Suzuki-Trotter decomposition. Motivated by the scaling and the mapping, we introduce information entropy for 2D classical spin configurations as well as a spectrum, and examine their basic properties in the Ising and the three-state Potts models on the square lattice. They are defined by the singular values of the reduced density matrix for a Monte Carlo snapshot. We find scaling relations of the entropy compatible with the CFT and the MPS results. Thus, we propose that the entropy is a kind of "holographic" entanglement entropy. At T(c), the spin configuration is fractal, and various sizes of ordered clusters coexist. The singular values then automatically decompose the original snapshot into a set of images with different length scales. This is the origin of the scaling. In contrast to the MPS scaling, long-range spin correlation can be described by only a few singular values. Furthermore, the spectrum, which is a set of logarithms of the singular values, also seems to be a holographic entanglement spectrum. We find multiple gaps in the spectrum, and in contrast to the topological phases, the low-lying levels below the gap represent spontaneous symmetry breaking. These contrasts are strong evidence of the dual nature of the holography. Based on these observations, we discuss the amount of information contained in one snapshot.

  7. Polar decomposition for attitude determination from vector observations

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1993-01-01

    This work treats the problem of weighted least squares fitting of a 3D Euclidean-coordinate transformation matrix to a set of unit vectors measured in the reference and transformed coordinates. A closed-form analytic solution to the problem is re-derived. The fact that the solution is the closest orthogonal matrix to some matrix defined on the measured vectors and their weights is clearly demonstrated. Several known algorithms for computing the analytic closed-form solution are considered. An algorithm is discussed which is based on the polar decomposition of matrices into the closest unitary matrix to the decomposed matrix and a Hermitian matrix. A somewhat longer improved algorithm is suggested too. A comparison of several algorithms is carried out using simulated data as well as real data from the Upper Atmosphere Research Satellite. The comparison is based on accuracy and time consumption. It is concluded that the algorithms based on polar decomposition yield a simple although somewhat less accurate solution. The precision of the latter algorithms increases with the number of measured vectors and with the accuracy of their measurement.
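
    A short sketch of the SVD route to the closed-form solution mentioned above, assuming NumPy: the weighted attitude profile matrix is built from the vector pairs and the closest proper orthogonal matrix is obtained from its SVD. Frame conventions, weights, and the synthetic test rotation are illustrative.

      import numpy as np

      def svd_attitude(b_vecs, r_vecs, weights):
          """Proper orthogonal A minimizing sum_i w_i ||b_i - A r_i||^2
          (b_i measured in the transformed frame, r_i known in the reference frame)."""
          B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
          U, _, Vt = np.linalg.svd(B)
          d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # enforce det(A) = +1
          return U @ np.diag([1.0, 1.0, d]) @ Vt

      rng = np.random.default_rng(9)
      A_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
      A_true *= np.sign(np.linalg.det(A_true))                # make it a proper rotation
      r = [v / np.linalg.norm(v) for v in rng.standard_normal((5, 3))]
      b = [A_true @ v + 0.01 * rng.standard_normal(3) for v in r]
      A_hat = svd_attitude(b, r, np.ones(5))
      print(np.round(A_hat - A_true, 3))                      # near-zero residual matrix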

  8. MISR Level 2 TOA/Cloud Versioning

    Atmospheric Science Data Center

    2017-10-11

    ... public release. Add trap singular matrix condition. Add test for invalid look vectors. Use different metadata to test for validity of time tags. Fix incorrectly addressed array. Introduced bug ...

  9. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.

  10. Sampling strategies based on singular vectors for assimilated models in ocean forecasting systems

    NASA Astrophysics Data System (ADS)

    Fattorini, Maria; Brandini, Carlo; Ortolani, Alberto

    2016-04-01

    Meteorological and oceanographic models need observations not only as ground truth to verify the quality of the models, but also to keep the model forecast error acceptable: through data assimilation techniques, which merge measured and modelled data, the natural divergence of numerical solutions from reality can be reduced or controlled and a more reliable solution, called the analysis, is computed. Although this concept is valid in general, its application, especially in oceanography, raises many problems for three main reasons: the difficulty ocean models have in reaching an acceptable state of equilibrium, the high cost of measurements, and the difficulty of carrying them out. The performance of data assimilation procedures depends on the particular observation network in use, well beyond the background quality and the assimilation method used. In this study we present some results concerning the great impact of the dataset configuration, in particular the measurement positions, on the overall forecasting reliability of an ocean model. The aim is to identify operational criteria to support the design of marine observation networks at the regional scale. In order to identify the observation network able to minimize the forecast error, a methodology based on singular vector decomposition of the tangent linear model is proposed. Such a method can give strong indications about the local error dynamics. In addition, to avoid redundancy in the information contained in the data, a minimal distance among data positions has been chosen on the basis of a spatial correlation analysis of the hydrodynamic fields under investigation. This methodology has been applied to the choice of data positions starting from simplified models, like an ideal double-gyre model and a quasi-geostrophic one. Model configurations and data assimilation are based on available ROMS routines, where a variational assimilation algorithm (4D-Var) is included as part of the code. These first applications have provided encouraging results in terms of increased predictability time and reduced forecast error, also improving the quality of the analysis used to recover the real circulation patterns from a first guess quite far from the real state.

  11. Efficient morse decompositions of vector fields.

    PubMed

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful for applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.

  12. Decomposition-aggregation stability analysis. [for large scale dynamic systems with application to spinning Skylab control system

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Weissenberger, S.; Cuk, S. M.

    1973-01-01

    This report presents the development and description of the decomposition aggregation approach to stability investigations of high dimension mathematical models of dynamic systems. The high dimension vector differential equation describing a large dynamic system is decomposed into a number of lower dimension vector differential equations which represent interconnected subsystems. Then a method is described by which the stability properties of each subsystem are aggregated into a single vector Liapunov function, representing the aggregate system model, consisting of subsystem Liapunov functions as components. A linear vector differential inequality is then formed in terms of the vector Liapunov function. The matrix of the model, which reflects the stability properties of the subsystems and the nature of their interconnections, is analyzed to conclude over-all system stability characteristics. The technique is applied in detail to investigate the stability characteristics of a dynamic model of a hypothetical spinning Skylab.

  13. The continuous spin representations of the Poincare and super-Poincare groups and their construction by the Inonu-Wigner group contraction

    NASA Astrophysics Data System (ADS)

    Khan, Abu M. A. S.

    We study the continuous spin representation (CSR) of the Poincare group in arbitrary dimensions. In d dimensions, the CSRs are characterized by the length of the light-cone vector and the Dynkin labels of the SO(d-3) short little group which leaves the light-cone vector invariant. In addition to these, a solid angle Od-3 which specifies the direction of the light-cone vector is also required to label the states. We also find supersymmetric generalizations of the CSRs. In four dimensions, the supermultiplet contains one bosonic and one fermionic CSRs which transform into each other under the action of the supercharges. In a five dimensional case, the supermultiplet contains two bosonic and two fermionic CSRs which is like N = 2 supersymmetry in four dimensions. When constructed using Grassmann parameters, the light-cone vector becomes nilpotent. This makes the representation finite dimensional, but at the expense of introducing central charges even though the representation is massless. This leads to zero or negative norm states. The nilpotent constructions are valid only for even dimensions. We also show how the CSRs in four dimensions can be obtained from five dimensions by the combinations of Kaluza-Klein (KK) dimensional reduction and the Inonu-Wigner group contraction. The group contraction is a singular transformation. We show that the group contraction is equivalent to imposing periodic boundary condition along one direction and taking a double singular limit. In this form the contraction parameter is interpreted as the inverse KK radius. We apply this technique to both five dimensional regular massless and massive representations. For the regular massless case, we find that the contraction gives the CSR in four dimensions under a double singular limit and the representation wavefunction is the Bessel function. For the massive case, we use Majorana's infinite component theory as a model for the SO(4) little group. In this case, a triple singular limit is required to yield any CSR in four dimensions. The representation wavefunction is the Bessel function, as expected, but the scale factor is not the length of the light-cone vector. The amplitude and the scale factor are implicit functions of the parameter y which is a ratio of the internal and external coordinates. We also state under what conditions our solutions become identical to Wigner's solution.

  14. Diagnosis of Temporomandibular Disorders Using Local Binary Patterns

    PubMed Central

    Haghnegahdar, A.A.; Kolahi, S.; Khojastepour, L.; Tajeripour, F.

    2018-01-01

    Background: Temporomandibular joint disorder (TMD) might be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histogram-oriented gradients on the recorded images as a diagnostic tool in TMD assessment. Material and Methods: CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and two coronal cuts were prepared from each condyle; the images were limited to the head of the mandibular condyle. To extract image features, we first use LBP and then the histogram of oriented gradients. To reduce dimensionality, singular value decomposition (SVD) is applied to the feature-vector matrix of all images. For evaluation, we used K nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers, and Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. Results: The K nearest neighbor classifier achieves very good accuracy (0.9242), as well as desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers have lower accuracy, sensitivity and specificity. Conclusion: We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages. PMID:29732343

  15. Electric and magnetic polarization singularities of first-order Laguerre-Gaussian beams diffracted at a half-plane screen.

    PubMed

    Luo, Yamei; Gao, Zenghui; Tang, Bihua; Lü, Baida

    2013-08-01

    Based on the vector Fresnel diffraction integrals, analytical expressions for the electric and magnetic components of first-order Laguerre-Gaussian beams diffracted at a half-plane screen are derived and used to study the electric and magnetic polarization singularities in the diffraction field for both two- and three-dimensional (2D and 3D) cases. It is shown that there exist 2D and 3D electric and magnetic polarization singularities in the diffraction field, which in general do not coincide with each other. By suitably varying the waist width ratio, off-axis displacement parameter, amplitude ratio, or propagation distance, the motion, pair-creation, and annihilation of circular polarization singularities, and the motion of linear polarization singularities take place in 2D and 3D electric and magnetic fields. The V point, at which two circular polarization singularities with the same topological charge but opposite handedness collide, appears in the 2D electric field under certain conditions in the diffraction field and free-space propagation. A comparison with the free-space propagation is also made.

  16. Fully pseudospectral solution of the conformally invariant wave equation near the cylinder at spacelike infinity. III: nonspherical Schwarzschild waves and singularities at null infinity

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Hennig, Jörg

    2018-03-01

    We extend earlier numerical and analytical considerations of the conformally invariant wave equation on a Schwarzschild background from the case of spherically symmetric solutions, discussed in Frauendiener and Hennig (2017 Class. Quantum Grav. 34 045005), to the case of general, nonsymmetric solutions. A key element of our approach is the modern standard representation of spacelike infinity as a cylinder. With a decomposition into spherical harmonics, we reduce the four-dimensional wave equation to a family of two-dimensional equations. These equations can be used to study the behaviour at the cylinder, where the solutions turn out to have, in general, logarithmic singularities at infinitely many orders. We derive regularity conditions that may be imposed on the initial data, in order to avoid the first singular terms. We then demonstrate that the fully pseudospectral time evolution scheme can be applied to this problem leading to a highly accurate numerical reconstruction of the nonsymmetric solutions. We are particularly interested in the behaviour of the solutions at future null infinity, and we numerically show that the singularities spread to null infinity from the critical set, where the cylinder approaches null infinity. The observed numerical behaviour is consistent with similar logarithmic singularities found analytically on the critical set. Finally, we demonstrate that even solutions with singularities at low orders can be obtained with high accuracy by virtue of a coordinate transformation that converts solutions with logarithmic singularities into smooth solutions.

  17. Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions

    DTIC Science & Technology

    2012-03-22

  18. The Rigid Orthogonal Procrustes Rotation Problem

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.

    2006-01-01

    The problem of rotating a matrix orthogonally to a best least squares fit with another matrix of the same order has a closed-form solution based on a singular value decomposition. The optimal rotation matrix is not necessarily rigid, but may also involve a reflection. In some applications, only rigid rotations are permitted. Gower (1976) has…
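
    A minimal NumPy sketch of the SVD-based orthogonal Procrustes fit, with the rigid (reflection-free) variant obtained by forcing the determinant of the rotation to +1; the random test matrices are placeholders.

      import numpy as np

      def procrustes_rotation(A, B, rigid=True):
          """Orthogonal T minimizing ||A T - B||_F; with rigid=True a possible
          reflection is removed by forcing det(T) = +1."""
          U, _, Vt = np.linalg.svd(A.T @ B)
          T = U @ Vt
          if rigid and np.linalg.det(T) < 0:
              U[:, -1] *= -1.0                    # flip the smallest singular direction
              T = U @ Vt
          return T

      rng = np.random.default_rng(10)
      A = rng.standard_normal((20, 3))
      R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
      R *= np.sign(np.linalg.det(R))              # make the target a proper rotation
      B = A @ R + 0.01 * rng.standard_normal((20, 3))
      T = procrustes_rotation(A, B)
      print(round(float(np.linalg.norm(A @ T - B)), 3), round(float(np.linalg.det(T)), 3))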

  19. Constraint elimination in dynamical systems

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Likins, P. W.

    1989-01-01

    Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
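
    A brief sketch of the constraint-elimination idea under simplified assumptions (NumPy, a linear constraint with zero right-hand side): the SVD supplies a null-space basis N of the constraint matrix, and the minimum-dimension equations of motion follow from N^T M N and N^T f.

      import numpy as np

      def reduce_constrained_dynamics(M, f, A, tol=1e-10):
          """Given M qdd = f subject to A qdd = 0 (constraints already
          differentiated, zero right-hand side for simplicity), eliminate the
          constraints using an SVD null-space basis N of A."""
          U, s, Vt = np.linalg.svd(A)
          rank = int(np.sum(s > tol * s[0]))
          N = Vt[rank:].T                         # columns span the null space of A
          M_r = N.T @ M @ N                       # minimum-dimension mass matrix
          f_r = N.T @ f
          u = np.linalg.solve(M_r, f_r)           # independent accelerations
          return N @ u                            # full-coordinate accelerations

      M = np.diag([2.0, 1.0, 3.0])
      f = np.array([1.0, 0.0, -1.0])
      A = np.array([[1.0, -1.0, 0.0]])            # one constraint: qdd1 = qdd2
      print(np.round(reduce_constrained_dynamics(M, f, A), 4))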

  20. Data Mining in Earth System Science (DMESS 2011)

    Treesearch

    Forrest M. Hoffman; J. Walter Larson; Richard Tran Mills; Bhorn-Gustaf Brooks; Auroop R. Ganguly; William Hargrove; et al

    2011-01-01

    From field-scale measurements to global climate simulations and remote sensing, the growing body of very large and long time series Earth science data are increasingly difficult to analyze, visualize, and interpret. Data mining, information theoretic, and machine learning techniques—such as cluster analysis, singular value decomposition, block entropy, Fourier and...

  1. An asymptotic induced numerical method for the convection-diffusion-reaction equation

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.; Sorensen, Danny C.

    1988-01-01

    A parallel algorithm for the efficient solution of a time dependent reaction convection diffusion equation with small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run time results demonstrate the viability of the method.

  2. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
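
    A minimal sketch of the SVD step at the core of CMIF/FSDD, assuming NumPy and SciPy: the output cross-PSD matrix is estimated at each frequency line with scipy.signal.csd and its first singular value is used as a mode indicator. The two synthetic noise-driven "modes" and all settings are illustrative.

      import numpy as np
      from scipy.signal import csd, fftconvolve

      def cmif(outputs, fs, nperseg=1024):
          """First singular value of the output cross-PSD matrix per frequency line."""
          n = outputs.shape[0]
          f, _ = csd(outputs[0], outputs[0], fs=fs, nperseg=nperseg)
          G = np.zeros((len(f), n, n), dtype=complex)
          for i in range(n):
              for j in range(n):
                  _, G[:, i, j] = csd(outputs[i], outputs[j], fs=fs, nperseg=nperseg)
          return f, np.linalg.svd(G, compute_uv=False)[:, 0]

      fs = 1024
      t = np.arange(0, 30, 1 / fs)
      th = np.arange(0, 1, 1 / fs)                           # 1-s impulse responses
      rng = np.random.default_rng(11)
      h1 = np.exp(-5 * th) * np.sin(2 * np.pi * 50 * th)     # 50 Hz mode
      h2 = np.exp(-5 * th) * np.sin(2 * np.pi * 120 * th)    # 120 Hz mode
      y1 = fftconvolve(rng.standard_normal(t.size), h1)[:t.size]
      y2 = fftconvolve(rng.standard_normal(t.size), h2)[:t.size]
      f, s1 = cmif(np.vstack([y1 + 0.2 * y2, y2 + 0.2 * y1]), fs)
      print(round(float(f[np.argmax(s1)]), 1))               # peak near one natural frequency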

  3. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.

  4. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.

  5. Characterization of an elastic target in a shallow water waveguide by decomposition of the time-reversal operator.

    PubMed

    Philippe, Franck D; Prada, Claire; de Rosny, Julien; Clorennec, Dominique; Minonzio, Jean-Gabriel; Fink, Mathias

    2008-08-01

This paper reports the results of an investigation into extracting the backscattered frequency signature of a target in a waveguide. Retrieving the target signature is difficult because it is blurred by waveguide reflections and modal interference. It is shown that the decomposition of the time-reversal operator method provides a solution to this problem. Using a modal theory, this paper shows that the first singular value associated with a target is proportional to the backscattering form function. It is linked to the waveguide geometry through a factor that weakly depends on frequency as long as the target is far from the boundaries. Using the same approach, the second singular value is shown to be proportional to the second derivative of the angular form function, which is a relevant parameter for target identification. Within this framework the coupling between two targets is considered. Small scale experimental studies are performed in the 3.5 MHz frequency range for 3 mm spheres in a 28 mm deep and 570 mm long waveguide and confirm the theoretical results.

  6. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on the Singular Value Decomposition (SVD) and the improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, the strength of the SVD-based algorithm is that it takes advantage of the image's global information to obtain a background estimate of the infrared image. The dim target is enhanced by subtracting the continually updated background estimate from the original image. Secondly, the KCF algorithm is combined with the Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF technique is adopted to preserve the edges and eliminate the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
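
    The SVD background-estimation idea described above can be reduced to a low-rank subtraction; the sketch below is an assumed minimal variant (the rank parameter and the background update scheme are placeholders, not reproduced from the original algorithm).

      import numpy as np

      def enhance_small_target(frame, rank=3):
          """Estimate a low-rank background via truncated SVD and subtract it.

          frame : 2-D float array holding one infrared image (hypothetical input).
          rank  : number of leading singular components treated as background.
          """
          U, s, Vt = np.linalg.svd(frame, full_matrices=False)
          background = (U[:, :rank] * s[:rank]) @ Vt[:rank]
          residual = frame - background       # the dim target stands out here
          return np.clip(residual, 0, None)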

  7. Application of generalized singular value decomposition to ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Bhuyan, K.; Singh, S.; Bhuyan, P.

    2004-10-01

    The electron density distribution of the low- and mid-latitude ionosphere has been investigated by the computerized tomography technique using a Generalized Singular Value Decomposition (GSVD) based algorithm. Model ionospheric total electron content (TEC) data obtained from the International Reference Ionosphere 2001 and slant relative TEC data measured at a chain of three stations receiving transit satellite transmissions in Alaska, USA are used in this analysis. The issue of optimum efficiency of the GSVD algorithm in the reconstruction of ionospheric structures is being addressed through simulation of the equatorial ionization anomaly (EIA), in addition to its application to investigate complicated ionospheric density irregularities. Results show that the Generalized Cross Validation approach to find the regularization parameter and the corresponding solution gives a very good reconstructed image of the low-latitude ionosphere and the EIA within it. Provided that some minimum norm is fulfilled, the GSVD solution is found to be least affected by considerations, such as pixel size and number of ray paths. The method has also been used to investigate the behaviour of the mid-latitude ionosphere under magnetically quiet and disturbed conditions.

  8. Shortened Mean Transit Time in CT Perfusion With Singular Value Decomposition Analysis in Acute Cerebral Infarction: Quantitative Evaluation and Comparison With Various CT Perfusion Parameters.

    PubMed

    Murayama, Kazuhiro; Katada, Kazuhiro; Hayakawa, Motoharu; Toyama, Hiroshi

We aimed to clarify the cause of shortened mean transit time (MTT) in acute ischemic cerebrovascular disease and examined its relationship with reperfusion. Twenty-three patients with acute ischemic cerebrovascular disease underwent whole-brain computed tomography perfusion (CTP). The maximum MTT (MTTmax), minimum MTT (MTTmin), ratio of minimum to maximum MTT (MTTmin/max), and minimum cerebral blood volume (CBV) (CBVmin) were measured by automatic region of interest analysis. Diffusion-weighted imaging was performed to calculate infarction volume. We compared these CTP parameters between reperfusion and nonreperfusion groups and calculated correlation coefficients between the infarction core volume and the CTP parameters. Significant differences were observed between the reperfusion and nonreperfusion groups (MTTmin/max: P = 0.014; CBVmin ratio: P = 0.038). Regression analysis of the CTP parameters and high-intensity volume on diffusion-weighted images showed negative correlations (CBVmin ratio: r = -0.41; MTTmin/max: r = -0.30; MTTmin ratio: r = -0.27). A region of shortened MTT indicated obstructed blood flow, which was attributed to error in the singular value decomposition method.

  9. Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction

    NASA Astrophysics Data System (ADS)

    Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.

    2002-11-01

    The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) in the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, it is found that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.
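
    Truncated-SVD regularization of an ill-posed linear system, as applied above to the input impulse response, can be written generically as below; the kernel matrix K, data vector y, and truncation index k are placeholders rather than the reflectometry quantities themselves.

      import numpy as np

      def tsvd_solve(K, y, k):
          """Truncated-SVD solution of the ill-posed system K x = y.

          Only the k largest singular values are inverted; components tied to
          small singular values, which amplify noise, are discarded.
          """
          U, s, Vt = np.linalg.svd(K, full_matrices=False)
          coeffs = (U[:, :k].conj().T @ y) / s[:k]
          return Vt[:k].conj().T @ coeffs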

  10. Tensor gauge condition and tensor field decomposition

    NASA Astrophysics Data System (ADS)

    Zhu, Ben-Chao; Chen, Xiang-Song

    2015-10-01

    We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin ≥ 2. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.

  11. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    NASA Astrophysics Data System (ADS)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input in a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  12. Decomposition of a symmetric second-order tensor

    NASA Astrophysics Data System (ADS)

    Heras, José A.

    2018-05-01

    In the three-dimensional space there are different definitions for the dot and cross products of a vector with a second-order tensor. In this paper we show how these products can uniquely be defined for the case of symmetric tensors. We then decompose a symmetric second-order tensor into its ‘dot’ part, which involves the dot product, and the ‘cross’ part, which involves the cross product. For some physical applications, this decomposition can be interpreted as one in which the dot part identifies with the ‘parallel’ part of the tensor and the cross part identifies with the ‘perpendicular’ part. This decomposition of a symmetric second-order tensor may be suitable for undergraduate courses of vector calculus, mechanics and electrodynamics.

  13. A Cartesian parametrization for the numerical analysis of material instability

    DOE PAGES

    Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; ...

    2016-02-25

We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that in general, the Cartesian parametrization is more robust and more numerically efficient than the others.

  14. A Cartesian parametrization for the numerical analysis of material instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.

We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie on a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that in general, the Cartesian parametrization is more robust and more numerically efficient than the others.

  15. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  16. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  17. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝ^n} ‖Ax − b‖_2, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK’s DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
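
    A much-simplified serial sketch of the strongly overdetermined case (m ≫ n) is given below: a Gaussian sketch of A is factored by SVD, a right preconditioner N = V Σ⁻¹ is formed, and SciPy's LSQR solves the well-conditioned preconditioned system. The parallel execution, the underdetermined case, rank deficiency, and the Chebyshev semi-iterative method are all omitted, so this is only an assumed illustration of the idea.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr

      def lsrn_overdetermined(A, b, gamma=2.0, seed=0):
          """Sketch of the LSRN idea for min ||A x - b||_2 with A of shape (m, n), m >> n."""
          rng = np.random.default_rng(seed)
          m, n = A.shape
          s = int(np.ceil(gamma * n))
          G = rng.standard_normal((s, m))              # random normal projection
          _, sigma, Vt = np.linalg.svd(G @ A, full_matrices=False)
          N = Vt.T / sigma                             # right preconditioner, N = V diag(1/sigma)
          AN = LinearOperator((m, n),
                              matvec=lambda y: A @ (N @ y),
                              rmatvec=lambda z: N.T @ (A.T @ z))
          y = lsqr(AN, b)[0]                           # LSQR on the well-conditioned system
          return N @ y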

  18. Prediction of monthly-seasonal precipitation using coupled SVD patterns between soil moisture and subsequent precipitation

    Treesearch

    Yongqiang Liu

    2003-01-01

It was suggested in a recent statistical correlation analysis that predictability of monthly-seasonal precipitation could be improved by using coupled singular value decomposition (SVD) patterns between soil moisture and precipitation instead of their values at individual locations. This study provides predictive evidence for this suggestion by comparing skills of two...

  19. North Pacific warming and intense northwestern U.S. wildfires

    Treesearch

    Yongqiang Liu

    2006-01-01

Tropical Pacific sea surface temperature (SST) anomalies such as La Niña have been important predictors for wildfires in the southeastern and southwestern U.S. This study seeks seasonal predictors for wildfires in the northwestern U.S., a region with the most intense wildfires among various continental U.S. regions. Singular value decomposition and regression...

  20. Implicit-shifted Symmetric QR Singular Value Decomposition of 3x3 Matrices

    DTIC Science & Technology

    2016-04-01


  1. Spatial patterns of soil moisture connected to monthly-seasonal precipitation variability in a monsoon region

    Treesearch

    Yongqiang Liu

    2003-01-01

The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a technique of principal component analysis similar to empirical orthogonal functions (EOF). However, it is applied to two variables simultaneously and is...

  2. Inversion of residual stress profiles from ultrasonic Rayleigh wave dispersion data

    NASA Astrophysics Data System (ADS)

    Mora, P.; Spies, M.

    2018-05-01

We investigate theoretically and with synthetic data the performance of several inversion methods to infer a residual stress state from ultrasonic surface wave dispersion data. We show that, for relevant materials, this particular problem may reveal undesired behaviors in methods that can otherwise be reliably applied to infer other properties. We focus on two methods, one based on a Taylor expansion, and another one based on a piecewise linear expansion regularized by a singular value decomposition. We explain the instabilities of the Taylor-based method by highlighting singularities in the series of coefficients. At the same time, we show that the other method can provide performance that depends only weakly on the material.

  3. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

The structures around bearings are complex, and the working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics, which make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with advantages in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are extracted subsequently to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information; the Gaussian mixture model is utilized to calculate the confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration datasets of bearings demonstrate the effectiveness of the proposed methods.

  4. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.

    PubMed

    Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized for low discriminative power in representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold in new documents and terms in a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods.
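
    For context, the plain-LSI baseline that SVD on clusters builds on can be sketched as follows: documents are projected into a k-dimensional latent space through a truncated SVD of the term-by-document matrix and then compared by cosine similarity. This is a generic illustration with hypothetical inputs, not the SVD-on-clusters procedure itself.

      import numpy as np

      def lsi_similarity(term_doc, k=2):
          """Interdocument cosine similarity in a k-dimensional LSI space.

          term_doc : term-by-document matrix, shape (n_terms, n_docs), hypothetical.
          """
          U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
          docs = (s[:k, None] * Vt[:k]).T              # document coordinates, (n_docs, k)
          norms = np.linalg.norm(docs, axis=1, keepdims=True)
          docs = docs / np.where(norms == 0, 1.0, norms)
          return docs @ docs.T                         # cosine similarity matrix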

  5. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure

    PubMed Central

    Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized for low discriminative power in representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold in new documents and terms in a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods. PMID:27579031

  6. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

Large reflector antennas are widely used in radars, satellite communication, radio astronomy, and so on. The rapid developments in these fields have created demands for better performance and higher surface accuracy. However, low accuracy and low efficiency are the common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far-field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field is derived. Then a linear system is constructed relating the panel adjustment vector to the far-field pattern. Using the method of Singular Value Decomposition (SVD), the adjustment values for all panel adjustors are obtained by solving the linear equations. An experiment is conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test are similar, which shows that the presented method is feasible. Moreover, the discussion of validation shows that the method can be used for many reflector shapes. The proposed research provides instruction for adjusting surface panels efficiently and accurately.

  7. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is the use of model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
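
    The basic POD step described above (collect snapshots, take an SVD, keep the singular vectors that carry most of the snapshot energy) can be sketched as below; the snapshot matrix and energy threshold are hypothetical, and neither DEIM nor the proposed second POD on the system matrix is shown.

      import numpy as np

      def pod_basis(snapshots, energy=0.999):
          """POD projection basis capturing a given fraction of snapshot energy.

          snapshots : array of shape (n_states, n_snapshots) holding states of the
                      full groundwater model at selected time steps (hypothetical).
          """
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          cum = np.cumsum(s**2) / np.sum(s**2)
          r = int(np.searchsorted(cum, energy)) + 1
          return U[:, :r]   # reduced state: h ≈ U_r @ h_r; reduced operator: U_r.T @ A @ U_r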

  8. Factor Analytic Approach to Transitive Text Mining using Medline Descriptors

    NASA Astrophysics Data System (ADS)

    Stegmann, J.; Grohmann, G.

    Matrix decomposition methods were applied to examples of noninteractive literature sets sharing implicit relations. Document-by-term matrices were created from downloaded PubMed literature sets, the terms being the Medical Subject Headings (MeSH descriptors) assigned to the documents. The loadings of the factors derived from singular value or eigenvalue matrix decomposition were sorted according to absolute values and subsequently inspected for positions of terms relevant to the discovery of hidden connections. It was found that only a small number of factors had to be screened to find key terms in close neighbourhood, being separated by a small number of terms only.

  9. A technique for plasma velocity-space cross-correlation

    NASA Astrophysics Data System (ADS)

    Mattingly, Sean; Skiff, Fred

    2018-05-01

    An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.

  10. Scalar/Vector potential formulation for compressible viscous unsteady flows

    NASA Technical Reports Server (NTRS)

    Morino, L.

    1985-01-01

A scalar/vector potential formulation for unsteady viscous compressible flows is presented. The scalar/vector potential formulation is based on the classical Helmholtz decomposition of any vector field into the sum of an irrotational and a solenoidal field. The formulation is derived from fundamental principles of mechanics and thermodynamics. The governing equations for the scalar potential and vector potential are obtained without restrictive assumptions on either the equation of state or the constitutive relations for the stress tensor and the heat flux vector.

  11. Fast higher-order MR image reconstruction using singular-vector separation.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P

    2012-07-01

Magnetic resonance imaging (MRI) conventionally relies on spatially linear gradient fields for image encoding. However, in practice various sources of nonlinear fields can perturb the encoding process and give rise to artifacts unless they are suitably addressed at the reconstruction level. Accounting for field perturbations that are neither linear in space nor constant over time, i.e., dynamic higher-order fields, is particularly challenging. It was previously shown to be feasible with conjugate-gradient iteration. However, so far this approach has been relatively slow due to the need to carry out explicit matrix-vector multiplications in each cycle. In this work, it is proposed to accelerate higher-order reconstruction by expanding the encoding matrix such that fast Fourier transform can be employed for more efficient matrix-vector computation. The underlying principle is to represent the perturbing terms as sums of separable functions of space and time. Compact representations with this property are found by singular-vector analysis of the perturbing matrix. Guidelines for balancing the accuracy and speed of the resulting algorithm are derived by error propagation analysis. The proposed technique is demonstrated for the case of higher-order field perturbations due to eddy currents caused by diffusion weighting. In this example, image reconstruction was accelerated by two orders of magnitude.
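
    The singular-vector separation itself amounts to a truncated SVD of the space-time perturbation matrix, so that the perturbing phase is approximated by a short sum of separable terms, each of which can then be absorbed into FFT-based encoding. A minimal sketch, assuming a hypothetical phase array of shape (n_time, n_space):

      import numpy as np

      def separable_terms(phase, rank):
          """Approximate phase (n_time, n_space) by a sum of `rank` separable terms."""
          U, s, Vt = np.linalg.svd(phase, full_matrices=False)
          time_factors = U[:, :rank] * s[:rank]     # temporal functions
          space_factors = Vt[:rank]                 # spatial functions
          approx = time_factors @ space_factors     # phase ≈ sum_l t_l(t) x_l(r)
          return time_factors, space_factors, approx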

  12. Stochastic species abundance models involving special copulas

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2018-01-01

Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In the first one, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In the second one, a quasi-copula problem arises in a flagged species abundance model. In the third model, we study completely random species abundance models in the hypercube, namely those that are not of product type, have uniform margins, and are singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.

  13. Three dimensional empirical mode decomposition analysis apparatus, method and article manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.

  14. Identification and modification of dominant noise sources in diesel engines

    NASA Astrophysics Data System (ADS)

    Hayward, Michael D.

    Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
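
    The virtual-source step described above can be sketched for a single frequency line: because the input cross-spectral matrix is Hermitian, its SVD coincides with its eigen-decomposition, the singular values play the role of virtual-source power spectral densities, and the squared magnitudes of the singular-vector entries give each virtual source's contribution to each measured input. The matrix Gxx below is a hypothetical measurement.

      import numpy as np

      def virtual_sources(Gxx):
          """Virtual-source PSDs and their contributions to each input channel.

          Gxx : Hermitian input cross-spectral matrix at one frequency, (n_in, n_in).
          Returns the singular values (virtual-source PSDs) and a matrix whose entry
          (i, j) is the fraction of input i's auto-PSD explained by virtual source j.
          """
          U, s, _ = np.linalg.svd(Gxx)
          contrib = (np.abs(U) ** 2) * s                       # |U_ij|^2 * s_j
          contrib /= contrib.sum(axis=1, keepdims=True)        # row-normalize to fractions
          return s, contrib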

  15. Regularity results for the minimum time function with Hörmander vector fields

    NASA Astrophysics Data System (ADS)

    Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa

    2018-03-01

    In a bounded domain of Rn with boundary given by a smooth (n - 1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X1 , … ,XN } subject to Hörmander's bracket generating condition. We investigate the regularity of the viscosity solution T of such problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses point-wise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X1 , … ,XN } is a symplectic manifold. We apply our results to several examples.

  16. A pipeline VLSI design of fast singular value decomposition processor for real-time EEG system based on on-line recursive independent component analysis.

    PubMed

    Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi

    2013-01-01

This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sampling rate of the raw EEG data is 128 Hz. The core size of the SVD processor is 580 × 580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution when processing the 8-channel EEG system.

  17. Active listening room compensation for massive multichannel sound reproduction systems using wave-domain adaptive filtering.

    PubMed

    Spors, Sascha; Buchner, Herbert; Rabenstein, Rudolf; Herbordt, Wolfgang

    2007-07-01

    The acoustic theory for multichannel sound reproduction systems usually assumes free-field conditions for the listening environment. However, their performance in real-world listening environments may be impaired by reflections at the walls. This impairment can be reduced by suitable compensation measures. For systems with many channels, active compensation is an option, since the compensating waves can be created by the reproduction loudspeakers. Due to the time-varying nature of room acoustics, the compensation signals have to be determined by an adaptive system. The problems associated with the successful operation of multichannel adaptive systems are addressed in this contribution. First, a method for decoupling the adaptation problem is introduced. It is based on a generalized singular value decomposition and is called eigenspace adaptive filtering. Unfortunately, it cannot be implemented in its pure form, since the continuous adaptation of the generalized singular value decomposition matrices to the variable room acoustics is numerically very demanding. However, a combination of this mathematical technique with the physical description of wave propagation yields a realizable multichannel adaptation method with good decoupling properties. It is called wave domain adaptive filtering and is discussed here in the context of wave field synthesis.

  18. Retrieval of Enterobacteriaceae drug targets using singular value decomposition.

    PubMed

    Silvério-Machado, Rita; Couto, Bráulio R G M; Dos Santos, Marcos A

    2015-04-15

The identification of potential drug target proteins in bacteria is important in pharmaceutical research for the development of new antibiotics to combat bacterial agents that cause diseases. A new model that combines the singular value decomposition (SVD) technique with biological filters composed of a set of protein properties associated with bacterial drug targets and similarity to protein-coding essential genes of Escherichia coli (strain K12) has been created to predict potential antibiotic drug targets in the Enterobacteriaceae family. This model identified 99 potential drug target proteins in the studied family, which exhibit eight different functions and are protein-coding essential genes or similar to protein-coding essential genes of E. coli (strain K12), indicating that the disruption of the activities of these proteins is critical for cells. Proteins from bacteria with described drug resistance were found among the retrieved candidates. These candidates have no similarity to the human proteome, therefore exhibiting the advantage of causing no adverse effects or at least no known adverse effects on humans.

  19. A NEW GUI FOR GLOBAL ORBIT CORRECTION AT THE ALS USING MATLAB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pachikara, J.; Portmann, G.

    2007-01-01

Orbit correction is a vital procedure at particle accelerators around the world. The orbit correction routine currently used at the Advanced Light Source (ALS) is a bit cumbersome, and a new Graphical User Interface (GUI) has been developed using MATLAB. The correction algorithm uses a singular value decomposition method for calculating the required corrector magnet changes for correcting the orbit. The application has been successfully tested at the ALS. The GUI display provided important information regarding the orbit, including the orbit errors before and after correction, the amount of corrector magnet strength change, and the standard deviation of the orbit error with respect to the number of singular values used. The use of more singular values resulted in better correction of the orbit error but at the expense of enormous corrector magnet strength changes. The results showed an inverse relationship between the peak-to-peak values of the orbit error and the number of singular values used. The GUI interface helps the ALS physicists and operators understand the specific behavior of the orbit. The application is convenient to use and is a substantial improvement over the previous orbit correction routine in terms of user friendliness and compactness.
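
    The trade-off reported above (more singular values flatten the orbit further but demand larger corrector strengths) follows directly from a truncated-SVD pseudo-inverse of the orbit response matrix; a minimal sketch with hypothetical response matrix and BPM readings is given below.

      import numpy as np

      def orbit_correction(R, orbit_error, n_sv):
          """Corrector strength changes from a truncated-SVD pseudo-inverse.

          R           : orbit response matrix, shape (n_bpm, n_corrector)
          orbit_error : measured orbit at the BPMs, shape (n_bpm,)
          n_sv        : number of singular values to keep; more values correct the
                        orbit better but require larger corrector magnet changes.
          """
          U, s, Vt = np.linalg.svd(R, full_matrices=False)
          delta = Vt[:n_sv].T @ ((U[:, :n_sv].T @ orbit_error) / s[:n_sv])
          return -delta          # apply with opposite sign to cancel the measured error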

  20. FAST TRACK COMMUNICATION: Singularity theorems based on trapped submanifolds of arbitrary co-dimension

    NASA Astrophysics Data System (ADS)

    Galloway, Gregory J.; Senovilla, José M. M.

    2010-08-01

    Standard singularity theorems are proven in Lorentzian manifolds of arbitrary dimension n if they contain closed trapped submanifolds of arbitrary co-dimension. By using the mean curvature vector to characterize trapped submanifolds, a unification of the several possibilities for the boundary conditions in the traditional theorems and their generalization to an arbitrary co-dimension is achieved. The classical convergence conditions must be replaced by a condition on sectional curvatures, or tidal forces, which reduces to the former in the cases of the co-dimension 1, 2 or n.

  1. Singularity-free spinors in gravity with propagating torsion

    NASA Astrophysics Data System (ADS)

    Fabbri, Luca

    2017-12-01

    We consider the most general renormalizable theory of propagating torsion in Einstein gravity for the Dirac matter distribution and we demonstrate that in this case, torsion is a massive axial-vector field whose coupling to the spinor gives rise to conditions in terms of which gravitational singularities are not bound to form; we discuss how our results improve those that are presented in the existing literature, and that no further improvement can be achieved unless one is ready to re-evaluate some considerations on the renormalizability of the theory.

  2. Embedding Dimension Selection for Adaptive Singular Spectrum Analysis of EEG Signal.

    PubMed

    Xu, Shanzhi; Hu, Hai; Ji, Linhong; Wang, Peng

    2018-02-26

The recorded electroencephalography (EEG) signal is often contaminated with different kinds of artifacts and noise. Singular spectrum analysis (SSA) is a powerful tool for extracting the brain rhythm from a noisy EEG signal. By analyzing the frequency characteristics of the reconstructed component (RC) and the change rate in the trace of the Toeplitz matrix, it is demonstrated that the embedding dimension is related to the frequency bandwidth of each reconstructed component, consistent with the component mixing in the singular value decomposition step. A method for selecting the embedding dimension is thereby proposed and verified on a simulated EEG signal based on the Markov Process Amplitude (MPA) EEG model. Real EEG signals are also collected from experimental subjects under both eyes-open and eyes-closed conditions. The experimental results show that, based on the embedding dimension selection method, the alpha rhythm can be extracted from the real EEG signal by the adaptive SSA, which can be effectively utilized to distinguish between the eyes-open and eyes-closed states.
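
    For reference, the basic SSA pipeline that the embedding-dimension choice feeds into (embedding into a trajectory matrix, SVD, grouping of components, diagonal averaging) can be sketched as below; the selection rule proposed in the paper is not reproduced, and the window length L and component indices are left as user inputs.

      import numpy as np

      def ssa_reconstruct(x, L, components):
          """Reconstruct the part of signal x carried by the chosen SVD components.

          x          : 1-D signal (for example one EEG channel), hypothetical input.
          L          : embedding dimension (window length).
          components : indices of singular components to keep, e.g. [0, 1].
          """
          N = len(x)
          K = N - L + 1
          X = np.column_stack([x[i:i + L] for i in range(K)])     # trajectory matrix, L x K
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          Xr = (U[:, components] * s[components]) @ Vt[components]
          rec = np.zeros(N)                                       # diagonal (Hankel) averaging
          counts = np.zeros(N)
          for j in range(K):
              rec[j:j + L] += Xr[:, j]
              counts[j:j + L] += 1
          return rec / counts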

  3. Singular value decomposition utilizing parallel algorithms on graphical processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, Cx = (1/K) Σ_k X(k)X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining V, Σ, and U such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^HA, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step approach which bidiagonalizes the matrix using Householder transformations, and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to that implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to UΣ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square roots of the diagonal elements of A^H A and, once Σ is known, U can be found by column scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against themselves and other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU in terms of the matrix size and number of concurrent SVDs to be calculated.
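
    The one-sided Jacobi scheme summarized above can be sketched serially in NumPy (no GPU, no CUBLAS): pairs of columns are rotated until they are mutually orthogonal, the singular values are the final column norms, U follows by column scaling, and V accumulates the rotations. This is an assumed textbook variant rather than the authors' CUDA Fortran implementation, and the singular values are returned unsorted.

      import numpy as np

      def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
          """One-sided Jacobi SVD: returns U, sigma, V with A ≈ U @ np.diag(sigma) @ V.T."""
          A = np.array(A, dtype=float)
          m, n = A.shape
          V = np.eye(n)
          for _ in range(max_sweeps):
              converged = True
              for p in range(n - 1):
                  for q in range(p + 1, n):
                      app = A[:, p] @ A[:, p]
                      aqq = A[:, q] @ A[:, q]
                      apq = A[:, p] @ A[:, q]
                      if abs(apq) <= tol * np.sqrt(app * aqq):
                          continue                    # columns p and q already orthogonal
                      converged = False
                      tau = (aqq - app) / (2.0 * apq)
                      t = 1.0 if tau == 0 else np.sign(tau) / (abs(tau) + np.sqrt(1 + tau * tau))
                      c = 1.0 / np.sqrt(1 + t * t)
                      s = c * t
                      G = np.array([[c, s], [-s, c]]) # plane rotation acting on columns p, q
                      A[:, [p, q]] = A[:, [p, q]] @ G
                      V[:, [p, q]] = V[:, [p, q]] @ G
              if converged:
                  break
          sigma = np.linalg.norm(A, axis=0)           # singular values = final column norms
          U = A / np.where(sigma == 0, 1.0, sigma)    # column scaling gives U
          return U, sigma, V

      # usage: A ≈ U diag(sigma) V^T
      A = np.random.randn(6, 4)
      U, sigma, V = one_sided_jacobi_svd(A)
      print(np.allclose(U * sigma @ V.T, A))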

  4. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  5. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

This article describes the application of an algorithm that uses decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. The use of decomposition reduces the volume of calculations, in particular by opening up the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a known data set for the binary classification problem.

  6. Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale

    2012-10-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  7. Tomographic reconstruction of tokamak plasma light emission from single image using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.

    2012-01-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  8. Use of the Morlet mother wavelet in the frequency-scale domain decomposition technique for the modal identification of ambient vibration responses

    NASA Astrophysics Data System (ADS)

    Le, Thien-Phu

    2017-10-01

    The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.

  9. Spreading Sequence System for Full Connectivity Relay Network

    NASA Technical Reports Server (NTRS)

    Kwon, Hyuck M. (Inventor); Pham, Khanh D. (Inventor); Yang, Jie (Inventor)

    2018-01-01

    Uplink and downlink fully connected relay network systems using pseudo-noise spreading and despreading sequences chosen to maximize the signal-to-interference-plus-noise ratio. The relay network systems comprise one or more transmitting units, relays, and receiving units connected via a communication network. The transmitting units, relays, and receiving units each may include a computer for performing the methods and steps described herein and transceivers for transmitting and/or receiving signals. The computer encodes and/or decodes communication signals via optimum adaptive PN sequences found by employing Cholesky decompositions and singular value decompositions (SVD). The PN sequences employ channel state information (CSI) to compute the optimal sequences more effectively and more securely.

  10. Decomposition of algebraic sets and applications to weak centers of cubic systems

    NASA Astrophysics Data System (ADS)

    Chen, Xingwu; Zhang, Weinian

    2009-10-01

    There are many methods such as Gröbner basis, characteristic set and resultant, in computing an algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulted from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.

  11. Multiphase wavetrains, singular wave interactions and the emergence of the Korteweg–de Vries equation

    PubMed Central

    Bridges, Thomas J.

    2016-01-01

    Multiphase wavetrains are multiperiodic travelling waves with a set of distinct wavenumbers and distinct frequencies. In conservative systems, such families are associated with the conservation of wave action or other conservation law. At generic points (where the Jacobian of the wave action flux is non-degenerate), modulation of the wavetrain leads to the dispersionless multiphase conservation of wave action. The main result of this paper is that modulation of the multiphase wavetrain, when the Jacobian of the wave action flux vector is singular, morphs the vector-valued conservation law into the scalar Korteweg–de Vries (KdV) equation. The coefficients in the emergent KdV equation have a geometrical interpretation in terms of projection of the vector components of the conservation law. The theory herein is restricted to two phases to simplify presentation, with extensions to any finite dimension discussed in the concluding remarks. Two applications of the theory are presented: a coupled nonlinear Schrödinger equation and two-layer shallow-water hydrodynamics with a free surface. Both have two-phase solutions where criticality and the properties of the emergent KdV equation can be determined analytically. PMID:28119546

  12. Changing image of correlation optics: introduction.

    PubMed

    Angelsky, Oleg V; Desyatnikov, Anton S; Gbur, Gregory J; Hanson, Steen G; Lee, Tim; Miyamoto, Yoko; Schneckenburger, Herbert; Wyant, James C

    2016-04-20

    This feature issue of Applied Optics contains a series of selected papers reflecting recent progress of correlation optics and illustrating current trends in vector singular optics, internal energy flows at light fields, optical science of materials, and new biomedical applications of lasers.

  13. Application of Bred Vectors To Data Assimilation

    NASA Astrophysics Data System (ADS)

    Corazza, M.; Kalnay, E.; Patil, Dj

    We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al., 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50 x k matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of v(i). We define the bred vector dimension as BVDIM = [Sum_i s(i)]^2 / Sum_i s(i)^2 (a short numerical sketch of this statistic follows the references at the end of this record). For example, if 4 out of the 5 vectors lie along v(1), and one lies along v(2), the BV-dimension would be BVDIM[sqrt(4), 1, 0, 0, 0] = 1.8, less than 2 because one direction is more dominant than the other in representing the original data. The results (Patil et al., 2001) show that there are large regions where the bred vectors span a subspace of substantially lower dimension than that of the full space. These low-dimensionality regions are dominant in the baroclinic extratropics, typically have a lifetime of 3-7 days, have a well-defined horizontal and vertical structure that spans most of the atmosphere, and tend to move eastward. New results with a large number of ensemble members confirm these results and indicate that the low-dimensionality regions are quite robust, and depend only on the verification time (i.e., the underlying flow). Corazza et al. (2001) have performed experiments with a data assimilation system based on a quasi-geostrophic model and simulated observations (Morss, 1999; Hamill et al., 2000). A 3D-variational data assimilation scheme for a quasi-geostrophic channel model is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model, and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. It is found that after 3-5 days the bred vectors develop well organized structures which are very similar for the two different norms considered in this paper (potential vorticity norm and streamfunction norm).
The results show that the bred vectors do indeed represent well the characteristics of the data assimilation forecast errors, and that the subspace of bred vectors contains most of the forecast error, except in areas where the forecast errors are small. For example, the angle between the 6 hr forecast error and the subspace spanned by 10 bred vectors is less than 10° over 90% of the domain, indicating a pattern correlation of more than 98.5% between the forecast error and its projection onto the bred vector subspace. The presence of low-dimensional regions in the perturbations of the basic flow has important implications for data assimilation. At any given time, there is a difference between the true atmospheric state and the model forecast. Assuming that model errors are not the dominant source of errors, in a region of low BV-dimensionality the difference between the true state and the forecast should lie substantially in the low dimensional unstable subspace of the few bred vectors that contribute most strongly to the low BV-dimension. This information should yield a substantial improvement in the forecast: the data assimilation algorithm should correct the model state by moving it closer to the observations along the unstable subspace, since this is where the true state most likely lies. Preliminary experiments have been conducted with the quasi-geostrophic data assimilation system testing whether it is possible to add "errors of the day" based on bred vectors to the standard (constant) 3D-Var background error covariance in order to capture these important errors. The results are extremely encouraging, indicating a significant reduction (about 40%) in the analysis errors at a very low computational cost. References: Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. A. Yorke, 2001: Use of the breeding technique to estimate the structure of the analysis "errors of the day". Submitted to Nonlinear Processes in Geophysics. Hamill, T. M., Snyder, C., and Morss, R. E., 2000: A Comparison of Probabilistic Forecasts from Bred, Singular-Vector and Perturbed Observation Ensembles, Mon. Wea. Rev., 128, 1835-1851. Kalnay, E., and Z. Toth, 1994: Removing growing errors in the analysis cycle. Preprints of the Tenth Conference on Numerical Weather Prediction, Amer. Meteor. Soc., 1994, 212-215. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. PhD thesis, Massachusetts Institute of Technology, 225 pp. Patil, D. J. S., B. R. Hunt, E. Kalnay, J. A. Yorke, and E. Ott, 2001: Local Low Dimensionality of Atmospheric Dynamics. Phys. Rev. Lett., 86, 5878.
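
    The BV-dimension quoted in this record is straightforward to compute from the singular values of the local bred-vector matrix. The sketch below is a minimal NumPy illustration; the 50 x 5 matrix size and the worked example (singular values sqrt(4), 1, 0, 0, 0) follow the description above, everything else is synthetic.

```python
# Minimal sketch of the BV-dimension statistic described above,
# BVDIM = (sum_i s_i)^2 / sum_i s_i^2, with s_i the singular values of the
# matrix of local bred-vector columns.  Sizes and data are synthetic.
import numpy as np

def bv_dimension(M):
    """M: (n, k) matrix whose k columns are local bred vectors."""
    s = np.linalg.svd(M, compute_uv=False)
    return s.sum() ** 2 / (s ** 2).sum()

# Worked example from the abstract: singular values (sqrt(4), 1, 0, 0, 0) -> 1.8.
s = np.array([np.sqrt(4.0), 1.0, 0.0, 0.0, 0.0])
print(s.sum() ** 2 / (s ** 2).sum())           # 1.8

# Hypothetical 50 x 5 local bred-vector matrix (5 ensemble members).
rng = np.random.default_rng(1)
print(bv_dimension(rng.standard_normal((50, 5))))   # near 5 for unrelated random columns
```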

  14. Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials

    DTIC Science & Technology

    2017-04-06

    The programs described are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.

  15. Material decomposition in an arbitrary number of dimensions using noise compensating projection

    NASA Astrophysics Data System (ADS)

    O'Donnell, Thomas; Halaweish, Ahmed; Cormode, David; Cheheltani, Rabee; Fayad, Zahi A.; Mani, Venkatesh

    2017-03-01

    Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if - due to noise - a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either throwing away those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small. Noise may significantly limit the number of those pixels to be included within. Therefore, projection onto the boundary becomes an important option. But, projection in higher than 3 dimensional space is not possible with standard vector algebra: the cross-product is not defined. Methods: We describe a technique which employs Clifford Algebra to perform projection in an arbitrary number of dimensions. Clifford Algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division. Thereby, vectors may be operated on like scalars forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. Comparison of the accuracy of different threshold combinations versus ground truth are presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds using Clifford Algebra projection to mitigate noise.

  16. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine.

    PubMed

    Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui

    2017-12-01

    Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is highly required to reduce the diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely the malignant Basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are further classified using quadratic support vector machine (Q-SVM). The proposed system has achieved outstanding performance of 100% accuracy, sensitivity and specificity compared to other support vector machine procedures as well as with different extracted features. Basal Cell Carcinoma is effectively classified using Q-SVM with the proposed combined features.

  17. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
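
    The sketch below illustrates, on a small synthetic symmetric positive definite matrix, the general deflation idea described above: the contribution of the deflated eigenpairs to tr(A^{-1}) is computed exactly and only the remainder is estimated stochastically. It is not the paper's Lattice QCD setup, and a dense explicit inverse stands in for the iterative solves a large sparse problem would require.

```python
# Synthetic sketch of Hutchinson trace estimation of tr(A^{-1}) with deflation
# of the k smallest eigenpairs: their contribution is computed exactly and only
# the deflated remainder is estimated stochastically.
import numpy as np

rng = np.random.default_rng(0)
n, k, n_samples = 200, 10, 200

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([np.linspace(1e-3, 1e-2, k), np.linspace(0.5, 10.0, n - k)])
A = (Q * lam) @ Q.T                            # SPD with a few tiny eigenvalues
exact = np.sum(1.0 / lam)

Ainv = np.linalg.inv(A)                        # stand-in for an iterative linear solve
w, V = np.linalg.eigh(A)
w_k, V_k = w[:k], V[:, :k]                     # deflation space: k smallest eigenpairs
deflated_exact = np.sum(1.0 / w_k)             # its exact contribution to tr(A^{-1})

plain, deflated = [], []
for _ in range(n_samples):
    z = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe vector
    y = Ainv @ z
    plain.append(z @ y)                                        # plain Hutchinson sample
    deflated.append(z @ y - (z @ V_k) ** 2 @ (1.0 / w_k))      # remove the deflated part

print(f"exact     : {exact:.1f}")
print(f"plain MC  : {np.mean(plain):.1f} (sample std {np.std(plain):.1f})")
print(f"deflated  : {deflated_exact + np.mean(deflated):.1f} (sample std {np.std(deflated):.1f})")
```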

  18. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.

  19. Gibbsian Stationary Non-equilibrium States

    NASA Astrophysics Data System (ADS)

    De Carlo, Leonardo; Gabrielli, Davide

    2017-09-01

    We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular we discuss two different discrete geometric constructions. We apply both of them to determine non reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence free flows on the transition graph. Since divergence free flows are characterized by cyclic decompositions we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translational covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed in a gradient part, a circulation term and an harmonic component. All the three components are associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to an harmonic discrete vector field and we use this decomposition to construct models having a fixed invariant measure.

  20. Generation and dynamics of optical beams with polarization singularities.

    PubMed

    Cardano, Filippo; Karimi, Ebrahim; Marrucci, Lorenzo; de Lisio, Corrado; Santamato, Enrico

    2013-04-08

    We present a convenient method to generate vector beams of light having polarization singularities on their axis, via partial spin-to-orbital angular momentum conversion in a suitably patterned liquid crystal cell. The resulting polarization patterns exhibit a C-point on the beam axis and an L-line loop around it, and may have different geometrical structures such as "lemon", "star", and "spiral". Our generation method allows us to control the radius of L-line loop around the central C-point. Moreover, we investigate the free-air propagation of these fields across a Rayleigh range.

  1. Compacted dimensions and singular plasmonic surfaces

    NASA Astrophysics Data System (ADS)

    Pendry, J. B.; Huidobro, Paloma Arroyo; Luo, Yu; Galiffi, Emanuele

    2017-11-01

    In advanced field theories, there can be more than four dimensions to space, the excess dimensions described as compacted and unobservable on everyday length scales. We report a simple model, unconnected to field theory, for a compacted dimension realized in a metallic metasurface periodically structured in the form of a grating comprising a series of singularities. An extra dimension of the grating is hidden, and the surface plasmon excitations, though localized at the surface, are characterized by three wave vectors rather than the two of a typical two-dimensional metal grating. We propose an experimental realization in a doped graphene layer.

  2. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode- n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  3. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series coming from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with the Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
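
    As a rough illustration of the Hankel-matrix SVD split into low- and high-frequency components, the sketch below performs a single level of such a decomposition on a synthetic series; the two-row Hankel window and the rank-one split are illustrative choices, not necessarily the exact MSVD configuration of the paper.

```python
# Single-level sketch of a Hankel-SVD split into low- and high-frequency
# components (the cited MSVD applies such a decomposition in a multilevel
# fashion; the two-row Hankel window here is a compact illustrative choice).
import numpy as np

rng = np.random.default_rng(0)
n = 300
smooth = np.linspace(0, 3, n) + np.sin(2 * np.pi * np.arange(n) / 52)
x = smooth + 0.4 * rng.standard_normal(n)         # weekly-like series plus noise

H = np.vstack([x[:-1], x[1:]])                    # 2 x (n-1) Hankel (trajectory) matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)
H1 = s[0] * np.outer(U[:, 0], Vt[0])              # dominant (low-frequency) component

# Anti-diagonal averaging maps the rank-1 matrix back to a series.
low = np.empty(n)
low[0], low[-1] = H1[0, 0], H1[1, -1]
low[1:-1] = 0.5 * (H1[0, 1:] + H1[1, :-1])
high = x - low                                    # residual: high-frequency component
print(np.round(low[:5], 2), np.round(high[:5], 2))
```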

  4. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain

    PubMed Central

    Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series coming from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with the Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT. PMID:28261267

  5. Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector

    NASA Astrophysics Data System (ADS)

    Garfinkle, David; Glass, E. N.

    2013-03-01

    Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.

  6. Dynamics in the Decompositions Approach to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Harding, John

    2017-12-01

    In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.

  7. Primer Vector Optimization: Survey of Theory, new Analysis and Applications

    NASA Astrophysics Data System (ADS)

    Guzman

    This paper presents a preliminary study in developing a set of optimization tools for orbit rendezvous, transfer and station keeping. This work is part of a large scale effort underway at NASA Goddard Space Flight Center and a.i. solutions, Inc. to build generic methods which will enable missions with tight fuel budgets. Since no single optimization technique can solve all existing problems efficiently, a library of tools from which the user can pick the method most suited for the particular mission is envisioned. The first trajectory optimization technique explored is Lawden's primer vector theory [Ref. 1]. Primer vector theory can be considered a byproduct of applying Calculus of Variations (COV) techniques to the problem of minimizing the fuel usage of impulsive trajectories. For an n-impulse trajectory, it involves the solution of n-1 two-point boundary value problems. In this paper, we look at some of the different formulations of the primer vector (dependent on the frame employed and on the force model). Also, the applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. Specifically, since COV is based on "small variations", singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ "small variations" [Refs. 2, 3]. For example, singularities in the (2-body problem) variational equations along elliptic arcs occur when [Ref. 4]: 1) the difference between the initial and final times is a multiple of the reference orbit period; 2) the difference between the initial and final true anomalies is given by kπ, for k = 0, 1, 2, 3, ... (note that this covers the ...); or 3) the time of flight is a minimum for the given difference in true anomaly. For the N-body problem, the situation is more complex and is still under investigation. Several examples, such as the initialization of an orbit (ascent trajectory) and rotation of the line of apsides, are utilized as test cases. Recommendations, future work, and the possible addition of other optimization techniques are also discussed. References: [1] Lawden, D.F., Optimal Trajectories for Space Navigation, Butterworths, London, 1963. [2] Wilson, R.S., Howell, K.C., and Lo, M., "Optimization of Insertion Cost for Transfer Trajectories to Libration Point Orbits", AIAA/AAS Astrodynamics Specialist Conference, AAS 99-041, Girdwood, Alaska, August 16-19, 1999. [3] Goodson, T., "Monte-Carlo Maneuver Analysis for the Microwave Anisotropy Probe", AAS/AIAA Astrodynamics Specialist Conference, AAS 01-331, Quebec City, Canada, July 30 - August 2, 2001. [4] Stern, R.G., "Singularities in the Analytic Solution of the Linearized Variational Equations of Elliptical Motion", Report RE-8, May 1964, Experimental Astronomy Lab., Massachusetts Institute of Technology, Cambridge, Massachusetts.

  8. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

    Wireless devices can be identified by the fingerprint extracted from the signal transmitted, which is useful in wireless communication security and other fields. This paper presents a method that extracts fingerprint based on phase noise of signal and multiple level wavelet decomposition. The phase of signal will be extracted first and then decomposed by multiple level wavelet decomposition. The statistic value of each wavelet coefficient vector is utilized for constructing fingerprint. Besides, the relationship between wavelet decomposition level and recognition accuracy is simulated. And advertised decomposition level is revealed as well. Compared with previous methods, our method is simpler and the accuracy of recognition remains high when Signal Noise Ratio (SNR) is low.

  9. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRF's) of mechanical structures. The decomposition of FRF's is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and the sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRF's. An analytical example is presented along with experimental case studies taken from the machine tool industry.

  10. A Comparison of Nonlinear Filters for Orbit Determination and Estimation

    DTIC Science & Technology

    1986-06-01

    Command uses a nonlinear least squares filter for element set maintenance for all objects orbiting the Earth (3). These objects, including active... initial state vector is the singularly averaged classical orbital element set provided by SPACECOM/DOA. The state vector in this research consists of... The Air Force Space Command is responsible for maintaining current orbital element sets for about

  11. Automated Change Detection for Synthetic Aperture Sonar

    DTIC Science & Technology

    2014-01-01

    channels, respectively. The canonical coordinates of x and y are defined as u = F^H R_xx^{-1/2} x and v = G^H R_yy^{-1/2} y, where F and G are the mapping matrices...containing the left and right singular vectors of the coherence matrix C, respectively. The canonical coordinate vectors u and v share the diagonal cross...feature set. The coherent change information between canonical coordinates v and u can be calculated using the residual, v − Ku, owing to the fact that
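
    The record above is fragmentary, so the sketch below reconstructs the usual canonical-coordinate computation it alludes to (whiten x and y, SVD the coherence matrix, map the data through the singular vectors) on synthetic data; dimensions, names, and the use of zero-mean second-moment estimates are assumptions of this illustration, not details taken from the report.

```python
# Self-contained sketch of canonical coordinates via the SVD of the coherence
# matrix (synthetic data; dimensions and names are illustrative assumptions).
import numpy as np

def inv_sqrt(R):
    w, V = np.linalg.eigh(R)
    return (V / np.sqrt(w)) @ V.T              # R^{-1/2} for symmetric positive definite R

rng = np.random.default_rng(0)
n, p, q = 2000, 4, 4
z = rng.standard_normal((n, 3))                # shared latent structure
x = z @ rng.standard_normal((3, p)) + 0.5 * rng.standard_normal((n, p))
y = z @ rng.standard_normal((3, q)) + 0.5 * rng.standard_normal((n, q))

Rxx, Ryy, Rxy = x.T @ x / n, y.T @ y / n, x.T @ y / n     # zero-mean second moments
C = inv_sqrt(Rxx) @ Rxy @ inv_sqrt(Ryy)                   # coherence matrix
F, ccor, Gt = np.linalg.svd(C)                            # ccor: canonical correlations
G = Gt.T

u = x @ inv_sqrt(Rxx) @ F                      # canonical coordinates of x
v = y @ inv_sqrt(Ryy) @ G                      # canonical coordinates of y
print(np.round(ccor, 3))
print(np.round(u.T @ v / n, 3))                # ~diag(ccor): diagonal cross-covariance
```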

  12. X-ray edge singularity in resonant inelastic x-ray scattering (RIXS)

    NASA Astrophysics Data System (ADS)

    Markiewicz, Robert; Rehr, John; Bansil, Arun

    2013-03-01

    We develop a lattice model based on the theory of Mahan, Nozières, and de Dominicis for x-ray absorption to explore the effect of the core hole on the RIXS cross section. The dominant part of the spectrum can be described in terms of the dynamic structure function S(q, ω) dressed by matrix element effects, but there is also a weak background associated with multi-electron-hole pair excitations. The model reproduces the decomposition of the RIXS spectrum into well- and poorly-screened components. An edge singularity arises at the threshold of both components. Fairly large lattice sizes are required to describe the continuum limit. Supported by DOE Grant DE-FG02-07ER46352 and facilitated by the DOE CMCSN, under grant number DE-SC0007091.

  13. An eigensystem realization algorithm using data correlations (ERA/DC) for modal parameter identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Cooper, J. E.; Wright, J. R.

    1987-01-01

    A modification to the Eigensystem Realization Algorithm (ERA) for modal parameter identification is presented in this paper. The ERA minimum order realization approach using singular value decomposition is combined with the philosophy of the Correlation Fit method in state space form such that response data correlations rather than actual response values are used for modal parameter identification. This new method, the ERA using data correlations (ERA/DC), reduces bias errors due to noise corruption significantly without the need for model overspecification. This method is tested using simulated five-degree-of-freedom system responses corrupted by measurement noise. It is found for this case that, when model overspecification is permitted and a minimum order solution obtained via singular value truncation, the results from the two methods are of similar quality.

  14. Vector and axial-vector decomposition of Einstein's gravitational action

    NASA Astrophysics Data System (ADS)

    Soh, Kwang S.

    1991-08-01

    Vector and axial-vector gravitational fields are introduced to express the Einstein action in the manner of electromagnetism. Their conformal scaling properties are examined, and the resemblance between the general coordinate and electromagnetic gauge transformation is elucidated. The chiral formulation of the gravitational action is constructed. I am deeply grateful to Professor S. Hawking, and Professor G. Lloyd for warm hospitality at DAMTP, and Darwin College, University of Cambridge, respectively. I also appreciate much help received from Dr. Q.-H. Park.

  15. Network Monitoring Traffic Compression Using Singular Value Decomposition

    DTIC Science & Technology

    2014-03-27

    Shootouts." Workshop on Intrusion Detection and Network Monitoring. 1999. [12] Goodall , John R. "Visualization is better! a comparative evaluation...34 Visualization for Cyber Security, 2009. VizSec 2009. 6th International Workshop on IEEE, 2009. [13] Goodall , John R., and Mark Sowul. "VIAssist...Viruses and Log Visualization.” In Australian Digital Forensics Conference. Paper 54, 2008. [30] Tesone, Daniel R., and John R. Goodall . "Balancing

  16. Oxygen Measurements in Liposome Encapsulated Hemoglobin

    NASA Astrophysics Data System (ADS)

    Phiri, Joshua Benjamin

    Liposome encapsulated hemoglobins (LEH's) are of current interest as blood substitutes. An analytical methodology for rapid non-invasive measurements of oxygen in artificial oxygen carriers is examined. High resolution optical absorption spectra are calculated by means of a one dimensional diffusion approximation. The encapsulated hemoglobin is prepared from fresh defibrinated bovine blood. Liposomes are prepared from hydrogenated soy phosphatidylcholine (HSPC), cholesterol and dicetylphosphate using a bath sonication method. An integrating sphere spectrophotometer is employed for diffuse optics measurements. Data are collected using an automated data acquisition system employing lock-in amplifiers. The concentrations of hemoglobin derivatives are evaluated from the corresponding extinction coefficients using a numerical technique of singular value decomposition, and verification of the results is done using Monte Carlo simulations. In situ measurements are required for the determination of hemoglobin derivatives because most encapsulation methods invariably lead to the formation of methemoglobin, a nonfunctional form of hemoglobin. The methods employed in this work lead to high resolution absorption spectra of oxyhemoglobin and other derivatives in red blood cells and liposome encapsulated hemoglobin (LEH). The analysis using the singular value decomposition method offers a quantitative means of calculating the fractions of oxyhemoglobin and other hemoglobin derivatives in LEH samples. The analytical methods developed in this work will become even more useful when production of LEH as a blood substitute is scaled up to large volumes.
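
    The singular value decomposition step mentioned above amounts to a linear least-squares unmixing of the measured spectrum against the extinction coefficients of the hemoglobin derivatives. The sketch below illustrates that step with synthetic Gaussian "extinction spectra" (placeholders, not literature values) and NumPy's SVD-based pseudoinverse.

```python
# Illustrative sketch of estimating hemoglobin-derivative fractions from an
# absorption spectrum by least squares via the SVD (numpy.linalg.pinv).
# The extinction-coefficient curves are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(500, 700, 101)          # nm, illustrative grid

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Columns: synthetic spectra standing in for oxy-, deoxy-, and methemoglobin.
E = np.column_stack([gaussian(542, 15) + gaussian(577, 12),
                     gaussian(555, 20),
                     gaussian(630, 18)])

c_true = np.array([0.70, 0.25, 0.05])             # "true" fractions
a = E @ c_true + 0.01 * rng.standard_normal(len(wavelengths))   # noisy spectrum

c_hat = np.linalg.pinv(E) @ a                     # SVD-based least-squares solution
print(np.round(c_hat / c_hat.sum(), 3))           # recovered fractions
```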

  17. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of previous algorithms are not high enough. In this work, we propose a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods are combined for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the solution space of the algorithm, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs, optimize the criteria of the differential equation model, and search for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with the proposed algorithm. Genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirm the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565
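
    The role the abstract assigns to the SVD, determining the solution space and generating a family of candidate solutions, can be illustrated on a toy underdetermined linear model: the pseudoinverse gives one particular solution and a null-space basis parametrizes all others. The sketch below uses random stand-in data and a crude linear surrogate for the differential equation model; it is not the authors' algorithm.

```python
# Toy underdetermined linear model X_dot ≈ W @ X (a crude surrogate for a
# differential-equation GRN model, with synthetic data): the SVD-based
# pseudoinverse gives a particular solution for W and the null space of X^T
# parametrizes the whole family of candidate solutions.
import numpy as np

rng = np.random.default_rng(0)
genes, samples = 6, 4                          # fewer samples than genes: underdetermined
X = rng.standard_normal((genes, samples))      # expression levels (placeholder)
X_dot = rng.standard_normal((genes, samples))  # expression "derivatives" (placeholder)

U, s, Vt = np.linalg.svd(X, full_matrices=True)
r = int(np.sum(s > 1e-10))                     # numerical rank
W0 = X_dot @ np.linalg.pinv(X)                 # minimum-norm particular solution
U_null = U[:, r:]                              # basis of null(X^T)

# Any W0 + Y @ U_null.T fits the data equally well, for arbitrary Y.
Y = rng.standard_normal((genes, genes - r))
W1 = W0 + Y @ U_null.T
print(np.allclose(W0 @ X, X_dot), np.allclose(W1 @ X, X_dot))   # True True
```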

  18. A singular-value method for reconstruction of nonradial and lossy objects.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.

  19. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.

    1999-01-01

    The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in a high-frequency range (>5 Hz) followed by layer thickness. An iterative solution technique to the weighted equation proved very effective in the high-frequency range when using the Levenberg-Marquardt and singular-value decomposition techniques. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic examples demonstrated calculation efficiency and stability of inverse procedures. We verify our method using borehole S-wave velocity measurements. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
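
    A single damped least-squares update of the kind used in such inversions can be written directly from the SVD of the Jacobian. The sketch below uses a random stand-in Jacobian, residual, and damping factor; it only checks that the SVD form matches the normal-equations form and does not implement the Rayleigh-wave forward model.

```python
# Minimal sketch of one damped least-squares (Levenberg-Marquardt style) update
# via the SVD: dm = V diag(s / (s^2 + mu)) U^T r.  J, r, and mu are synthetic
# stand-ins for the dispersion-curve Jacobian, residual, and damping factor.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_layers = 30, 6
J = rng.standard_normal((n_freq, n_layers))   # Jacobian (sensitivity) matrix, placeholder
r = rng.standard_normal(n_freq)               # residual: observed - predicted phase velocity
mu = 0.1                                      # damping factor

U, s, Vt = np.linalg.svd(J, full_matrices=False)
dm = Vt.T @ ((s / (s ** 2 + mu)) * (U.T @ r)) # damped model update

# Equivalent normal-equations form, for comparison:
dm_check = np.linalg.solve(J.T @ J + mu * np.eye(n_layers), J.T @ r)
print(np.allclose(dm, dm_check))              # True
```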

  20. Application of a sparse representation method using K-SVD to data compression of experimental ambient vibration data for SHM

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Kiremidjian, Anne S.

    2011-04-01

    This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements of multiple locations, it is necessary to transmit long threads of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse vectors of the coefficients need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.
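
    The sparse-coding half of the scheme described above (representing each data segment as a sparse combination of dictionary atoms via orthogonal matching pursuit) is sketched below with a random dictionary and a synthetic sparse signal; the K-SVD dictionary-update loop itself is omitted, and all sizes are illustrative.

```python
# Hedged sketch of sparse coding with orthogonal matching pursuit (OMP) against
# a fixed dictionary; the K-SVD training loop that adapts the dictionary to the
# vibration data is not reproduced here.
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy OMP: return a sparse coefficient vector x with D @ x ≈ y."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))     # best-matching atom
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, K = 64, 128                                # segment length, number of atoms
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms

x_true = np.zeros(K)
x_true[[5, 40, 99]] = [1.5, -2.0, 0.7]
y = D @ x_true + 0.01 * rng.standard_normal(n)

x_hat = omp(D, y, n_nonzero=3)                # typically recovers the three active atoms
print(np.nonzero(x_hat)[0], np.round(x_hat[np.nonzero(x_hat)], 2))
```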

  1. Radiative albedo from a linearly fibered half-space

    NASA Astrophysics Data System (ADS)

    Grzesik, J. A.

    2018-05-01

    A growing acceptance of fiber-reinforced composite materials imparts some relevance to exploring the effects which a predominantly linear scattering lattice may have upon interior radiative transport. Indeed, a central feature of electromagnetic wave propagation within such a lattice, if sufficiently dilute, is ray confinement to cones whose half-angles are set by that between lattice and the incident ray. When such propagation is subordinated to a viewpoint of an unpolarized intensity transport, one arrives at a somewhat simplified variant of the Boltzmann equation with spherical scattering demoted to its cylindrical counterpart. With a view to initiating a hopefully wider discussion of such phenomena, we follow through in detail the half-space albedo problem. This is done first along canonical lines that harness the Wiener-Hopf technique, and then once more in a discrete ordinates setting via flux decomposition along the eigenbasis of the underlying attenuation/scattering matrix. Good agreement is seen to prevail. We further suggest that the Case singular eigenfunction apparatus could likewise be evolved here in close analogy to its original, spherical scattering model. A cursory contact with related problems in the astrophysical literature suggests, in addition, that the basic physical fidelity of our scalar radiative transfer equation (RTE) remains open to improvement by passage to a (4×1) Stokes vector, (4×4) matricial setting.

  2. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speed up, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
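
    The sketch below illustrates the inverse-iteration building block whose accuracy the thesis analyzes, on a small random symmetric tridiagonal matrix with a random starting vector. It is a serial toy, not the distributed-memory implementation discussed above, and the eigenvalue estimate is taken from NumPy purely for the demonstration.

```python
# Serial toy of inverse iteration for one eigenvector of a symmetric
# tridiagonal matrix, started from a random vector.
import numpy as np

rng = np.random.default_rng(0)
n = 50
d = rng.standard_normal(n)                     # diagonal entries
e = rng.standard_normal(n - 1)                 # off-diagonal entries
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

target = np.linalg.eigvalsh(T)[n // 2]         # an eigenvalue estimate (ascending order)
shift = target + 1e-8                          # offset keeps T - shift*I nonsingular

v = rng.standard_normal(n)                     # random starting vector
for _ in range(3):                             # a few iterations usually suffice
    v = np.linalg.solve(T - shift * np.eye(n), v)
    v /= np.linalg.norm(v)

print(np.linalg.norm(T @ v - target * v))      # small residual: v is an eigenvector
```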

  3. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  4. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weak early degradation characteristic information during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in the loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract and subject the noisy part of the signal to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
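
    The first step described above, choosing an SVD denoising order from a cumulative contribution rate and reconstructing the signal, is sketched below on a synthetic vibration-like signal; the window length and the 30% threshold are illustrative assumptions, and the subsequent LMD and μ-SVD stages are not reproduced.

```python
# Sketch of SVD denoising with the order chosen by cumulative contribution rate
# of the singular values of a Hankel (trajectory) matrix.  Signal, window, and
# threshold are illustrative only.
import numpy as np

def hankel(x, L):
    K = len(x) - L + 1
    return np.array([x[i:i + L] for i in range(K)]).T      # L x K trajectory matrix

def diagonal_average(M):
    """Map a trajectory matrix back to a series by averaging anti-diagonals."""
    L, K = M.shape
    out, cnt = np.zeros(L + K - 1), np.zeros(L + K - 1)
    for i in range(L):
        out[i:i + K] += M[i]
        cnt[i:i + K] += 1
    return out / cnt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 600)
clean = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 75 * t)
noisy = clean + 0.8 * rng.standard_normal(len(t))

H = hankel(noisy, L=60)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
rate = np.cumsum(s) / np.sum(s)
order = int(np.searchsorted(rate, 0.30)) + 1  # denoising order from cumulative contribution
H_d = (U[:, :order] * s[:order]) @ Vt[:order]
denoised = diagonal_average(H_d)

print(order, np.std(noisy - clean), np.std(denoised - clean))   # error drops after denoising
```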

  5. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).

  6. SU-G-JeP4-03: Anomaly Detection of Respiratory Motion by Use of Singular Spectrum Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotoku, J; Kumagai, S; Nakabayashi, S

    Purpose: The implementation and realization of automatic anomaly detection of respiratory motion is a very important technique to prevent accidental damage during radiation therapy. Here, we propose an automatic anomaly detection method using singular value decomposition analysis. Methods: The anomaly detection procedure consists of four parts: 1) measurement of normal respiratory motion data of a patient; 2) calculation of a trajectory matrix representing normal time-series features; 3) real-time monitoring and calculation of a trajectory matrix of the real-time data; and 4) calculation of an anomaly score from the similarity of the two feature matrices. Patient motion was observed by a marker-less tracking system using a depth camera. Results: Two types of motion, e.g., cough and sudden stop of breathing, were successfully detected in our real-time application. Conclusion: Automatic anomaly detection of respiratory motion using singular spectrum analysis was successful for coughs and sudden stops of breathing. The clinical use of this algorithm is very promising. This work was supported by JSPS KAKENHI Grant Number 15K08703.
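
    A common singular-spectrum-style anomaly score compares the dominant left-singular subspace of a reference trajectory matrix with that of a sliding test window; the sketch below applies that idea to a synthetic breathing trace that stops at t = 40 s. The waveform, window sizes, and subspace rank are placeholders, and this is not the clinical system described in the record.

```python
# Singular-spectrum-style anomaly score on a synthetic respiratory trace:
# compare dominant left-singular subspaces of reference and test trajectory
# matrices; the score rises when breathing stops.
import numpy as np

def trajectory(x, L):
    return np.array([x[i:i + L] for i in range(len(x) - L + 1)]).T

def subspace(x, L, r):
    U, _, _ = np.linalg.svd(trajectory(x, L), full_matrices=False)
    return U[:, :r]

rng = np.random.default_rng(0)
fs, T = 20, 60                                   # Hz, seconds
t = np.arange(fs * T) / fs
breath = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.standard_normal(len(t))
breath[40 * fs:] = 0.05 * rng.standard_normal(len(t) - 40 * fs)   # breathing stops at t = 40 s

L, r, win = 40, 2, 200                           # lag, subspace rank, window length
U_ref = subspace(breath[:win], L, r)             # learned from normal breathing

scores = []
for start in range(0, len(breath) - win, 5 * fs):    # slide every 5 s
    U_test = subspace(breath[start:start + win], L, r)
    sim = np.linalg.svd(U_ref.T @ U_test, compute_uv=False)[0]   # largest principal cosine
    scores.append(max(0.0, 1.0 - sim))

print(np.round(scores, 3))                       # scores jump for windows after t = 40 s
```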

  7. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. Often in dealing with those datasets, standard practice is to use subspace clustering that is based on vectorizing multiarray data. However, vectorization of tensorial data does not exploit complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model taking into account cluster membership information. We propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.

  8. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
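
    As a concrete illustration of the quantities discussed above, the following sketch checks the textbook first-order perturbation identity for a simple eigenvalue, dλ/dA_ij = w_i v_j / (wᵀv) with right and left eigenvectors v and w, against forward finite differences. This is a standard identity used here for illustration, not necessarily the exact expressions derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# right (v) and left (w) eigenvectors of one simple eigenvalue of A
lam, V = np.linalg.eig(A)
mu, W = np.linalg.eig(A.T)                   # columns of W are left eigenvectors of A
k = 0
v = V[:, k]
w = W[:, np.argmin(np.abs(mu - lam[k]))]     # left eigenvector of the same eigenvalue

# first-order perturbation identity: d(lambda)/dA_ij = w_i v_j / (w^T v)
J_analytic = np.outer(w, v) / (w @ v)

# forward finite differences on each entry of A
h = 1e-7
J_fd = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        Ap = A.copy()
        Ap[i, j] += h
        lamp = np.linalg.eigvals(Ap)
        J_fd[i, j] = (lamp[np.argmin(np.abs(lamp - lam[k]))] - lam[k]) / h

print("max |analytic - finite difference| =", np.max(np.abs(J_analytic - J_fd)))
```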

  9. Ensemble Feature Learning of Genomic Data Using Support Vector Machine

    PubMed Central

    Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.

    2016-01-01

    The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, but mostly for classification, not gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale behind this is that building ensemble SVM models using randomly drawn bootstrap samples from the training set will produce different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate features is based upon the ranking of multiple SVM models instead of choosing one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing a nearly balanced bootstrap sample. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that, on average, 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over the random-forest-based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters within the selected data. PMID:27304923
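
    The ensemble idea described above can be sketched in a few lines of Python: run SVM-RFE on class-balanced bootstrap samples and average the resulting rankings. The linear-SVM settings, number of models, and toy data below are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_rfe_ranking(X, y):
    """Plain SVM-RFE: repeatedly fit a linear SVM and drop the feature with the
    smallest weight magnitude; returns a rank per feature (1 = kept longest)."""
    remaining = list(range(X.shape[1]))
    rank = np.zeros(X.shape[1], dtype=int)
    r = X.shape[1]
    while remaining:
        w = LinearSVC(dual=False, max_iter=5000).fit(X[:, remaining], y).coef_.ravel()
        j = int(np.argmin(np.abs(w)))
        rank[remaining.pop(j)] = r
        r -= 1
    return rank

def ensemble_svm_rfe(X, y, n_models=15, seed=0):
    """ESVM-RFE sketch: SVM-RFE on nearly balanced bootstrap samples, rankings
    aggregated by averaging (lower average rank = more important feature)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    n_per = min(int((y == c).sum()) for c in classes)
    ranks = []
    for _ in range(n_models):
        idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n_per, replace=True)
                              for c in classes])
        ranks.append(svm_rfe_ranking(X[idx], y[idx]))
    return np.mean(ranks, axis=0)

# toy data: features 0-2 are informative out of 20 (assumed example)
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 20))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)
print("top features:", np.argsort(ensemble_svm_rfe(X, y))[:5])
```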

  10. Normal forms for Hopf-Zero singularities with nonconservative nonlinear part

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh; Sanders, Jan A.

    In this paper we are concerned with the simplest normal form computation of the systems ẋ = 2x f(x, y² + z²), ẏ = z + y f(x, y² + z²), ż = -y + z f(x, y² + z²), where f is a formal function with real coefficients and without any constant term. These are the classical normal forms of a larger family of systems with Hopf-Zero singularity. Indeed, these are defined such that this family would be a Lie subalgebra for the space of all classical normal form vector fields with Hopf-Zero singularity. The simplest normal forms and simplest orbital normal forms of this family with nonzero quadratic part are computed. We also obtain the simplest parametric normal form of any non-degenerate perturbation of this family within the Lie subalgebra. The symmetry group of the simplest normal forms is also discussed. This is a part of our results in decomposing the normal forms of Hopf-Zero singular systems into systems with a first integral and nonconservative systems.

  11. On bipartite pure-state entanglement structure in terms of disentanglement

    NASA Astrophysics Data System (ADS)

    Herbut, Fedor

    2006-12-01

    Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.

  12. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
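
    To make the single-pass idea concrete, here is a small numpy sketch of a column-by-column incremental SVD update of the kind commonly used for limited-memory POD; it is a generic textbook-style update, not the specific algorithm or error estimator of the report.

```python
import numpy as np

def isvd_update(U, s, x, tol=1e-12):
    """Add one snapshot column x to a running SVD (left factor U and singular
    values s only), as in single-pass incremental POD basis construction."""
    if U is None:                                  # first snapshot
        nrm = np.linalg.norm(x)
        return x[:, None] / nrm, np.array([nrm])
    p = U.T @ x                                    # component inside current basis
    r = x - U @ p                                  # new direction
    rho = np.linalg.norm(r)
    k = s.size
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(s)
    K[:k, -1] = p
    K[-1, -1] = rho
    Uk, sk, _ = np.linalg.svd(K)                   # small (k+1) x (k+1) SVD
    if rho > tol:
        return np.column_stack([U, r / rho]) @ Uk, sk
    return U @ Uk[:k, :k], sk[:k]                  # snapshot already well represented

# sanity check against a batch SVD on random snapshots
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 12))
U, s = None, None
for j in range(X.shape[1]):
    U, s = isvd_update(U, s, X[:, j])
print(np.allclose(s, np.linalg.svd(X, compute_uv=False)))
```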

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loef, P.A.; Smed, T.; Andersson, G.

    The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
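
    The record does not give the algorithm itself, but the general approach (reusing one sparse LU factorization of the Jacobian and running inverse iteration on JᵀJ) can be sketched as follows; the toy matrix and iteration count are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def min_singular_triplet(J, iters=200, seed=0):
    """Smallest singular value and singular vectors of a sparse square matrix
    via inverse iteration on J^T J, reusing one sparse LU factorization of J."""
    lu = splu(sp.csc_matrix(J))
    x = np.random.default_rng(seed).standard_normal(J.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        w = lu.solve(x, trans='T')     # solve J^T w = x
        y = lu.solve(w)                # solve J y = w, so y = (J^T J)^{-1} x
        x = y / np.linalg.norm(y)
    sigma = np.linalg.norm(J @ x)      # converged x approximates the right singular vector
    return sigma, (J @ x) / sigma, x   # (sigma_min, left vector u, right vector v)

# toy sparse matrix standing in for a power flow Jacobian (assumed example)
J = sp.random(400, 400, density=0.02, random_state=1) + sp.eye(400)
sigma, u, v = min_singular_triplet(J)
print(sigma, np.linalg.svd(J.toarray(), compute_uv=False)[-1])   # should agree closely
```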

  14. Partial regularity of weak solutions to a PDE system with cubic nonlinearity

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Xu, Xiangsheng

    2018-04-01

    In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.

  15. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives]

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
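
    A toy comparison of the two snapshot strategies can be set up in a few lines; the snapshot family below and the finite-difference derivative snapshots are assumed stand-ins, not the Burgers problem or the error bounds of the paper.

```python
import numpy as np

def pod_basis(S, k):
    """First k left singular vectors of the snapshot matrix S."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :k]

# toy snapshot family u(x, t) (assumed example)
x = np.linspace(0.0, 1.0, 200)
u = lambda t: np.exp(-t) * np.sin(np.pi * x * (1.0 + t))
t_snap = np.linspace(0.0, 1.0, 8)
dt = 1e-4

X  = np.column_stack([u(t) for t in t_snap])                        # method M1
Xd = np.column_stack([(u(t + dt) - u(t)) / dt for t in t_snap])     # derivative snapshots
X2 = np.column_stack([X, Xd])                                       # method M2

k = 4
U1, U2 = pod_basis(X, k), pod_basis(X2, k)
t_test = np.linspace(0.0, 1.0, 200)
proj_err = lambda U: np.mean([np.linalg.norm(u(t) - U @ (U.T @ u(t))) for t in t_test])
print("M1 projection error:", proj_err(U1))
print("M2 projection error:", proj_err(U2))
```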

  16. Absorption spectrum analysis based on singular value decomposition for photoisomerization and photodegradation in organic dyes

    NASA Astrophysics Data System (ADS)

    Kawabe, Yutaka; Yoshikawa, Toshio; Chida, Toshifumi; Tada, Kazuhiro; Kawamoto, Masuki; Fujihara, Takashi; Sassa, Takafumi; Tsutsumi, Naoto

    2015-10-01

    In order to analyze the spectra of inseparable chemical mixtures, many mathematical methods have been developed to decompose them into the components relevant to individual species from series of spectral data obtained under different conditions. We formulated a method based on the singular value decomposition (SVD) of linear algebra and applied it to two example systems of organic dyes, successfully reproducing absorption spectra assignable to cis/trans azocarbazole dyes from spectral data recorded after photoisomerization, and to the monomer/dimer of cyanine dyes from spectra recorded during a photodegradation process. For the photoisomerization example, polymer films containing the azocarbazole dyes were prepared, which showed high-performance updatable holographic stereograms of real images. We continuously monitored the absorption spectrum after optical excitation and found that the spectral shape varied slightly after the excitation and during the recovery process, which suggested a contribution from a generated photoisomer. The method successfully identified two spectral components due to the trans and cis forms of the azocarbazoles. The temporal evolution of their weight factors suggested an important role of long-lived cis states in azocarbazole derivatives. We also applied the method to the photodegradation of cyanine dyes doped in DNA-lipid complexes, which have shown efficient and durable optical amplification and/or lasing under optical pumping. The same SVD method successfully extracted two spectral components, presumably due to the monomer and an H-type dimer. During the photodegradation process, the absorption magnitude gradually decreased due to decomposition of the molecules, and the decay rates depended strongly on the spectral component, suggesting that the long persistence of the dyes in the DNA complex is related to a weak tendency toward aggregate formation.
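
    The SVD step itself is straightforward to illustrate on synthetic data: a spectra matrix built from two overlapping components with time-varying weights has two dominant singular values, and a rank-2 truncation recovers the mixture. The band shapes and kinetics below are invented purely for illustration.

```python
import numpy as np

# synthetic absorption data: two overlapping bands whose weights evolve in time,
# standing in for (e.g.) trans/cis spectra during photoisomerization
wl = np.linspace(350.0, 650.0, 300)                   # wavelength axis (nm)
band = lambda center, width: np.exp(-0.5 * ((wl - center) / width) ** 2)
S1, S2 = band(450.0, 30.0), band(520.0, 40.0)         # pure-component spectra
t = np.linspace(0.0, 1.0, 40)
w1, w2 = np.exp(-3.0 * t), 1.0 - np.exp(-3.0 * t)     # interconversion kinetics
D = np.outer(S1, w1) + np.outer(S2, w2)               # wavelength x time data matrix
D += 0.002 * np.random.default_rng(0).standard_normal(D.shape)

# the number of significant singular values estimates the number of species
U, s, Vt = np.linalg.svd(D, full_matrices=False)
print("leading singular values:", np.round(s[:4], 3))

# rank-2 truncation: the two abstract factors span the same subspace as S1 and S2,
# and the corresponding rows of Vt track the temporal weight factors
D2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
print("rank-2 relative error:", np.linalg.norm(D - D2) / np.linalg.norm(D))
```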

  17. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  18. The generic unfolding of a codimension-two connection to a two-fold singularity of planar Filippov systems

    NASA Astrophysics Data System (ADS)

    Novaes, Douglas D.; Teixeira, Marco A.; Zeli, Iris O.

    2018-05-01

    Generic bifurcation theory was classically well developed for smooth differential systems, establishing results for k-parameter families of planar vector fields. In the present study we focus on a qualitative analysis of 2-parameter families, , of planar Filippov systems assuming that Z 0,0 presents a codimension-two minimal set. Such object, named elementary simple two-fold cycle, is characterized by a regular trajectory connecting a visible two-fold singularity to itself, for which the second derivative of the first return map is nonvanishing. We analyzed the codimension-two scenario through the exhibition of its bifurcation diagram.

  19. Moving singularity creep crack growth analysis with the (ΔT)c and C* integrals [path-independent vector and energy rate line integrals]

    NASA Technical Reports Server (NTRS)

    Stonesifer, R. B.; Atluri, S. N.

    1982-01-01

    The physical meaning of (ΔT)c and its applicability to creep crack growth are reviewed. Numerical evaluation of (ΔT)c and C* is discussed, with results being given for compact specimen and strip geometries. A moving crack-tip singularity creep crack growth simulation procedure is described and demonstrated. The results of several crack growth simulation analyses indicate that creep crack growth in 304 stainless steel occurs under essentially steady-state conditions. Based on this result, a simple methodology for predicting creep crack growth behavior is summarized.

  20. ATTITUDE FILTERING ON SO(3)

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2005-01-01

    A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.

  1. Compacted dimensions and singular plasmonic surfaces.

    PubMed

    Pendry, J B; Huidobro, Paloma Arroyo; Luo, Yu; Galiffi, Emanuele

    2017-11-17

    In advanced field theories, there can be more than four dimensions to space, the excess dimensions described as compacted and unobservable on everyday length scales. We report a simple model, unconnected to field theory, for a compacted dimension realized in a metallic metasurface periodically structured in the form of a grating comprising a series of singularities. An extra dimension of the grating is hidden, and the surface plasmon excitations, though localized at the surface, are characterized by three wave vectors rather than the two of a typical two-dimensional metal grating. We propose an experimental realization in a doped graphene layer. Copyright © 2017, American Association for the Advancement of Science.

  2. Numerical linear algebra in data mining

    NASA Astrophysics Data System (ADS)

    Eldén, Lars

    Ideas and algorithms from numerical linear algebra are important in several areas of data mining. We give an overview of linear algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.

  3. A Direct and Non-Singular UKF Approach Using Euler Angle Kinematics for Integrated Navigation Systems

    PubMed Central

    Ran, Changyan; Cheng, Xianghong

    2016-01-01

    This paper presents a direct and non-singular approach based on an unscented Kalman filter (UKF) for the integration of strapdown inertial navigation systems (SINSs) with the aid of velocity. The state vector includes velocity and Euler angles, and the system model contains the Euler angle kinematics equations. The measured velocity in the body frame is used as the filter measurement. The quaternion nonlinear equality constraint is eliminated, and the cross-noise problem is overcome. The filter model is simple and easy to apply without linearization. Data fusion is performed by a UKF, which directly estimates and outputs the navigation information. There is no need to process navigation computation and error correction separately because the navigation computation is completed synchronously during the filter time updating. In addition, the singularities are avoided with the help of the dual-Euler method. The performance of the proposed approach is verified by road test data from a land vehicle equipped with an odometer-aided SINS, and a singularity turntable test is conducted using three-axis turntable test data. The results show that the proposed approach can achieve higher navigation accuracy than the commonly-used indirect approach, and the singularities can be efficiently removed as a result of the dual-Euler method. PMID:27598169

  4. Simplicity and Typical Rank Results for Three-Way Arrays

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.

    2011-01-01

    Matrices can be diagonalized by singular vectors or, when they are symmetric, by eigenvectors. Pairs of square matrices often admit simultaneous diagonalization, and always admit blockwise simultaneous diagonalization. Generalizing these possibilities to more than two (non-square) matrices leads to methods of simplifying three-way arrays by…

  5. Conical refraction of elastic waves in absorbing crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alshits, V. I., E-mail: alshits@ns.crys.ras.ru; Lyubimov, V. N.

    2011-10-15

    The absorption-induced acoustic-axis splitting in a viscoelastic crystal with an arbitrary anisotropy is considered. It is shown that after 'switching on' absorption, the linear vector polarization field in the vicinity of the initial degeneracy point having an orientation singularity with the Poincare index n = ±1/2, transforms to a planar distribution of ellipses with two singularities n = ±1/4 corresponding to new axes. The local geometry of the slowness surface of elastic waves is studied in the vicinity of new degeneracy points and a self-intersection line connecting them. The absorption-induced transformation of the classical picture of conical refraction is studied. The ellipticity of waves at the edge of the self-intersection wedge in a narrow interval of propagation directions drastically changes from circular at the wedge ends to linear in the middle of the wedge. For the wave normal directed to an arbitrary point of this wedge, during movement of the displacement vector over the corresponding polarization ellipse, the wave ray velocity s runs over the same cone describing refraction in a crystal without absorption. In this case, the end of the vector moves along a universal ellipse whose plane is orthogonal to the acoustic axis for zero absorption. The areal velocity of this movement differs from the angular velocity of the displacement vector on the polarization ellipse only by a constant factor, being delayed by π/2 in phase. When the wave normal is localized at the edge of the wedge in its central region, the movement of vector s along the universal ellipse becomes drastically nonuniform and the refraction transforms from conical to wedge-like.

  6. Definition of Contravariant Velocity Components

    NASA Technical Reports Server (NTRS)

    Hung, Ching-moa; Kwak, Dochan (Technical Monitor)

    2002-01-01

    In this paper we have reviewed the basics of tensor analysis in an attempt to clarify some misconceptions regarding contravariant and covariant vector components as used in fluid dynamics. We have indicated that contravariant components are components of a given vector expressed as a unique combination of the covariant base vector system and, vice versa, that the covariant components are components of a vector expressed with the contravariant base vector system. Mathematically, expressing a vector with a combination of base vectors is a decomposition process for a specific base vector system. Hence, the contravariant velocity components are decomposed components of the velocity vector along the directions of coordinate lines, with respect to the covariant base vector system. However, the contravariant (and covariant) components are not physical quantities. Their magnitudes and dimensions are controlled by their corresponding covariant (and contravariant) base vectors.
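
    For reference, the relations summarized above can be written compactly as follows (standard tensor-analysis identities, with notation chosen here; the paper's own notation may differ):

```latex
% Covariant basis {g_i}, dual (contravariant) basis {g^i}:
\mathbf{g}^{i}\cdot\mathbf{g}_{j} = \delta^{i}_{\ j}, \qquad
\mathbf{v} = v^{i}\,\mathbf{g}_{i} = v_{i}\,\mathbf{g}^{i}, \qquad
v^{i} = \mathbf{v}\cdot\mathbf{g}^{i}, \qquad
v_{i} = \mathbf{v}\cdot\mathbf{g}_{i}.
```

    These identities underlie the remark above that the magnitudes and dimensions of v^i and v_i are tied to the corresponding base vectors rather than being physical quantities themselves.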

  7. Analytic wave solution with helicon and Trivelpiece-Gould modes in an annular plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Johan; Pavarin, Daniele; Walker, Mitchell

    2009-11-26

    Helicon sources in an annular configuration have applications for plasma thrusters. The theory of Klozenberg et al. [J. P. Klozenberg, B. McNamara and P. C. Thonemann, J. Fluid Mech. 21 (1965) 545-563] for the propagation and absorption of helicon and Trivelpiece-Gould modes in a cylindrical plasma has been generalized for annular plasmas. Analytic solutions are found also in the annular case, but in the presence of both helicon and Trivelpiece-Gould modes, a heterogeneous linear system of equations must be solved to match the plasma and inner and outer vacuum solutions. The linear system can be ill-conditioned or even exactly singular, leading to a dispersion relation with a discrete set of discontinuities. The coefficients for the analytic solution are calculated by solving the linear system with singular-value decomposition.
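
    The final step mentioned above, solving an ill-conditioned or exactly singular linear system with SVD, amounts to a truncated pseudoinverse. A generic sketch (not the plasma/vacuum matching matrix itself) is:

```python
import numpy as np

def svd_solve(A, b, rcond=1e-10):
    """Minimum-norm least-squares solution of A x = b via a truncated SVD,
    usable when the system is ill-conditioned or exactly singular."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)   # drop tiny singular values
    return Vt.T @ (s_inv * (U.T @ b))

# nearly singular toy system (assumed example)
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-13]])
b = np.array([2.0, 2.0])
print(svd_solve(A, b))        # well-behaved minimum-norm solution, approximately [1, 1]
```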

  8. Gimbal-Angle Vectors of the Nonredundant CMG Cluster

    NASA Astrophysics Data System (ADS)

    Lee, Donghun; Bang, Hyochoong

    2018-05-01

    This paper deals with the method of using the preferred gimbal angles of a control moment gyro (CMG) cluster for controlling spacecraft attitude. To apply the method to the nonredundant CMG cluster, analytical gimbal-angle solutions for the zero angular momentum state are derived, and the gimbal-angle vectors for the nonzero angular momentum states are studied by a numerical method. It will be shown that the number of the gimbal-angle vectors is determined from the given skew angle and the angular momentum state of the CMG cluster. Through numerical examples, it is shown that the method using the preferred gimbal angles is an efficient approach to avoid internal singularities for the nonredundant CMG cluster.

  9. A projection method for low speed flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colella, P.; Pao, K.

    The authors propose a decomposition applicable to low speed, inviscid flows of all Mach numbers less than 1. By using the Hodge decomposition, they may write the velocity field as the sum of a divergence-free vector field and a gradient of a scalar function. Evolution equations for these parts are presented. A numerical procedure based on this decomposition is designed, using projection methods for solving the incompressible variables and a backward-Euler method for solving the potential variables. Numerical experiments are included to illustrate various aspects of the algorithm.
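
    Written out, the decomposition referred to above takes the standard form below (notation chosen here for illustration):

```latex
% Hodge decomposition of the velocity into a divergence-free part and a gradient part:
\mathbf{u} = \mathbf{u}_{d} + \nabla\phi, \qquad
\nabla\cdot\mathbf{u}_{d} = 0, \qquad
\Delta\phi = \nabla\cdot\mathbf{u}.
```

    Taking the divergence of the first relation yields the Poisson problem for φ, after which the divergence-free part is recovered as u_d = u − ∇φ; the two parts can then be advanced with the projection and backward-Euler steps described above.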

  10. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle Ha with a singularity free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite dimensional Heisenberg group HX∞. A representation of the C*-group algebra of HX∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside Ha.

  11. Two-stage decompositions for the analysis of functional connectivity for fMRI with application to Alzheimer’s disease risk

    PubMed Central

    Caffo, Brian S.; Crainiceanu, Ciprian M.; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H.; Bassett, Susan Spear; Pekar, James J.

    2010-01-01

    Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer’s disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer’s disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. PMID:20227508

  12. Two-stage decompositions for the analysis of functional connectivity for fMRI with application to Alzheimer's disease risk.

    PubMed

    Caffo, Brian S; Crainiceanu, Ciprian M; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H; Bassett, Susan Spear; Pekar, James J

    2010-07-01

    Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer's disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer's disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. Copyright (c) 2010 Elsevier Inc. All rights reserved.
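
    The two near-identical records above describe the method only at a high level; the following is a schematic numpy sketch of a generic two-stage SVD (a per-subject SVD followed by an SVD of the stacked subject-level factors). Details such as centering, scaling, and exactly which factors are stacked differ in the published method.

```python
import numpy as np

def two_stage_svd(subject_data, k_subj=10, k_group=5):
    """Generic two-stage SVD: compress each subject's voxels x time matrix with
    a first SVD, then run a second SVD on the stacked subject-level spatial
    factors to obtain population-level components (schematic only)."""
    blocks = []
    for Y in subject_data:                          # Y: voxels x time
        U, s, _ = np.linalg.svd(Y, full_matrices=False)
        blocks.append(U[:, :k_subj] * s[:k_subj])   # scaled subject-level spatial maps
    G = np.hstack(blocks)                           # voxels x (n_subjects * k_subj)
    Ug, sg, _ = np.linalg.svd(G, full_matrices=False)
    return Ug[:, :k_group], sg[:k_group]            # population-level spatial maps

rng = np.random.default_rng(0)
subjects = [rng.standard_normal((500, 120)) for _ in range(6)]   # toy "fMRI" matrices
maps, weights = two_stage_svd(subjects)
print(maps.shape, weights.shape)
```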

  13. Influence of vorticity distribution on singularities in linearized supersonic flow

    NASA Astrophysics Data System (ADS)

    Gopal, Vijay; Maddalena, Luca

    2018-05-01

    The linearized steady three-dimensional supersonic flow can be analyzed using a vector potential approach which transforms the governing equation to a standard form of two-dimensional wave equation. Of particular interest are the canonical horseshoe line-vortex distribution and the resulting induced velocity field in supersonic flow. In this case, the singularities are present at the vortex line itself and also at the surface of the cone of influence originating from the vertices of the horseshoe structure. This is a characteristic of the hyperbolic nature of the flow which renders the study of supersonic vortex dynamics a challenging task. It is conjectured in this work that the presence of the singularity at the cone of influence is associated with the step-function nature of the vorticity distribution specified in the canonical case. At the phenomenological level, if one considers the three-dimensional steady supersonic flow, then a sudden appearance of a line-vortex will generate a ripple of singularities in the induced velocity field which convect downstream and laterally spread, at the most, to the surface of the cone of influence. Based on these findings, this work includes an exploration of potential candidates for vorticity distributions that eliminate the singularities at the cone of influence. The analysis of the resulting induced velocity field is then compared with the canonical case, and it is observed that the singularities were successfully eliminated. The manuscript includes an application of the proposed method to study the induced velocity field in a confined supersonic flow.

  14. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the ℓ1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Moreover, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve steeper dose gradients, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
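
    The paper's model is a linear program with hard and soft dose constraints and an ℓ1 bound on the beam weights; the sketch below replaces that LP with a plain least-squares dose-matching problem purely to illustrate the SVD compress-then-back-project idea. All sizes and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beam, k = 2000, 400, 60           # voxels, candidate beams, truncation rank

# toy influence matrix (dose per unit beam weight), built to be close to rank k
D = rng.standard_normal((n_vox, k)) @ rng.standard_normal((k, n_beam))
D += 0.01 * rng.standard_normal((n_vox, n_beam))
d_presc = rng.random(n_vox)                # toy prescribed dose

# full-size least-squares beam-weight problem
w_full, *_ = np.linalg.lstsq(D, d_presc, rcond=None)

# SVD compression: optimize against the k x n_beam compressed operator instead,
# then recover the full dose by back-projection D @ w
U, s, Vt = np.linalg.svd(D, full_matrices=False)
D_small = s[:k, None] * Vt[:k, :]          # Sigma_k V_k^T
rhs_small = U[:, :k].T @ d_presc           # prescription projected onto U_k
w_red, *_ = np.linalg.lstsq(D_small, rhs_small, rcond=None)

ratio = np.linalg.norm(D @ w_red - d_presc) / np.linalg.norm(D @ w_full - d_presc)
print("residual ratio (compressed vs full):", ratio)
```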

  15. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the ℓ1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Moreover, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve steeper dose gradients, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  16. Optical systolic solutions of linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Neuman, C. P.; Casasent, D.

    1984-01-01

    The philosophy and data encoding possible in systolic array optical processor (SAOP) were reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as: matrix-decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.

  17. The Effects of City Streets on an Urban Disease Vector

    PubMed Central

    Barbu, Corentin M.; Hong, Andrew; Manne, Jennifer M.; Small, Dylan S.; Quintanilla Calderón, Javier E.; Sethuraman, Karthik; Quispe-Machaca, Víctor; Ancca-Juárez, Jenny; Cornejo del Carpio, Juan G.; Málaga Chavez, Fernando S.; Náquira, César; Levy, Michael Z.

    2013-01-01

    With increasing urbanization, vector-borne diseases are quickly developing in cities, and urban control strategies are needed. If streets are shown to be barriers to disease vectors, city blocks could be used as a convenient and relevant spatial unit of study and control. Unfortunately, existing spatial analysis tools do not allow for assessment of the impact of an urban grid on the presence of disease agents. Here, we first propose a method to test for the significance of the impact of streets on vector infestation based on a decomposition of Moran's spatial autocorrelation index; and second, develop a Gaussian Field Latent Class model to finely describe the effect of streets while controlling for cofactors and imperfect detection of vectors. We apply these methods to cross-sectional data of infestation by the Chagas disease vector Triatoma infestans in the city of Arequipa, Peru. Our Moran's decomposition test reveals that the distribution of T. infestans in this urban environment is significantly constrained by streets (p<0.05). With the Gaussian Field Latent Class model we confirm that streets provide a barrier against infestation and further show that greater than 90% of the spatial component of the probability of vector presence is explained by the correlation among houses within city blocks. The city block is thus likely to be an appropriate spatial unit to describe and control T. infestans in an urban context. Characteristics of the urban grid can influence the spatial dynamics of vector borne disease and should be considered when designing public health policies. PMID:23341756

  18. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects, each 101 times, using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
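
    The matching step described above (maximum spatial correlation plus Hungarian sorting) can be reproduced generically with scipy's linear assignment solver; the toy component maps below are assumptions, not fMRI components.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(ref_maps, new_maps):
    """Pair components of two decompositions by maximum absolute spatial
    correlation, using the Hungarian (linear assignment) algorithm."""
    m = len(ref_maps)
    R = np.corrcoef(ref_maps, new_maps)[:m, m:]          # cross-correlation block
    rows, cols = linear_sum_assignment(-np.abs(R))       # maximize |correlation|
    return cols, np.abs(R[rows, cols])

# toy "component maps": a permuted, sign-flipped, noisy copy of a reference set
rng = np.random.default_rng(0)
ref = rng.standard_normal((10, 5000))
perm = rng.permutation(10)
new = ref[perm] * rng.choice([-1.0, 1.0], size=(10, 1))
new += 0.1 * rng.standard_normal(new.shape)

order, corr = match_components(ref, new)
print(np.array_equal(perm[order], np.arange(10)), corr.min())
```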

  19. Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence

    NASA Astrophysics Data System (ADS)

    Hatch, David R.

    This thesis presents an analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and also many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data set: the 5-D (plus time) gyrokinetic distribution function.

  20. The principles of quantification applied to in vivo proton MR spectroscopy.

    PubMed

    Helms, Gunther

    2008-08-01

    Following the identification of metabolite signals in the in vivo MR spectrum, quantification is the procedure to estimate numerical values of their concentrations. The two essential steps are discussed in detail: analysis by fitting a model of prior knowledge, that is, the decomposition of the spectrum into the signals of singular metabolites; then, normalization of these signals to yield concentration estimates. Special attention is given to using the in vivo water signal as internal reference.

  1. Spectral Estimation: An Overdetermined Rational Model Equation Approach.

    DTIC Science & Technology

    1982-09-15

    [Scanned report cover; OCR largely illegible.] Recoverable information: Spectral Estimation: An Overdetermined Rational Model Equation Approach; Arizona State Univ., Tempe, Dept. of Electrical and Computer Engineering; Final Report. Keywords: rational spectral estimation, ARMA model, AR model, MA model, spectrum, singular value decomposition, adaptive implementation.

  2. Through Wall Radar Classification of Human Micro-Doppler Using Singular Value Decomposition Analysis

    PubMed Central

    Ritchie, Matthew; Ash, Matthew; Chen, Qingchao; Chetty, Kevin

    2016-01-01

    The ability to detect the presence as well as classify the activities of individuals behind visually obscuring structures is of significant benefit to police, security and emergency services in many situations. This paper presents the analysis from a series of experimental results generated using a through-the-wall (TTW) Frequency Modulated Continuous Wave (FMCW) C-Band radar system named Soprano. The objective of this analysis was to classify whether an individual was carrying an item in both hands or not using micro-Doppler information from a FMCW sensor. The radar was deployed at a standoff distance, of approximately 0.5 m, outside a residential building and used to detect multiple people walking within a room. Through the application of digital filtering, it was shown that significant suppression of the primary wall reflection is possible, significantly enhancing the target signal to clutter ratio. Singular Value Decomposition (SVD) signal processing techniques were then applied to the micro-Doppler signatures from different individuals. Features from the SVD information have been used to classify whether the person was carrying an item or walking free handed. Excellent performance of the classifier was achieved in this challenging scenario with accuracies up to 94%, suggesting that future through wall radar sensors may have the ability to reliably recognize many different types of activities in TTW scenarios using these techniques. PMID:27589760
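
    A schematic version of the feature-extraction step, singular values of a micro-Doppler spectrogram used as a compact feature vector, is sketched below on synthetic signals. It is not the Soprano data, nor necessarily the paper's exact features or classifier.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

def svd_features(x, fs=1000, n_sv=8):
    """Leading (normalized) singular values of the micro-Doppler spectrogram."""
    _, _, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
    s = np.linalg.svd(S, compute_uv=False)
    return s[:n_sv] / s.sum()

rng = np.random.default_rng(0)

def walk(mod_depth, fs=1000, dur=4.0):
    """Toy radar return: a carrier phase-modulated by a limb-like micro-Doppler."""
    t = np.arange(0, dur, 1.0 / fs)
    md = mod_depth * np.sin(2 * np.pi * 1.8 * t)              # gait modulation
    phase = 2 * np.pi * (60 * t + 10 * np.cumsum(md) / fs)
    return np.cos(phase) + 0.1 * rng.standard_normal(t.size)

# class 0: free-hand walking (stronger limb motion); class 1: carrying an item
X = np.array([svd_features(walk(1.0)) for _ in range(30)] +
             [svd_features(walk(0.3)) for _ in range(30)])
y = np.r_[np.zeros(30), np.ones(30)]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```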

  3. Through Wall Radar Classification of Human Micro-Doppler Using Singular Value Decomposition Analysis.

    PubMed

    Ritchie, Matthew; Ash, Matthew; Chen, Qingchao; Chetty, Kevin

    2016-08-31

    The ability to detect the presence as well as classify the activities of individuals behind visually obscuring structures is of significant benefit to police, security and emergency services in many situations. This paper presents the analysis from a series of experimental results generated using a through-the-wall (TTW) Frequency Modulated Continuous Wave (FMCW) C-Band radar system named Soprano. The objective of this analysis was to classify whether an individual was carrying an item in both hands or not using micro-Doppler information from a FMCW sensor. The radar was deployed at a standoff distance, of approximately 0.5 m, outside a residential building and used to detect multiple people walking within a room. Through the application of digital filtering, it was shown that significant suppression of the primary wall reflection is possible, significantly enhancing the target signal to clutter ratio. Singular Value Decomposition (SVD) signal processing techniques were then applied to the micro-Doppler signatures from different individuals. Features from the SVD information have been used to classify whether the person was carrying an item or walking free handed. Excellent performance of the classifier was achieved in this challenging scenario with accuracies up to 94%, suggesting that future through wall radar sensors may have the ability to reliably recognize many different types of activities in TTW scenarios using these techniques.

  4. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. Recently, the authors have introduced the transformation u(x') = sinh⁻¹(x'/√(y'² + z²)) for integrating functions of the form I = ∫_D Λ(r') e^(-jkR)/(4πR) dD, where Λ(r') is a vector or scalar basis function and R = √(x'² + y'² + z²) is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with 1/R² type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
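
    The reason the sinh⁻¹ change of variables works is easy to demonstrate numerically: for the static 1/(4πR) kernel (phase factor dropped), the Jacobian d·cosh(u) exactly cancels the near-singular 1/R behaviour, so a low-order Gauss rule in u stays accurate even when the observation point is very close to the source line. The parameters below are illustrative only.

```python
import numpy as np

def kernel(xp, d):
    """Static 1/(4*pi*R) kernel with R = sqrt(x'^2 + d^2), d = sqrt(y'^2 + z^2)."""
    return 1.0 / (4.0 * np.pi * np.sqrt(xp**2 + d**2))

def gauss_naive(n, L, d):
    """n-point Gauss-Legendre quadrature directly in x' over [-L, L]."""
    t, w = np.polynomial.legendre.leggauss(n)
    xp = L * t                         # map [-1, 1] -> [-L, L]
    return np.sum(w * L * kernel(xp, d))

def gauss_sinh(n, L, d):
    """Same rule after the change of variables u = asinh(x'/d); the Jacobian
    dx' = d*cosh(u) du cancels the 1/R peak, making the integrand smooth."""
    U = np.arcsinh(L / d)
    t, w = np.polynomial.legendre.leggauss(n)
    u = U * t                          # map [-1, 1] -> [-U, U]
    xp = d * np.sinh(u)
    jac = d * np.cosh(u)
    return np.sum(w * U * jac * kernel(xp, d))

L, d = 1.0, 1e-4                       # observation point very close to the source line
exact = np.arcsinh(L / d) / (2.0 * np.pi)
for n in (4, 8, 16):
    print(n, abs(gauss_naive(n, L, d) - exact), abs(gauss_sinh(n, L, d) - exact))
```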

  5. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  6. Singularity analysis based on wavelet transform of fractal measures for identifying geochemical anomaly in mineral exploration

    NASA Astrophysics Data System (ADS)

    Chen, Guoxiong; Cheng, Qiuming

    2016-02-01

    Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties endowed in geofields such as geochemical and geophysical anomalies, and they are commonly investigated by using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method was proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrated that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density-value in density-area fractal modeling of singular geochemical distributions. Accordingly, we presented a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving-averaging method, the novel LSA using WMD gave improved identification of weak geochemical anomalies associated with mineralization in the covered area.

  7. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.

  8. Singular spectrum decomposition of Bouligand-Minkowski fractal descriptors: an application to the classification of texture Images

    NASA Astrophysics Data System (ADS)

    Florindo, João Batista

    2018-04-01

    This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach to model and particularly identify complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point were proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analysis employed in time series, and SSA is one of the most powerful techniques for this purpose. The proposed method was employed for the classification of benchmark texture images and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.

  9. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals.
    Program summary:
    Program title: SecDec 2.0
    Catalogue identifier: AEIR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 156829
    No. of bytes in distributed program, including test data, etc.: 2137907
    Distribution format: tar.gz
    Programming language: Wolfram Mathematica, Perl, Fortran/C++.
    Computer: From a single PC to a cluster, depending on the problem.
    Operating system: Unix, Linux.
    RAM: Depending on the complexity of the problem.
    Classification: 4.4, 5, 11.1.
    Catalogue identifier of previous version: AEIR_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
    Does the new version supersede the previous version?: Yes
    Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
    Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way.
    Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization.
    Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters.
    Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0.
    Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.

  10. A parallel algorithm for nonlinear convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1990-01-01

    A parallel algorithm for the efficient solution of nonlinear time-dependent convection-diffusion equations with small parameter on the diffusion term is presented. The method is based on a physically motivated domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. The method is suitable for the solution of problems arising in the simulation of fluid dynamics. Experimental results for a nonlinear equation in two-dimensions are presented.

  11. SVD analysis of Aura TES spectral residuals

    NASA Technical Reports Server (NTRS)

    Beer, Reinhard; Kulawik, Susan S.; Rodgers, Clive D.; Bowman, Kevin W.

    2005-01-01

    Singular Value Decomposition (SVD) analysis is both a powerful diagnostic tool and an effective method of noise filtering. We present the results of an SVD analysis of an ensemble of spectral residuals acquired in September 2004 from a 16-orbit Aura Tropospheric Emission Spectrometer (TES) Global Survey and compare them to alternative methods such as zonal averages. In particular, the technique highlights issues such as the orbital variation of instrument response and incompletely modeled effects of surface emissivity and atmospheric composition.
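
    A minimal sketch of this kind of residual diagnostic, with a synthetic ensemble standing in for TES spectra: the right singular vectors act as recurring spectral patterns, the singular values rank their importance, and truncation provides the noise filtering mentioned above. The array shapes and names are illustrative only.

    ```python
    import numpy as np

    # rows: individual spectral residuals; columns: spectral channels (synthetic)
    rng = np.random.default_rng(1)
    n_obs, n_chan = 500, 200
    channels = np.linspace(0.0, 1.0, n_chan)
    systematic = np.outer(np.sin(2 * np.pi * rng.random(n_obs)),
                          np.sin(6 * np.pi * channels))        # orbit-dependent pattern
    residuals = systematic + 0.3 * rng.standard_normal((n_obs, n_chan))

    U, s, Vt = np.linalg.svd(residuals, full_matrices=False)

    # diagnostic: fraction of residual variance carried by each singular vector
    explained = s**2 / np.sum(s**2)
    print("leading modes explain:", explained[:3])

    # noise filtering: keep only the k dominant modes
    k = 2
    filtered = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    ```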

  12. Tomographic diffractive microscopy with agile illuminations for imaging targets in a noisy background.

    PubMed

    Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K

    2015-02-15

    Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and conventional tomographic reconstruction procedure. The targets can be further characterized with a selective quantitative inversion.

  13. Fuzzy scalar and vector median filters based on fuzzy distances.

    PubMed

    Chatzis, V; Pitas, I

    1999-01-01

    In this paper, the fuzzy scalar median (FSM) is proposed, defined by using an ordering of fuzzy numbers based on fuzzy minimum and maximum operations defined through the extension principle. Alternatively, the FSM is defined from the minimization of a fuzzy distance measure, and the equivalence of the two definitions is proven. Then, the fuzzy vector median (FVM) is proposed as an extension of the vector median, based on a novel distance definition for fuzzy vectors that satisfies the property of angle decomposition. By properly defining the fuzziness of a value, the basic properties of the classical scalar and vector median (VM) filters can be combined with other desirable characteristics.

  14. Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.

    PubMed

    Gong, Ping; Kolios, Michael C; Xu, Yuan

    2016-09-01

    Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: the delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. Pseudoinverse (PI) is usually used instead for seeking a more stable inversion process. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: complete PI (CPI), where all the singular values are kept, and truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in the overall enveloped beamformed image qualities between the CPI and TPI is negligible. Thus, it demonstrates that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula that is based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
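
    The CPI/TPI distinction can be sketched with a generic SVD-based pseudoinverse: the complete version keeps every singular value, the truncated version drops the smallest one before inverting. The coding matrix and data below are stand-ins, not the actual DE-STA coding matrix, and the function name is an assumption.

    ```python
    import numpy as np

    def pseudoinverse(H, truncate_last=False):
        """SVD-based pseudoinverse; truncate_last=True drops the smallest
        singular value (the truncated-PI idea)."""
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        if truncate_last:
            U, s, Vt = U[:, :-1], s[:-1], Vt[:-1, :]
        return Vt.T @ np.diag(1.0 / s) @ U.T

    rng = np.random.default_rng(2)
    n = 64
    H = np.eye(n) + 0.1 * np.tril(np.ones((n, n)), -1)   # stand-in coding matrix
    x_true = rng.standard_normal((n, 256))               # "equivalent STA" channel data
    y = H @ x_true + 0.05 * rng.standard_normal((n, 256))

    x_cpi = pseudoinverse(H) @ y           # complete pseudoinverse
    x_tpi = pseudoinverse(H, True) @ y     # truncated pseudoinverse
    print(np.linalg.norm(x_cpi - x_true), np.linalg.norm(x_tpi - x_true))
    ```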

  15. Anisotropic responses and initial decomposition of condensed-phase β-HMX under shock loadings via molecular dynamics simulations in conjunction with multiscale shock technique.

    PubMed

    Ge, Ni-Na; Wei, Yong-Kai; Song, Zhen-Fei; Chen, Xiang-Rong; Ji, Guang-Fu; Zhao, Feng; Wei, Dong-Qing

    2014-07-24

    Molecular dynamics simulations in conjunction with multiscale shock technique (MSST) are performed to study the initial chemical processes and the anisotropy of shock sensitivity of the condensed-phase HMX under shock loadings applied along the a, b, and c lattice vectors. A self-consistent charge density-functional tight-binding (SCC-DFTB) method was employed. Our results show that there is a difference between lattice vector a (or c) and lattice vector b in the response to a shock wave velocity of 11 km/s, which is investigated through reaction temperature and relative sliding rate between adjacent slipping planes. The response along lattice vectors a and c are similar to each other, whose reaction temperature is up to 7000 K, but quite different along lattice vector b, whose reaction temperature is only up to 4000 K. When compared with shock wave propagation along the lattice vectors a (18 Å/ps) and c (21 Å/ps), the relative sliding rate between adjacent slipping planes along lattice vector b is only 0.2 Å/ps. Thus, the small relative sliding rate between adjacent slipping planes results in the temperature and energy under shock loading increasing at a slower rate, which is the main reason leading to less sensitivity under shock wave compression along lattice vector b. In addition, the C-H bond dissociation is the primary pathway for HMX decomposition in early stages under high shock loading from various directions. Compared with the observation for shock velocities V(imp) = 10 and 11 km/s, the homolytic cleavage of N-NO2 bond was obviously suppressed with increasing pressure.

  16. Multi-color incomplete Cholesky conjugate gradient methods for vector computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poole, E.L.

    1986-01-01

    This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse symmetric positive definite matrix with non-zero elements lying only along a few diagonals of the matrix. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p) length vector operations in both the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-colored matrix is a matrix that can be partitioned into a p x p block matrix where the diagonal blocks are diagonal matrices. The matrix is stored by diagonals and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, then some of the overhead associated with vector startups can be eliminated in the matrix-vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.

  17. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models by using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016]. In particular, new results from DAH-MSM modeling and prediction of Arctic sea ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.

  18. Coherent vorticity extraction in resistive drift-wave turbulence: Comparison of orthogonal wavelets versus proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B

    2011-01-01

    We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.
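
    Since POD of a snapshot ensemble reduces to an SVD of the snapshot matrix, the retained-energy bookkeeping used in such comparisons can be sketched as follows, with synthetic low-rank snapshots standing in for the drift-wave vorticity fields; all names and sizes are illustrative.

    ```python
    import numpy as np

    # snapshot matrix: each column is one flow field flattened in space
    rng = np.random.default_rng(3)
    n_space, n_snap = 4096, 200
    snapshots = (rng.standard_normal((n_space, 5)) @ rng.standard_normal((5, n_snap))
                 + 0.1 * rng.standard_normal((n_space, n_snap)))

    # POD modes are the left singular vectors; singular values rank the energy
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 0.99)) + 1     # modes for 99% of the energy
    coherent = U[:, :n_modes] @ np.diag(s[:n_modes]) @ Vt[:n_modes, :]
    incoherent = snapshots - coherent
    print(n_modes, np.linalg.norm(incoherent) / np.linalg.norm(snapshots))
    ```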

  19. Atomic-batched tensor decomposed two-electron repulsion integrals

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-01

    We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which gained some attraction in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. On the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show a good accuracy and that it is not limited to small systems.

  20. Atomic-batched tensor decomposed two-electron repulsion integrals.

    PubMed

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-07

    We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which gained some attraction in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. On the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show a good accuracy and that it is not limited to small systems.

  1. Lossless and Sufficient - Invariant Decomposition of Deterministic Target

    NASA Astrophysics Data System (ADS)

    Paladini, Riccardo; Ferro Famil, Laurent; Pottier, Eric; Martorella, Marco; Berizzi, Fabrizio

    2011-03-01

    The symmetric radar scattering matrix of a reciprocal target is projected onto the circular polarization basis and is decomposed into four orientation-invariant parameters, a relative phase, and a relative orientation. The physical interpretation of these results is found in the wave-particle nature of radar scattering due to the circular polarization nature of elemental packets of energy. The proposed decomposition is based on a left-orthogonal-to-left Special Unitary basis, providing the target description in terms of a unitary vector. A comparison between the proposed CTD and the Cameron, Kennaugh and Krogager decompositions is also pointed out. A validation by the use of both anechoic chamber data and airborne EMISAR data of DTU is used to show the effectiveness of this decomposition for the analysis of coherent targets. In the second paper we will show the application of the rotation group U(3) for the decomposition of distributed targets into nine meaningful parameters.

  2. Scale-invariant streamline equations and strings of singular vorticity for perturbed anisotropic solutions of the Navier-Stokes equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Libin, A., E-mail: a_libin@netvision.net.il

    2012-12-15

    A linear combination of a pair of dual anisotropic decaying Beltrami flows with spatially constant amplitudes (the Trkal solutions) with the same eigenvalue of the curl operator and of a constant velocity vector orthogonal to the Beltrami pair yields a triplet solution of the force-free Navier-Stokes equation. Amplitudes that are slightly variable in space (large-scale perturbations) yield the emergence of a time-dependent phase between the dual Beltrami flows and of the upward velocity, which are unstable at large values of the Reynolds number. They also lead to the formation of large-scale curved prisms of streamlines with edges being the strings of singular vorticity.

  3. Shape functions for velocity interpolation in general hexahedral cells

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.

    2002-01-01

    Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.

  4. Impacts of El Niño and El Niño Modoki on the precipitation in Colombia

    NASA Astrophysics Data System (ADS)

    Córdoba Machado, Samir; Palomino Lemus, Reiner; Raquel Gámiz Fortis, Sonia; Castro Díez, Yolanda; Jesús Esteban Parra, María

    2015-04-01

    The influence of the tropical Pacific SST on precipitation in Colombia is examined using 341 stations covering the period 1979-2009. Through a Singular Value Decomposition (SVD) the two main coupled variability modes show SST patterns clearly associated with El Niño (EN) and El Niño Modoki (ENM), respectively, presenting great coupling strength with the corresponding seasonal precipitation modes in Colombia. The results reveal that, mainly in winter and summer, EN and ENM events are associated with a significant rainfall decrease over northern, central, and western Colombia. The opposite effect occurs in some localities during spring, summer, and autumn. The southwestern region of Colombia exhibits an opposite behaviour connected to EN and ENM events during years when both events do not coexist, showing that the seasonal precipitation response is not linear. The Partial Regression Analysis used to quantify separately the influence of the two types of ENSO on seasonal precipitation shows the importance of both types in the reconstruction process. The results obtained in this study establish the base for modeling and forecasting the seasonal precipitation in Colombia using the tropical Pacific SST associated with El Niño and El Niño Modoki. Keywords: Seasonal precipitation, Tropical Pacific SST, El Niño, El Niño Modoki, Singular Value Decomposition, Colombia. ACKNOWLEDGEMENTS This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).

  5. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    PubMed Central

    Maadooliat, Mehdi; Huang, Jianhua Z.

    2013-01-01

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence–structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/∼madoliat/LagSVD) that can be used to produce informative animations. PMID:22926831

  6. Explosion Source Similarity Analysis via SVD

    NASA Astrophysics Data System (ADS)

    Yedlin, Matthew; Ben Horin, Yochai; Margrave, Gary

    2016-04-01

    An important seismological ingredient for establishing a regional seismic nuclear discriminant is the similarity analysis of a sequence of explosion sources. To investigate source similarity, we are fortunate to have access to a sequence of 1805 three-component recordings of quarry blasts, shot from March 2002 to January 2015. The centroid of these blasts has an estimated location of 36.3E and 29.9N. All blasts were detonated by JPMC (Jordan Phosphate Mines Co.). All data were recorded at the Israeli NDC station HFRI, located at 30.03N and 35.03E. Data were first winnowed based on the distribution of maximum amplitudes in the neighborhood of the P-wave arrival. The winnowed data were then detrended using the algorithm of Cleveland et al. (1990). The detrended data were bandpass filtered between 0.1 and 12 Hz using an eighth-order Butterworth filter. Finally, data were sorted based on maximum trace amplitude. Two similarity analysis approaches were used. First, for each component, the entire suite of traces was decomposed into its eigenvector representation by employing singular value decomposition (SVD). The data were then reconstructed using 10 percent of the singular values, with the resulting enhancement of the S-wave and surface wave arrivals. The results of this first method are then compared to the second analysis method, based on the eigenface decomposition analysis of Turk and Pentland (1991). While both methods yield similar results in enhancement of data arrivals and reduction of data redundancy, more analysis is required to calibrate the recorded data to charge size, a quantity that was not available for the current study. References: Cleveland, R. B., Cleveland, W. S., McRae, J. E., and Terpenning, I., STL: A seasonal-trend decomposition procedure based on loess, Journal of Official Statistics, 6, No. 1, 3-73, 1990. Turk, M. and Pentland, A., Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3(1), 71-86, 1991.
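
    The rank-reduction step described above (reconstructing the trace ensemble from roughly 10 percent of the singular values) can be sketched as follows; the synthetic wavelet ensemble is purely illustrative and is not the JPMC data.

    ```python
    import numpy as np

    def svd_reconstruct(traces, keep_fraction=0.10):
        """Low-rank reconstruction of a trace ensemble (rows = recordings),
        retaining only the largest keep_fraction of singular values."""
        U, s, Vt = np.linalg.svd(traces, full_matrices=False)
        k = max(1, int(round(keep_fraction * s.size)))
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # synthetic ensemble: a common "source wavelet" with varying amplitude plus noise
    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 1.0, 1000)
    wavelet = np.exp(-200 * (t - 0.3) ** 2) * np.sin(60 * t)
    ensemble = np.array([wavelet * rng.uniform(0.5, 1.5)
                         + 0.2 * rng.standard_normal(t.size) for _ in range(300)])
    enhanced = svd_reconstruct(ensemble, keep_fraction=0.10)   # coherent arrivals enhanced
    ```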

  7. Characterizing omega-limit sets which are closed orbits

    NASA Astrophysics Data System (ADS)

    Bautista, S.; Morales, C.

    Let X be a vector field in a compact n-manifold M, n⩾2. Given Σ⊂M we say that q∈M satisfies (P)_Σ if the closure of the positive orbit of X through q does not intersect Σ, but there is an open interval I with q as a boundary point such that every positive orbit through I intersects Σ. Among those q having a saddle-type hyperbolic omega-limit set ω(q), the ones with ω(q) being a closed orbit satisfy (P)_Σ for some closed subset Σ. The converse is true for n=2 but not for n⩾4. Here we prove the converse for n=3. Moreover, we prove for n=3 that if ω(q) is a singular-hyperbolic set [C. Morales, M. Pacifico, E. Pujals, On C robust singular transitive sets for three-dimensional flows, C. R. Acad. Sci. Paris Sér. I 26 (1998) 81-86], [C. Morales, M. Pacifico, E. Pujals, Robust transitive singular sets for 3-flows are partially hyperbolic attractors or repellers, Ann. of Math. (2) 160 (2) (2004) 375-432], then ω(q) is a closed orbit if and only if q satisfies (P)_Σ for some closed Σ. This result improves [S. Bautista, Sobre conjuntos hiperbólicos-singulares (On singular-hyperbolic sets), thesis, Universidade Federal do Rio de Janeiro, 2005 (in Portuguese)] and [C. Morales, M. Pacifico, Mixing attractors for 3-flows, Nonlinearity 14 (2001) 359-378].

  8. Rationality of moduli space of torsion-free sheaves over reducible curve

    NASA Astrophysics Data System (ADS)

    Dey, Arijit; Suhas, B. N.

    2018-06-01

    Let M(2, w̲, χ) be the moduli space of rank 2 torsion-free sheaves of fixed determinant and odd Euler characteristic over a reducible nodal curve with each irreducible component having at most two nodal singularities. We show that in each irreducible component of M(2, w̲, χ), the closure of the locus of rank 2 vector bundles is rational.

  9. Diametrical clustering for identifying anti-correlated gene clusters.

    PubMed

    Dhillon, Inderjit S; Marcotte, Edward M; Roshan, Usman

    2003-09-01

    Clustering genes based upon their expression patterns allows us to predict gene function. Most existing clustering algorithms cluster genes together when their expression patterns show high positive correlation. However, it has been observed that genes whose expression patterns are strongly anti-correlated can also be functionally similar. Biologically, this is not unintuitive: genes responding to the same stimuli, regardless of the nature of the response, are more likely to operate in the same pathways. We present a new diametrical clustering algorithm that explicitly identifies anti-correlated clusters of genes. Our algorithm proceeds by iteratively (i) re-partitioning the genes and (ii) computing the dominant singular vector of each gene cluster, with each singular vector serving as the prototype of a 'diametric' cluster. We empirically show the effectiveness of the algorithm in identifying diametrical or anti-correlated clusters. Testing the algorithm on yeast cell cycle data, fibroblast gene expression data, and DNA microarray data from yeast mutants reveals that opposed cellular pathways can be discovered with this method. We present systems whose mRNA expression patterns, and likely their functions, oppose the yeast ribosome and proteasome, along with evidence for the inverse transcriptional regulation of a number of cellular systems.
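
    A minimal sketch of the iterative scheme described above: each gene is assigned to the cluster whose dominant singular vector it projects onto most strongly in squared magnitude, so correlated and anti-correlated profiles land together, and the prototypes are then refreshed by SVD. The function name and toy data are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def diametrical_clustering(X, k, n_iter=50, seed=0):
        """Cluster rows of X (genes x conditions) so that strongly correlated and
        strongly anti-correlated profiles share a cluster; each cluster is
        summarized by the dominant singular vector of its member rows."""
        rng = np.random.default_rng(seed)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
        prototypes = rng.standard_normal((k, X.shape[1]))
        prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iter):
            labels = np.argmax((X @ prototypes.T) ** 2, axis=1)   # (i) re-partition
            for j in range(k):                                    # (ii) dominant singular vector
                members = X[labels == j]
                if len(members):
                    _, _, Vt = np.linalg.svd(members, full_matrices=False)
                    prototypes[j] = Vt[0]
        return labels, prototypes

    # toy data: three expression programs, each present with both signs
    rng = np.random.default_rng(7)
    base = rng.standard_normal((3, 40))
    genes = np.vstack([s * base[i] + 0.3 * rng.standard_normal(40)
                       for i in range(3) for s in (+1, -1) for _ in range(20)])
    labels, _ = diametrical_clustering(genes, k=3)
    ```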

  10. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.

  11. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  12. Computation at a coordinate singularity

    NASA Astrophysics Data System (ADS)

    Prusa, Joseph M.

    2018-05-01

    Coordinate singularities are sometimes encountered in computational problems. An important example involves global atmospheric models used for climate and weather prediction. Classical spherical coordinates can be used to parameterize the manifold - that is, generate a grid for the computational spherical shell domain. This particular parameterization offers significant benefits such as orthogonality and exact representation of curvature and connection (Christoffel) coefficients. But it also exhibits two polar singularities and at or near these points typical continuity/integral constraints on dependent fields and their derivatives are generally inadequate and lead to poor model performance and erroneous results. Other parameterizations have been developed that eliminate polar singularities, but problems of weaker singularities and enhanced grid noise compared to spherical coordinates (away from the poles) persist. In this study reparameterization invariance of geometric objects (scalars, vectors and the forms generated by their covariant derivatives) is utilized to generate asymptotic forms for dependent fields of interest valid in the neighborhood of a pole. The central concept is that such objects cannot be altered by the metric structure of a parameterization. The new boundary conditions enforce symmetries that are required for transformations of geometric objects. They are implemented in an implicit polar filter of a structured grid, nonhydrostatic global atmospheric model that is simulating idealized Held-Suarez flows. A series of test simulations using different configurations of the asymptotic boundary conditions are made, along with control simulations that use the default model numerics with no absorber, at three different grid sizes. Typically the test simulations are ∼ 20% faster in wall clock time than the control-resulting from a decrease in noise at the poles in all cases. In the control simulations adverse numerical effects from the polar singularity are observed to increase with grid resolution. In contrast, test simulations demonstrate robust polar behavior independent of grid resolution.

  13. Spectral and entropic characterizations of Wigner functions: applications to model vibrational systems.

    PubMed

    Luzanov, A V

    2008-09-07

    The Wigner function for the pure quantum states is used as an integral kernel of the non-Hermitian operator K, to which the standard singular value decomposition (SVD) is applied. It provides a set of the squared singular values treated as probabilities of the individual phase-space processes, the latter being described by eigenfunctions of KK(+) (for coordinate variables) and K(+)K (for momentum variables). Such a SVD representation is employed to obviate the well-known difficulties in the definition of the phase-space entropy measures in terms of the Wigner function that usually allows negative values. In particular, the new measures of nonclassicality are constructed in the form that automatically satisfies additivity for systems composed of noninteracting parts. Furthermore, the emphasis is given on the geometrical interpretation of the full entropy measure as the effective phase-space volume in the Wigner picture of quantum mechanics. The approach is exemplified by considering some generic vibrational systems. Specifically, for eigenstates of the harmonic oscillator and a superposition of coherent states, the singular value spectrum is evaluated analytically. Numerical computations are given for the nonlinear problems (the Morse and double well oscillators, and the Henon-Heiles system). We also discuss the difficulties in implementation of a similar technique for electronic problems.

  14. Use of principle velocity patterns in the analysis of structural acoustic optimization.

    PubMed

    Johnson, Wayne M; Cunefare, Kenneth A

    2007-02-01

    This work presents an application of principle velocity patterns in the analysis of the structural acoustic design optimization of an eight-ply composite cylindrical shell. The approach consists of performing structural acoustic optimizations of a composite cylindrical shell subject to external harmonic monopole excitation. The ply angles are used as the design variables in the optimization. The results of the ply-angle design variable formulation are interpreted using the singular value decomposition of the interior acoustic potential energy. The decomposition of the acoustic potential energy provides surface velocity patterns associated with lower levels of interior noise. These surface velocity patterns are shown to correspond to those from the structural acoustic optimization results. Thus, it is demonstrated that the capacity to design multi-ply composite cylinders for quiet interiors is determined by how well the cylinder can be designed to exhibit particular surface velocity patterns associated with lower noise levels.

  15. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer demodulated components while simultaneously discovering the number of the target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to methods based on generalised demodulation with singular value decomposition, parametric time-frequency analysis with filtering, and empirical mode decomposition in recovering the amplitude and phase of superimposed components.

  16. Extended vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories where five degrees of freedom can propagate, corresponding to three for massive vector modes and two for massless tensor modes. We find that the generalized Proca and the beyond-generalized-Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.

  17. Theoretical aspects of fracture mechanics

    NASA Astrophysics Data System (ADS)

    Atkinson, C.; Craster, R. V.

    1995-03-01

    In this review we try to cover various topics in fracture mechanics in which mathematical analysis can be used both to aid numerical methods and cast light on key features of the stress field. The dominant singular near crack tip stress field can often be parametrized in terms of three parameters K_I, K_II and K_III designating three fracture modes each having an angular variation entirely specified for the stress tensor and displacement vector. These results and contact zone models for removing the interpenetration anomaly are described. Generalizations of the above results to viscoelastic media are described. For homogeneous media with constant Poisson's ratio the angular variation of singular crack tip stresses and displacements are shown to be the same for all time and the same inverse square root singularity as occurs in the elastic medium case is found (this being true for a time varying Poisson ratio too). Only the stress intensity factor varies through time dependence of loads and relaxation properties of the medium. For cracks against bimaterial interfaces both the stress singularity and angular form evolve with time as a function of the time dependent properties of the bimaterial. Similar behavior is identified for sharp notches in viscoelastic plates. The near crack tip behavior in material with non-linear stress strain laws is also identified and stress singularities classified in terms of the hardening exponent for power law hardening materials. Again for interface cracks the near crack tip behavior requires careful analysis and it is shown that more than one singular term may be present in the near crack tip stress field. A variety of theory and applications is presented for inhomogeneous elastic media, coupled thermoelasticity etc. Methods based on reciprocal theorems and dual functions which can also aid in getting awkward singular stress behavior from numerical solutions are also reviewed. Finally theoretical calculations of fiber reinforced and particulate composite toughening mechanisms are briefly reviewed.

  18. Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.

    PubMed

    Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem

    2006-06-01

    Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data to classify EEG according to which mental task is being performed. Examples are presented of the filtering of various artifacts and results are shown of classification of EEG from five mental tasks using committees of decision trees.
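
    A common way to realize the signal-to-noise quotient maximization is through a symmetric generalized eigenproblem, which is closely related to (though not the same as) the generalized singular-value decomposition used above; the sketch below makes that substitution explicit, and the function name, array shapes, and synthetic usage are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def snr_components(signal_epochs, noise_epochs, n_keep=4):
        """Spatial filters w maximizing (w.T S w) / (w.T N w), obtained from the
        generalized eigenproblem S w = lambda N w.
        Inputs have shape (n_epochs, n_channels, n_samples)."""
        def covariance(epochs):
            X = np.concatenate([e - e.mean(axis=1, keepdims=True) for e in epochs], axis=1)
            return X @ X.T / X.shape[1]
        S, N = covariance(signal_epochs), covariance(noise_epochs)
        evals, evecs = eigh(S, N)               # ascending eigenvalues
        return evecs[:, ::-1][:, :n_keep]       # largest-quotient filters first

    # filters project multichannel EEG onto "clean" components; the smallest-quotient
    # components can instead be projected out to remove artifacts before classification
    rng = np.random.default_rng(5)
    W = snr_components(rng.standard_normal((20, 8, 128)), rng.standard_normal((20, 8, 128)))
    print(W.shape)
    ```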

  19. Noncolocated Time-Reversal MUSIC: High-SNR Distribution of Null Spectrum

    NASA Astrophysics Data System (ADS)

    Ciuonzo, Domenico; Rossi, Pierluigi Salvo

    2017-04-01

    We derive the asymptotic distribution of the null spectrum of the well-known Multiple Signal Classification (MUSIC) algorithm in its computational Time-Reversal (TR) form. The result pertains to a single-frequency non-colocated multistatic scenario, and several TR-MUSIC variants are investigated here. The analysis builds upon the 1st-order perturbation of the singular value decomposition and allows a simple characterization of null-spectrum moments (up to the 2nd order). This enables a comparison in terms of spectrum stability. Finally, a numerical analysis is provided to confirm the theoretical findings.

  20. Notes on implementation of Coulomb friction in coupled dynamical simulations

    NASA Technical Reports Server (NTRS)

    Vandervoort, R. J.; Singh, R. P.

    1987-01-01

    A coupled dynamical system is defined as an assembly of rigid/flexible bodies that may be coupled by kinematic connections. The interfaces between bodies are modeled using hinges having 0 to 6 degrees of freedom. The equations of motion are presented for a mechanical system of n flexible bodies in a topological tree configuration. The Lagrange form of the D'Alembert principle was employed to derive the equations. The equations of motion are augmented by the kinematic constraint equations. This augmentation is accomplished via the method of singular value decomposition.

  1. A trade-off between model resolution and variance with selected Rayleigh-wave data

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Xu, Y.

    2008-01-01

    Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. First, we employed a data-resolution matrix to select data that would be well predicted and to explain advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher mode data to estimate S-wave velocity structure. Discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and higher mode data are normally more accurately predicted than fundamental mode data because of restrictions on the data kernel for the inversion system. Second, we obtained an optimal damping vector in the vicinity of an inverted model by the singular value decomposition of a trade-off function of model resolution and variance. At the end of the paper, we used a real-world example to demonstrate that selected data with the data-resolution matrix can provide better inversion results and to explain with the data-resolution matrix why incorporating higher mode data in inversion can provide better results. We also calculated model-resolution matrices of these examples to show the potential of increasing model resolution with selected surface-wave data. With the optimal damping vector, we can improve and assess an inverted model obtained by a damped least-squares method.
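
    For a linearized kernel G, both the data-resolution and model-resolution matrices follow directly from the SVD with damped filter factors; the sketch below is a generic illustration with a random matrix standing in for the Rayleigh-wave sensitivity kernel, and the damping value is arbitrary.

    ```python
    import numpy as np

    def resolution_matrices(G, damping=0.0):
        """Data- and model-resolution matrices for the damped least-squares
        (generalized) inverse of a kernel G."""
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        f = s**2 / (s**2 + damping**2)          # filter factors
        data_res = U @ np.diag(f) @ U.T         # N = G G^-g: how well each datum is predicted
        model_res = Vt.T @ np.diag(f) @ Vt      # R = G^-g G: how well each parameter is resolved
        return data_res, model_res

    G = np.random.default_rng(6).standard_normal((30, 12))   # stand-in kernel
    N, R = resolution_matrices(G, damping=0.5)
    print(np.diag(N)[:5])   # diagonal flags which data would be well predicted
    ```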

  2. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738

  3. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  4. Two-order parameters theory of the metal-insulator phase transition kinetics in the magnetic field

    NASA Astrophysics Data System (ADS)

    Dubovskii, L. B.

    2018-05-01

    The metal-insulator phase transition is considered within the framework of the Ginzburg-Landau approach for a phase transition described by two coupled order parameters. One of the order parameters is the mass density, whose variation is responsible for the origin of a nonzero overlap of the two different electron bands and the appearance of free electron carriers. This transition is assumed to be of first order. The free electron carriers are described with a vector function representing the second order parameter, which is responsible for the continuous phase transition. This order parameter largely determines the physical properties of the metal-insulator transition and leads to a singularity of the surface tension at the metal-insulator interface. The magnetic field is then included in the consideration of the system. The magnetic field leads to new singularities of the surface tension at the metal-insulator interface and results in a drastic variation of the phase transition kinetics. A strong singularity in the surface tension results from the Landau diamagnetism and determines anomalous features of the metal-insulator transition kinetics.

  5. Quasinormal Frequencies of D-Dimensional Schwarzschild Black Holes: Evaluation via Continued Fraction Method

    NASA Astrophysics Data System (ADS)

    Rostworowski, A.

    2007-01-01

    We adopt Leaver's method [E. Leaver, Proc. R. Soc. Lond. A402, 285 (1985)] to determine quasinormal frequencies of the Schwarzschild black hole in higher (D ≥ 10) dimensions. In the D-dimensional Schwarzschild metric, when D increases, more and more singularities, spaced uniformly on the unit circle |r| = 1, approach the horizon at r = r_h = 1. Thus, a solution satisfying the outgoing-wave boundary condition at the horizon must be continued to some midpoint, and only then can the continued-fraction condition be applied. This prescription is general and applies to all cases for which, due to regular singularities on the way from the point of interest to the irregular singularity, Leaver's method in its original setting breaks down. We illustrate the method by calculating gravitational vector and tensor quasinormal frequencies of the Schwarzschild black hole in D = 11 and D = 10 dimensions. We also give the details for the D = 9 case, considered in the work of P. Bizoń, T. Chmaj, A. Rostworowski, B. G. Schmidt and Z. Tabor [Phys. Rev. D72, 121502(R) (2005)].

  6. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different than those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  7. The presence of a phantom field in a Randall–Sundrum scenario

    NASA Astrophysics Data System (ADS)

    Acuña-Cárdenas, Rubén O.; Astorga-Moreno, J. A.; García-Aspeitia, Miguel A.; López-Domínguez, J. C.

    2018-02-01

    The presence of phantom dark energy in brane-world cosmology generates important new effects, causing a premature big rip singularity when the presence of extra dimensions is increased, and competing considerably with the other components of our Universe. This article first considers only a field with the characteristic equation of state ω < -1 and then the explicit form of a scalar field with a potential that has a maximum (with the aim of avoiding a big rip singularity). In both cases we study the dynamics robustly through dynamical systems analysis, considering in detail quantities such as the deceleration parameter q and the vector field associated with the dynamical system. Results are discussed with the purpose of treating the cosmology with a phantom field as dark energy in a Randall–Sundrum scenario.

  8. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by the biomedical engineering problem of early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for locating small anomalies from S-parameters. We considered the application of MUSIC to functional imaging in which only a small number of dipole antennas are used. Our approach is based on the Born approximation or physical factorization. We analyzed the cases in which the anomaly is small or large relative to the wavelength, and linked the structure of the left-singular vectors to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
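
    As a rough illustration of the subspace step described above, the sketch below builds a synthetic multi-static response matrix under the Born approximation, takes its SVD, and scans a normalized test vector against the signal subspace spanned by the left-singular vectors associated with the nonzero singular values. It is a generic MUSIC sketch in NumPy, not the authors' S-parameter implementation; the circular antenna layout, the two anomaly positions, and the simplified 2-D Green's-function surrogate green() are hypothetical choices made only to keep the example self-contained.

        import numpy as np

        rng = np.random.default_rng(1)
        k0 = 2 * np.pi                     # wavenumber for a unit wavelength (hypothetical)

        # Hypothetical layout: a small number of dipole antennas on a circle of radius 5
        n_ant = 16
        ang = 2 * np.pi * np.arange(n_ant) / n_ant
        antennas = 5.0 * np.column_stack([np.cos(ang), np.sin(ang)])

        def green(points, r):
            # Simplified 2-D free-space Green's-function surrogate (constants dropped)
            d = np.linalg.norm(points - r, axis=1)
            return np.exp(1j * k0 * d) / np.sqrt(d)

        # Two small anomalies inside the array (hypothetical locations)
        anomalies = np.array([[1.0, 0.5], [-1.5, -1.0]])

        # Born-approximation MSR matrix: sum of outer products of Green's vectors, plus noise
        K = np.zeros((n_ant, n_ant), dtype=complex)
        for r in anomalies:
            g = green(antennas, r)
            K += np.outer(g, g)
        K += 1e-3 * (rng.standard_normal(K.shape) + 1j * rng.standard_normal(K.shape))

        # Signal subspace = left-singular vectors linked to the nonzero singular values
        U, s, _ = np.linalg.svd(K)
        Us = U[:, :len(anomalies)]

        # MUSIC pseudo-spectrum on a search grid; peaks mark the anomaly locations
        xs = np.linspace(-3.0, 3.0, 121)
        spectrum = np.zeros((xs.size, xs.size))
        for i, x in enumerate(xs):
            for j, y in enumerate(xs):
                g = green(antennas, np.array([x, y]))
                g = g / np.linalg.norm(g)
                resid = g - Us @ (Us.conj().T @ g)   # component in the noise subspace
                spectrum[i, j] = 1.0 / np.linalg.norm(resid)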

  9. Attitude control with realization of linear error dynamics

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Bach, Ralph E.

    1993-01-01

    An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.

  10. A new approach for solving seismic tomography problems and assessing the uncertainty through the use of graph theory and direct methods

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Davis, T. A.

    2016-12-01

    Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
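
    The role a sparse SVD plays in this kind of workflow can be illustrated with a toy linear system. The sketch below is not the graph-theoretic decomposition described above; it only shows how a truncated set of singular triplets of a sparse sensitivity matrix yields a regularized least-squares model and the associated model resolution matrix. The matrix G, the data d, and the cutoff k are synthetic placeholders.

        import numpy as np
        from scipy.sparse import random as sparse_random
        from scipy.sparse.linalg import svds

        rng = np.random.default_rng(0)

        # Hypothetical sparse sensitivity (ray-path) matrix G and noisy travel-time data d
        G = sparse_random(500, 200, density=0.02, format="csr", random_state=0)
        m_true = rng.standard_normal(200)
        d = G @ m_true + 0.01 * rng.standard_normal(500)

        # Keep only the k largest singular triplets: truncated-SVD regularization
        k = 50
        U, s, Vt = svds(G, k=k)
        order = np.argsort(s)[::-1]          # svds does not return a guaranteed order
        U, s, Vt = U[:, order], s[order], Vt[order, :]

        m_est = Vt.T @ ((U.T @ d) / s)       # regularized least-squares model estimate
        R = Vt.T @ Vt                        # model resolution matrix of the truncated solution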

  11. Introduction to Adjoint Models

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
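
    As a concrete, toy-sized illustration of the relationships mentioned in the lecture (tangent linear model, adjoint, and singular vectors), the sketch below differentiates a hypothetical two-variable nonlinear model by hand, checks the defining adjoint identity <L dx, dy> = <dx, L^T dy>, and extracts the leading singular vector of the tangent linear propagator. It is only a schematic example, not material from the lecture itself.

        import numpy as np

        def model(x):
            # Hypothetical two-variable nonlinear "parent" model (single step)
            return np.array([x[0] * x[1], np.sin(x[1]) + 0.1 * x[0] ** 2])

        def tangent_linear(x):
            # Jacobian of model() at state x: the tangent linear propagator L
            return np.array([[x[1],       x[0]],
                             [0.2 * x[0], np.cos(x[1])]])

        x0 = np.array([0.7, 1.3])
        L = tangent_linear(x0)

        # Adjoint test: <L dx, dy> must equal <dx, L^T dy> for any perturbations dx, dy
        rng = np.random.default_rng(0)
        dx, dy = rng.standard_normal(2), rng.standard_normal(2)
        assert np.isclose((L @ dx) @ dy, dx @ (L.T @ dy))

        # Leading (initial-time) singular vector: fastest-growing perturbation over the window
        _, s, Vt = np.linalg.svd(L)
        v1, growth = Vt[0], s[0]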

  12. Singular vector-based targeted observations of chemical constituents: description and first application of the EURAD-IM-SVA v1.0

    NASA Astrophysics Data System (ADS)

    Goris, N.; Elbern, H.

    2015-12-01

    Measurements of the large-dimensional chemical state of the atmosphere provide only sparse snapshots of the state of the system due to their typically insufficient temporal and spatial density. In order to optimize the measurement configurations despite those limitations, the present work describes the identification of sensitive states of the chemical system as optimal target areas for adaptive observations. For this purpose, the technique of singular vector analysis (SVA), which has proven effective for targeted observations in numerical weather prediction, is implemented in the EURAD-IM (EURopean Air pollution and Dispersion - Inverse Model) chemical transport model, yielding the EURAD-IM-SVA v1.0. Besides initial values, emissions are investigated as critical targeting variables that control the simulation. For both variants, singular vectors are applied to determine the optimal placement for observations and, moreover, to quantify which chemical compounds should be observed with preference. Based on measurements from the airship-based ZEPTER-2 campaign, the EURAD-IM-SVA v1.0 has been evaluated by conducting a comprehensive set of model runs involving different initial states and simulation lengths. For the sake of brevity, we concentrate our attention on the following chemical compounds: O3, NO, NO2, HCHO, CO, HONO, and OH, and focus on their influence on selected O3 profiles. Our analysis shows that the optimal placement for observations of chemical species is not entirely determined by mere transport and mixing processes. Rather, a combination of initial chemical concentrations, chemical conversions, and meteorological processes determines the influence of chemical compounds and regions. We furthermore demonstrate that the optimal placement of observations of emission strengths is highly dependent on the location of emission sources, and that the benefit of including emissions as target variables outperforms the value of initial-value optimization as the simulation length grows. The obtained results confirm the benefit of considering both initial values and emission strengths as target variables and of applying the EURAD-IM-SVA v1.0 for measurement decision guidance with respect to chemical compounds.

  13. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  14. Multilayer neural networks for reduced-rank approximation.

    PubMed

    Diamantaras, K I; Kung, S Y

    1994-01-01

    This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation; therefore they cannot be applied to this case.
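
    For the special cases named above in which the problem collapses to plain SVD/PCA (for example, the auto-association network with a whitened or identity input autocorrelation), the optimal reduced-rank map is given by the Eckart-Young theorem: keep the leading singular triplets. The sketch below covers only that special case; it does not implement the paper's generalized SVD treatment, the deflation scheme, or the LON network, and the data matrix is a hypothetical placeholder.

        import numpy as np

        def best_rank_r(X, r):
            # Best rank-r approximation of X in the least-squares (Frobenius) sense
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r, :]

        # A linear auto-association "network" with r hidden units can do no better than this bound
        X = np.random.default_rng(0).standard_normal((20, 100))   # hypothetical data matrix
        err = np.linalg.norm(X - best_rank_r(X, r=5))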

  15. Cross-language information retrieval using PARAFAC2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Chew, Peter; Abdelali, Ahmed

    A standard approach to cross-language information retrieval (CLIR) uses Latent Semantic Analysis (LSA) in conjunction with a multilingual parallel aligned corpus. This approach has been shown to be successful in identifying similar documents across languages - or more precisely, retrieving the most similar document in one language to a query in another language. However, the approach has severe drawbacks when applied to a related task, that of clustering documents 'language-independently', so that documents about similar topics end up closest to one another in the semantic space regardless of their language. The problem is that documents are generally more similar to other documents in the same language than they are to documents in a different language, but on the same topic. As a result, when using multilingual LSA, documents will in practice cluster by language, not by topic. We propose a novel application of PARAFAC2 (which is a variant of PARAFAC, a multi-way generalization of the singular value decomposition [SVD]) to overcome this problem. Instead of forming a single multilingual term-by-document matrix which, under LSA, is subjected to SVD, we form an irregular three-way array, each slice of which is a separate term-by-document matrix for a single language in the parallel corpus. The goal is to compute an SVD for each language such that V (the matrix of right singular vectors) is the same across all languages. Effectively, PARAFAC2 imposes the constraint, not present in standard LSA, that the 'concepts' in all documents in the parallel corpus are the same regardless of language. Intuitively, this constraint makes sense, since the whole purpose of using a parallel corpus is that exactly the same concepts are expressed in the translations. We tested this approach by comparing the performance of PARAFAC2 with standard LSA in solving a particular CLIR problem. From our results, we conclude that PARAFAC2 offers a very promising alternative to LSA not only for multilingual document clustering, but also for solving other problems in cross-language information retrieval.

  16. Lefschetz thimbles in fermionic effective models with repulsive vector-field

    NASA Astrophysics Data System (ADS)

    Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira

    2018-06-01

    We discuss two problems in complexified auxiliary fields in fermionic effective models, the auxiliary sign problem associated with the repulsive vector-field and the choice of the cut for the scalar field appearing from the logarithmic function. In the fermionic effective models with attractive scalar and repulsive vector-type interaction, the auxiliary scalar and vector fields appear in the path integral after the bosonization of fermion bilinears. When we make the path integral well-defined by the Wick rotation of the vector field, the oscillating Boltzmann weight appears in the partition function. This "auxiliary" sign problem can be solved by using the Lefschetz-thimble path-integral method, where the integration path is constructed in the complex plane. Another serious obstacle in the numerical construction of Lefschetz thimbles is caused by singular points and cuts induced by multivalued functions of the complexified scalar field in the momentum integration. We propose a new prescription which fixes gradient flow trajectories on the same Riemann sheet in the flow evolution by performing the momentum integration in the complex domain.

  17. Simplified moment tensor analysis and unified decomposition of acoustic emission source: Application to in situ hydrofracturing test

    NASA Astrophysics Data System (ADS)

    Ohtsu, Masayasu

    1991-04-01

    An application of a moment tensor analysis to acoustic emission (AE) is studied to elucidate crack types and orientations of AE sources. In the analysis, simplified treatment is desirable, because hundreds of AE records are obtained from just one experiment and thus sophisticated treatment is realistically cumbersome. Consequently, a moment tensor inversion based on P wave amplitude is employed to determine six independent tensor components. Selecting only P wave portion from the full-space Green's function of homogeneous and isotropic material, a computer code named SiGMA (simplified Green's functions for the moment tensor analysis) is developed for the AE inversion analysis. To classify crack type and to determine crack orientation from moment tensor components, a unified decomposition of eigenvalues into a double-couple (DC) part, a compensated linear vector dipole (CLVD) part, and an isotropic part is proposed. The aim of the decomposition is to determine the proportion of shear contribution (DC) and tensile contribution (CLVD + isotropic) on AE sources and to classify cracks into a crack type of the dominant motion. Crack orientations determined from eigenvectors are presented as crack-opening vectors for tensile cracks and fault motion vectors for shear cracks, instead of stereonets. The SiGMA inversion and the unified decomposition are applied to synthetic data and AE waveforms detected during an in situ hydrofracturing test. To check the accuracy of the procedure, numerical experiments are performed on the synthetic waveforms, including cases with 10% random noise added. Results show reasonable agreement with assumed crack configurations. Although the maximum error is approximately 10% with respect to the ratios, the differences on crack orientations are less than 7°. AE waveforms detected by eight accelerometers deployed during the hydrofracturing test are analyzed. Crack types and orientations determined are in reasonable agreement with a predicted failure plane from borehole TV observation. The results suggest that tensile cracks are generated first at weak seams and then shear cracks follow on the opened joints.

  18. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.

  19. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigenfeatures are then combined and reconstructed for use in a composite filter, which is consequently utilized for automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation-energy metrics and are presented in this work. The inverse-transformed eigenbases of the current technique may be thought of as an injected sparsity that minimizes the data needed to represent the skeletal data-structure information associated with the set of targets under consideration.
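
    A schematic version of the eigenvector-truncation step described above is sketched below: diagonalize the data covariance, keep a subset of dominant eigenvectors, and project and reconstruct the data with that truncated basis. The block is a generic PCA-style illustration, not the paper's transform-domain filtering or composite-filter construction, and the choice of "dominant" directions is a simplifying assumption.

        import numpy as np

        def truncated_eigen_features(X, r):
            # X: samples x features data matrix for one target class (hypothetical layout)
            mean = X.mean(axis=0)
            Xc = X - mean
            C = np.cov(Xc, rowvar=False)          # data covariance matrix
            w, V = np.linalg.eigh(C)              # eigenvectors give an orthogonal basis
            order = np.argsort(w)[::-1]
            Vr = V[:, order[:r]]                  # truncate: keep r dominant eigenvectors
            features = Xc @ Vr                    # reduced, more robust target features
            recon = features @ Vr.T + mean        # reconstruction for downstream use (e.g., a filter)
            return features, recon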

  20. Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications.

    PubMed

    Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P

    2016-04-13

    An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).

  1. Warps, grids and curvature in triple vector bundles

    NASA Astrophysics Data System (ADS)

    Flari, Magdalini K.; Mackenzie, Kirill

    2018-06-01

    A triple vector bundle is a cube of vector bundle structures which commute in the (strict) categorical sense. A grid in a triple vector bundle is a collection of sections of each bundle structure with certain linearity properties. A grid provides two routes around each face of the triple vector bundle, and six routes from the base manifold to the total manifold; the warps measure the lack of commutativity of these routes. In this paper we first prove that the sum of the warps in a triple vector bundle is zero. The proof we give is intrinsic and, we believe, clearer than the proof using decompositions given earlier by one of us. We apply this result to the triple tangent bundle T^3M of a manifold and deduce (as earlier) the Jacobi identity. We further apply the result to the triple vector bundle T^2A for a vector bundle A using a connection in A to define a grid in T^2A . In this case the curvature emerges from the warp theorem.

  2. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by the QR decomposition approaches.

  3. Multifractal vector fields and stochastic Clifford algebra.

    PubMed

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2015-12-01

    In the mid-1980s, the development of multifractal concepts and techniques was an important breakthrough for complex system analysis and simulation, in particular in turbulence and hydrology. Multifractals indeed aimed to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations or on simplified conceptual models. However, this development has been largely limited to scalar fields, whereas most of the fields of interest are vector-valued or even manifold-valued. We show in this paper that the combination of stable Lévy processes with Clifford algebra is a good candidate to bridge the present gap between theory and applications. We show that it indeed defines a convenient framework to generate multifractal vector fields, possibly multifractal manifold-valued fields, based on a few fundamental and complementary properties of Lévy processes and Clifford algebra. In particular, the vector structure of these algebras is much more tractable than the manifold structure of symmetry groups, while the Lévy stability grants a given statistical universality.

  4. S-matrix decomposition, natural reaction channels, and the quantum transition state approach to reactive scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de; Ellerbrock, Roman, E-mail: roman.ellerbrock@uni-bielefeld.de

    2016-05-28

    A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated studying a prototypical example, the H + CH₄ → H₂ + CH₃ reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.

  5. Can sun-induced chlorophyll fluorescence track diurnal variations of GPP in an evergreen needle leaf forest?

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ryu, Y.; Dechant, B.; Cho, S.; Kim, H. S.; Yang, K.

    2017-12-01

    The emerging technique of remotely sensed sun-induced fluorescence (SIF) has advanced our ability to estimate plant photosynthetic activity at regional and global scales. Continuous observations of SIF and gross primary productivity (GPP) at the canopy scale in evergreen needleleaf forests, however, have not yet been reported in the literature. Here, we report a time series of near-surface measurements of canopy-scale SIF, hyperspectral reflectance and GPP during the senescence period in an evergreen needleleaf forest in South Korea. Mean canopy height was 30 m, and a hyperspectrometer connected to a single fiber and rotating prism, which measures bi-hemispheric irradiance, was installed 20 m above the canopy. SIF was retrieved in the spectral range 740-790 nm at a temporal resolution of 1 min. We tested different SIF retrieval methods, namely Fraunhofer line depth (FLD), the spectral fitting method (SFM) and singular vector decomposition (SVD), against GPP estimated by eddy covariance and against absorbed photosynthetically active radiation (APAR). We found that the SVD-retrieved SIF signal shows linear relationships with GPP (R2 = 0.63) and APAR (R2 = 0.52), while SFM- and FLD-retrieved SIF performed poorly. We suspect the larger influence of atmospheric oxygen absorption between the sensor and canopy might explain why the SFM and FLD methods showed poor results. Data collection will continue, and the relationships between SIF, GPP and APAR will be studied during the senescence period.

  6. Accuracy evaluation of Fourier series analysis and singular spectrum analysis for predicting the volume of motorcycle sales in Indonesia

    NASA Astrophysics Data System (ADS)

    Sasmita, Yoga; Darmawan, Gumgum

    2017-08-01

    This research evaluates the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more exploratory and do not require parametric assumptions. The methods are applied to predicting the monthly volume of motorcycle sales in Indonesia from January 2005 to December 2016. Both models are suitable for data with seasonal and trend components. Technically, FSA describes the time series as the superposition of trend and seasonal components at different frequencies, which are difficult to identify in a pure time-domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. SSA, in turn, has two main stages, decomposition and reconstruction. SSA decomposes the time series into different components, and the reconstruction stage starts by grouping the decomposition results according to the similarity of the period of each component in the trajectory matrix. With the optimum window length (L = 53) and grouping effect (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated using the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that, for the next 12 months, SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, whereas FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, the SSA method, which has the better accuracy, should be used to predict the volume of motorcycle sales in the next period.
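
    The two SSA stages named above (decomposition of the trajectory matrix and reconstruction by grouping) can be sketched compactly. The code below is a generic SSA reconstruction with a simplified grouping rule (keep the leading r components), not the authors' model; the window length L and group count r repeat the values quoted in the abstract only as an example, and the MAPE helper is a generic accuracy measure.

        import numpy as np

        def ssa_reconstruct(x, L=53, r=4):
            x = np.asarray(x, dtype=float)
            N = len(x)
            K = N - L + 1
            # Decomposition: SVD of the trajectory (Hankel) matrix
            X = np.column_stack([x[i:i + L] for i in range(K)])
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            # Grouping (simplified): keep the r leading elementary matrices
            Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]
            # Reconstruction: diagonal (anti-diagonal) averaging back to a series
            rec, cnt = np.zeros(N), np.zeros(N)
            for i in range(L):
                for j in range(K):
                    rec[i + j] += Xr[i, j]
                    cnt[i + j] += 1
            return rec / cnt

        def mape(actual, forecast):
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))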

  7. Analysis and modelling of septic shock microarray data using Singular Value Decomposition.

    PubMed

    Allanki, Srinivas; Dixit, Madhulika; Thangaraj, Paul; Sinha, Nandan Kumar

    2017-06-01

    Because microarrays are a high-throughput technique, enormous amounts of microarray data have been generated, and there is a need for more efficient analysis techniques in terms of speed and accuracy. Finding differentially expressed genes based only on fold change and p-value might not extract all the vital biological signals that occur at lower gene expression levels. Besides this, numerous mathematical models have been generated to predict the clinical outcome from microarray data, while very few, if any, aim at predicting the vital genes that are important in disease progression. Such models help a basic researcher narrow down and concentrate on a promising set of genes, which leads to the discovery of gene-based therapies. In this article, as a first objective, we use the less widely applied Singular Value Decomposition (SVD) technique to build a microarray data analysis tool that works with gene expression patterns and the intrinsic structure of the data in an unsupervised manner. We have re-analysed microarray data over the clinical course of septic shock from Cazalis et al. (2014) and have shown that our proposed analysis provides additional information compared to the conventional method. As a second objective, we developed a novel mathematical model that predicts a set of vital genes in the disease progression; it works by generating samples in the continuum between health and disease, using a simple normal-distribution-based random number generator. We also verify that most of the predicted genes are indeed related to septic shock. Copyright © 2017 Elsevier Inc. All rights reserved.
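
    A bare-bones version of the unsupervised SVD analysis described above is sketched here: center each gene across samples, decompose, and read off the dominant expression modes ("eigengenes") and their variance contributions. The genes-by-samples layout and the number of retained modes are assumptions for illustration; the sketch does not reproduce the authors' tool or their sample-generation model.

        import numpy as np

        def svd_expression_modes(E, n_modes=3):
            # E: genes x samples expression matrix (hypothetical layout)
            Ec = E - E.mean(axis=1, keepdims=True)        # center each gene across samples
            U, s, Vt = np.linalg.svd(Ec, full_matrices=False)
            var_explained = s ** 2 / np.sum(s ** 2)       # fraction of variance per mode
            eigengenes = Vt[:n_modes]                     # expression patterns across samples
            gene_loadings = U[:, :n_modes]                # how strongly each gene follows a mode
            return eigengenes, gene_loadings, var_explained[:n_modes]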

  8. F4, E6 and G2 exceptional gauge groups in the vacuum domain structure model

    NASA Astrophysics Data System (ADS)

    Shahlaei, Amir; Rafibakhsh, Shahnoosh

    2018-03-01

    Using a vacuum domain structure model, we calculate trivial static potentials in various representations of the F4, E6, and G2 exceptional groups by means of the unit center element. Due to the absence of the nontrivial center elements, the potential of every representation is screened at far distances. However, the linear part is observed at intermediate quark separations and is investigated by the decomposition of the exceptional group to its maximal subgroups. Comparing the group factor of the supergroup with the corresponding one obtained from the nontrivial center elements of the SU(3) subgroup shows that SU(3) is not the direct cause of temporary confinement in any of the exceptional groups. However, the trivial potential obtained from the group decomposition into the SU(3) subgroup is the same as the potential of the supergroup itself. In addition, any regular or singular decomposition into the SU(2) subgroup that produces the Cartan generator with the same elements as h1, in any exceptional group, leads to the linear intermediate potential of the exceptional gauge groups. The other SU(2) decompositions with the Cartan generator different from h1 are still able to describe the linear potential if the number of SU(2) nontrivial center elements that emerge in the decompositions is the same. As a result, it is the center vortices quantized in terms of nontrivial center elements of the SU(2) subgroup that give rise to the intermediate confinement in the static potentials.

  9. Are Bred Vectors The Same As Lyapunov Vectors?

    NASA Astrophysics Data System (ADS)

    Kalnay, E.; Corazza, M.; Cai, M.

    Regional loss of predictability is an indication of the instability of the underlying flow, where small errors in the initial conditions (or imperfections in the model) grow to large amplitudes in finite times. The stability properties of evolving flows have been studied using Lyapunov vectors (e.g., Alligood et al, 1996, Ott, 1993, Kalnay, 2002), singular vectors (e.g., Lorenz, 1965, Farrell, 1988, Molteni and Palmer, 1993), and, more recently, bred vectors (e.g., Szunyogh et al, 1997, Cai et al, 2001). Bred vectors (BVs) are, by construction, closely related to Lyapunov vectors (LVs). In fact, after an infinitely long breeding time, and with the use of infinitesimal amplitudes, bred vectors are identical to leading Lyapunov vectors. In practical applications, however, bred vectors are different from Lyapunov vectors in two important ways: a) bred vectors are never globally orthogonalized and are intrinsically local in space and time, and b) they are finite-amplitude, finite-time vectors. These two differences are very significant in a dynamical system whose size is very large. For example, the atmosphere is large enough to have "room" for several synoptic scale instabilities (e.g., storms) to develop independently in different regions (say, North America and Australia), and it is complex enough to have several different possible types of instabilities (such as barotropic, baroclinic, convective, and even Brownian motion). Bred vectors share some of their properties with leading LVs (Corazza et al, 2001a, 2001b, Toth and Kalnay, 1993, 1997, Cai et al, 2001). For example, 1) Bred vectors are independent of the norm used to define the size of the perturbation. Corazza et al. (2001) showed that bred vectors obtained using a potential enstrophy norm were indistinguishable from bred vectors obtained using a streamfunction squared norm, in contrast with singular vectors. 2) Bred vectors are independent of the length of the rescaling period as long as the perturbations remain approximately linear (for example, for atmospheric models the interval for rescaling could be varied between a single time step and 1 day without qualitatively affecting the characteristics of the bred vectors). However, the finite amplitude, finite time, and lack of orthogonalization of the BVs introduce important differences with LVs: 1) In regions that undergo strong instabilities, the bred vectors tend to be locally dominated by simple, low-dimensional structures. Patil et al (2001) showed that the BV-dim (appendix) gives a good estimate of the number of dominant directions (shapes) of the local k bred vectors. For example, if half of them are aligned in one direction, and half in a different direction, the BV-dim is about two. If the majority of the bred vectors are aligned predominantly in one direction and only a few are aligned in a second direction, then the BV-dim is between 1 and 2. Patil et al. (2001) showed that the regions with low dimensionality cover about 20% of the atmosphere. They also found that these low-dimensionality regions have a very well defined vertical structure, and a typical lifetime of 3-7 days. The low dimensionality identifies regions where the instability of the basic flow has manifested itself in a low number of preferred directions of perturbation growth. 
2) Using a quasi-geostrophic simulation system of data assimilation developed by Morss (1999), Corazza et al (2001a, b) found that bred vectors have structures that closely resemble the background (short forecasts used as first guess) errors, which in turn dominate the local analysis errors. This is especially true in regions of low dimensionality, which is not surprising if these are unstable regions where errors grow in preferred shapes. 3) The number of bred vectors needed to represent the unstable subspace in the QG system is small (about 6-10). This was shown by computing the local BV-dim as a function of the number of independent bred vectors. Convergence in the local dimension starts to occur at about 6 BVs, and is essentially complete when the number of vectors is about 10-15 (Corazza et al, 2001a). This should be contrasted with the results of Snyder and Joly (1998) and Palmer et al (1998), who showed that hundreds of Lyapunov vectors with positive Lyapunov exponents are needed to represent the attractor of the system in quasi-geostrophic models. 4) Since only a few bred vectors are needed, and background errors project strongly on the subspace of bred vectors, Corazza et al (2001b) were able to develop cost-efficient methods to improve the 3D-Var data assimilation by adding to the background error covariance terms proportional to the outer product of the bred vectors, thus representing the "errors of the day". This approach led to a reduction of analysis error variance of about 40% at very low cost. 5) The fact that BVs have finite amplitude provides a natural way to filter out instabilities present in the system that have fast growth, but saturate nonlinearly at such small amplitudes that they are irrelevant for ensemble perturbations. As shown by Lorenz (1996), Lyapunov vectors (and singular vectors) of models including these physical phenomena would be dominated by the fast but small-amplitude instabilities, unless they are explicitly excluded from the linearized models. Bred vectors, on the other hand, through the choice of an appropriate size for the perturbation, provide a natural filter based on nonlinear saturation of fast but irrelevant instabilities. 6) Every bred vector is qualitatively similar to the *leading* LV. LVs beyond the leading LV are obtained by orthogonalization after each time step with respect to the previous LVs' subspace. The orthogonalization requires the introduction of a norm. With an enstrophy norm, the successive LVs have larger and larger horizontal scales, and a choice of a stream function norm would lead to successively smaller scales in the LVs. Beyond the first few LVs, there is little qualitative similarity between the background errors and the LVs. In summary, in a system like the atmosphere, with enough physical space for several independent local instabilities, BVs and LVs share some properties but they also have significant differences. BVs are finite-amplitude, finite-time vectors and, because they are not globally orthogonalized, they have local properties in space. Bred vectors are akin to the leading LV, but bred vectors derived from different arbitrary initial perturbations remain distinct from each other, instead of collapsing into a single leading vector, presumably because the nonlinear terms and physical parameterizations introduce sufficient stochastic forcing to avoid such convergence. 
As a result, there is no need for global orthogonalization, and the number of bred vectors required to describe the natural instabilities in an atmospheric system (from a local point of view) is much smaller than the number of Lyapunov vectors with positive Lyapunov exponents. The BVs are independent of the norm, whereas the LVs beyond the first one do depend on the choice of norm: for example, they become larger in scale with a vorticity norm, and smaller with a stream function norm. These properties of BVs result in significant advantages for data assimilation and ensemble forecasting for the atmosphere. Errors in the analysis have structures very similar to bred vectors, and it is found that they project very strongly on the subspace of a few bred vectors. This is not true for either Lyapunov vectors beyond the leading LVs, or for singular vectors unless they are constructed with a norm based on the analysis error covariance matrix (or a bred vector covariance). The similarity between bred vectors and analysis errors leads to the ability to include "errors of the day" in the background error covariance and a significant improvement of the analysis beyond 3D-Var at a very low cost (Corazza, 2001b). References: Alligood, K. T., T. D. Sauer and J. A. Yorke, 1996: Chaos: An Introduction to Dynamical Systems. Springer-Verlag, New York. Buizza, R., J. Tribbia, F. Molteni and T. Palmer, 1993: Computation of optimal unstable structures for numerical weather prediction models. Tellus, 45A, 388-407. Cai, M., E. Kalnay and Z. Toth, 2001: Potential impact of bred vectors on ensemble forecasting and data assimilation in the Zebiak-Cane model. Submitted to J. of Climate. Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. Yorke, 2001: Use of the breeding technique to determine the structure of the "errors of the day". Submitted to Nonlinear Processes in Geophysics. Corazza, M., E. Kalnay, D. J. Patil, E. Ott, J. Yorke, I. Szunyogh and M. Cai, 2001: Use of the breeding technique in the estimation of the background error covariance matrix for a quasigeostrophic model. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FL, 14-17 January 2002. Farrell, B., 1988: Small error dynamics and the predictability of atmospheric flow. J. Atmos. Sci., 45, 163-172. Kalnay, E., 2002: Atmospheric Modeling, Data Assimilation and Predictability. Chapter 6. Cambridge University Press, UK. In press. Kalnay, E. and Z. Toth, 1994: Removing growing errors in the analysis. Preprints, Tenth Conference on Numerical Weather Prediction, pp. 212-215. Amer. Meteor. Soc., July 18-22, 1994. Lorenz, E. N., 1965: A study of the predictability of a 28-variable atmospheric model. Tellus, 21, 289-307. Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings of the ECMWF Seminar on Predictability, Reading, England, Vol. 1, 1-18. Molteni, F. and T. N. Palmer, 1993: Predictability and finite-time instability of the northern winter circulation. Q. J. Roy. Meteorol. Soc., 119, 269-298. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. Ph.D. Thesis, Massachusetts Institute of Technology, 225 pp. Ott, E., 1993: Chaos in Dynamical Systems. Cambridge University Press, New York. Palmer, T. N., R. Gelaro, J. Barkmeijer and R. Buizza, 1998: Singular vectors, metrics and adaptive observations. J. Atmos. Sci., 55, 633-653. Patil, D. J., B. R. Hunt, E. Kalnay, J. Yorke, and E. 
Ott, 2001: Local low dimensionality of atmospheric dynamics. Phys. Rev. Lett., 86, 5878. Patil, D. J., I. Szunyogh, B. R. Hunt, E. Kalnay, E. Ott, and J. Yorke, 2001: Using large member ensembles to isolate local low dimensionality of atmospheric dynamics. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FL, 14-17 January 2002. Snyder, C. and A. Joly, 1998: Development of perturbations within growing baroclinic waves. Q. J. Roy. Meteor. Soc., 124, pp. 1961. Szunyogh, I., E. Kalnay and Z. Toth, 1997: A comparison of Lyapunov and singular vectors in a low resolution GCM. Tellus, 49A, 200-227. Toth, Z. and E. Kalnay, 1993: Ensemble forecasting at NMC - the generation of perturbations. Bull. Amer. Meteorol. Soc., 74, 2317-2330. Toth, Z. and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297-3319. * Corresponding author address: Eugenia Kalnay, Meteorology Department, University of Maryland, College Park, MD 20742-2425, USA; email: ekalnay@atmos.umd.edu Appendix: BV-dimension. Patil et al. (2001) defined local bred vectors around a point in the 3-dimensional grid of the model by taking the 24 closest horizontal neighbors. If there are k bred vectors available, and N model variables for each grid point, the k local bred vectors form the columns of a 25N x k matrix B. The k x k covariance matrix is C = B^T B. Its eigenvalues are positive, and its eigenvectors v(i) are the singular vectors of the local bred vector subspace. The bred vector dimension (BV-dim) measures the local effective dimension: BV-dim(s(1), s(2), ..., s(k)) = (SUM_i s(i))^2 / SUM_i s(i)^2, where s(i) are the square roots of the eigenvalues of the covariance matrix.
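
    The appendix formula translates directly into code. The short sketch below assumes the local bred vectors are already stacked as the columns of the 25N x k matrix B described above; it only evaluates the BV-dimension, not the breeding cycle itself.

        import numpy as np

        def bv_dimension(B):
            # B: (25*N) x k matrix whose columns are the k local bred vectors
            C = B.T @ B                                  # k x k covariance matrix
            eigvals = np.linalg.eigvalsh(C)              # non-negative for a Gram matrix
            s = np.sqrt(np.clip(eigvals, 0.0, None))     # singular values of the local BV subspace
            return s.sum() ** 2 / np.sum(s ** 2)         # BV-dim = (sum s_i)^2 / sum s_i^2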

  10. Equivalent Dipole Vector Analysis for Detecting Pulmonary Hypertension

    NASA Technical Reports Server (NTRS)

    Harlander, Matevz; Salobir, Barbara; Toplisek, Janez; Schlegel, Todd T.; Starc, Vito

    2010-01-01

    Various 12-lead ECG criteria have been established to detect right ventricular hypertrophy as a marker of pulmonary hypertension (PH). While some criteria offer good specificity, they lack sensitivity because of a low prevalence of positive findings in the PH population. We hypothesized that a three-dimensional equivalent dipole (ED) model could serve as a better detection tool for PH. We enrolled: 1) 17 patients (12 female, 5 male, mean age 57 years, range 19-79 years) with echocardiographically detected PH (systolic pulmonary arterial pressure greater than 35 mmHg) and no significant left ventricular disease; and 2) 19 healthy controls (7 female, 12 male, mean age 44, range 31-53 years) with no known heart disease. In each subject we recorded a 5-minute high-resolution 12-lead conventional ECG and constructed principal signals using singular value decomposition. Assuming a standard thorax dimension of an adult person with homogeneous and isotropic distribution of thorax conductance, we determined moving equivalent dipoles (ED), characterized by the 3D location in the thorax, dipolar strength and spatial orientation, in time intervals of 5 ms. We used the sum of all ED vectors in the second half of the QRS complex to derive the amplitude of the right-sided ED vector (RV), if the orientation of the ED was to the right side of the thorax, and in the first half of the QRS to derive the amplitude of the left-sided vector (LV), if the orientation was leftward. Finally, the RV/LV ratio was determined over an average of 256 complexes. The groups differed in age and gender to some extent. There was a non-significant trend toward higher RV in patients with PH (438 plus or minus 284) than in controls (280 plus or minus 140) (p = 0.066), but the overlap was such that RV alone was not a good predictor of PH. On the other hand, the RV/LV ratio was a better predictor of PH, with 11/17 (64.7%) of PH patients but only 1/19 (5.3%) of control subjects having an RV/LV ratio greater than or equal to 0.70 (p less than 0.001). The use of ED for evaluating PH shows good specificity at a reasonable sensitivity. The results are limited due to the small study groups and differences in age and gender, but further investigations are warranted, including of the ED's diagnostic accuracy for PH versus that of other proposed ECG and VCG criteria.
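
    The "principal signals" step mentioned above (reducing the 12-lead recording to a few orthogonal components by SVD) can be sketched generically as below. The channels-by-samples layout and the number of retained components are assumptions for illustration, and the subsequent equivalent-dipole fit is not reproduced.

        import numpy as np

        def principal_signals(ecg, n_keep=3):
            # ecg: channels x samples array of the 12-lead recording (hypothetical layout)
            X = ecg - ecg.mean(axis=1, keepdims=True)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return s[:n_keep, None] * Vt[:n_keep, :]      # orthogonal principal signal components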

  11. Discriminative Dictionary Learning With Two-Level Low Rank and Group Sparse Decomposition for Image Classification.

    PubMed

    Wen, Zaidao; Hou, Zaidao; Jiao, Licheng

    2017-11-01

    The discriminative dictionary learning (DDL) framework has been widely used in image classification; it aims to learn class-specific feature vectors as well as a representative dictionary from a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low-rank and a group-sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix is further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated out to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.

  12. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
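
    The key numerical idea above (switching to a series approximation of the divided differences of the exponential in a prescribed neighbourhood of a removable singularity) can be illustrated with the simplest such function, the first divided difference phi_1(z) = (e^z - 1)/z. The sketch below is only an illustration of that piecewise definition, not the X-IVAS formulas themselves, and the switching tolerance is an arbitrary choice.

        import numpy as np

        def phi1(z, tol=1e-2):
            # (exp(z) - 1) / z, defined piecewise around the removable singularity at z = 0
            z = np.atleast_1d(np.asarray(z, dtype=float))
            out = np.empty_like(z)
            small = np.abs(z) < tol
            zs = z[small]
            # Truncated Taylor series near z = 0 avoids catastrophic cancellation
            out[small] = 1.0 + zs / 2.0 + zs**2 / 6.0 + zs**3 / 24.0 + zs**4 / 120.0
            zl = z[~small]
            out[~small] = np.expm1(zl) / zl              # direct formula away from the singularity
            return out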

  13. Correlation between solar flare productivity and photospheric vector magnetic fields

    NASA Astrophysics Data System (ADS)

    Cui, Yanmei; Wang, Huaning

    2008-11-01

    Studying the statistical correlation between solar flare productivity and photospheric magnetic fields is important and necessary: it helps to set up a practical flare forecast model based on magnetic properties and to improve the physical understanding of solar flare eruptions. In a previous study [Cui, Y. M., Li, R., Zhang, L. Y., He, Y. L., Wang, H. N. Correlation between solar flare productivity and photospheric magnetic field properties 1. Maximum horizontal gradient, length of neutral line, number of singular points. Sol. Phys. 237, 45-59, 2006; hereafter referred to as 'Paper I'], three measures, the maximum horizontal gradient, the length of the neutral line, and the number of singular points, were computed from 23990 SOHO/MDI longitudinal magnetograms. The statistical relationship between solar flare productivity and these three measures is well fitted with sigmoid functions. In the current work, three further measures, the length of the strong-shear neutral line, the total unsigned current, and the total unsigned current helicity, are computed from 1353 vector magnetograms observed at Huairou Solar Observing Station. The relationship between solar flare productivity and these three measures can also be well fitted with sigmoid functions. These results are expected to be beneficial to future operational flare forecasting models.
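
    The statistical relationship described above (flare productivity as a sigmoid function of a photospheric measure) can be fitted along the lines sketched below. The functional form, the initial guesses, and the synthetic data are hypothetical; the sketch only indicates the kind of curve fit involved, not the authors' procedure or their data.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(x, a, b, c):
            # c / (1 + exp(-a (x - b))): saturating relation between a measure and productivity
            return c / (1.0 + np.exp(-a * (x - b)))

        rng = np.random.default_rng(0)
        measure = np.linspace(0.0, 1.0, 50)               # e.g. a normalized magnetic measure (hypothetical)
        productivity = sigmoid(measure, 8.0, 0.5, 1.0) + 0.05 * rng.standard_normal(50)

        params, _ = curve_fit(sigmoid, measure, productivity, p0=[5.0, 0.5, 1.0])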

  14. Design of 2D time-varying vector fields.

    PubMed

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.

  15. Measurement of the topological charge and index of vortex vector optical fields with a space-variant half-wave plate.

    PubMed

    Liu, Gui-Geng; Wang, Ke; Lee, Yun-Han; Wang, Dan; Li, Ping-Ping; Gou, Fangwang; Li, Yongnan; Tu, Chenghou; Wu, Shin-Tson; Wang, Hui-Tian

    2018-02-15

    Vortex vector optical fields (VVOFs) refer to a kind of vector optical field with an azimuth-variant polarization and a helical phase, simultaneously. Such a VVOF is defined by the topological index of the polarization singularity and the topological charge of the phase vortex. We present a simple method to measure the topological charge and index of VVOFs by using a space-variant half-wave plate (SV-HWP). The geometric phase grating of the SV-HWP diffracts a VVOF into ±1 orders with orthogonally left- and right-handed circular polarizations. By inserting a polarizer behind the SV-HWP, the two circular polarization states project into the linear polarization and then interfere with each other to form the interference pattern, which enables the direct measurement of the topological charge and index of VVOFs.

  16. Visualization of the energy flow for guided forward and backward waves in and around a fluid-loaded elastic cylindrical shell via the Poynting vector field

    NASA Astrophysics Data System (ADS)

    Dean, Cleon E.; Braselton, James P.

    2004-05-01

    Color-coded and vector-arrow grid representations of the Poynting vector field are used to show the energy flow in and around a fluid-loaded elastic cylindrical shell for both forward- and backward-propagating waves. The present work uses a method adapted from a simpler technique due to Kaduchak and Marston [G. Kaduchak and P. L. Marston, "Traveling-wave decomposition of surface displacements associated with scattering by a cylindrical shell: Numerical evaluation displaying guided forward and backward wave properties," J. Acoust. Soc. Am. 98, 3501-3507 (1995)] to isolate unidirectional energy flows.

  17. An Improved Wavefront Control Algorithm for Large Space Telescopes

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Basinger, Scott A.; Redding, David C.

    2008-01-01

    Wavefront sensing and control is required throughout the mission lifecycle of large space telescopes such as James Webb Space Telescope (JWST). When an optic of such a telescope is controlled with both surface-deforming and rigid-body actuators, the sensitivity-matrix obtained from the exit pupil wavefront vector divided by the corresponding actuator command value can sometimes become singular due to difference in actuator types and in actuator command values. In this paper, we propose a simple approach for preventing a sensitivity-matrix from singularity. We also introduce a new "minimum-wavefront and optimal control compensator". It uses an optimal control gain matrix obtained by feeding back the actuator commands along with the measured or estimated wavefront phase information to the estimator, thus eliminating the actuator modes that are not observable in the wavefront sensing process.

  18. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging, and fusion of multiple image segments. In this process, an image is initially divided into a number of blocks, and for each block we compute the phase component of the Fourier transform. The phase component of each block reflects the gray-level variation within the block, but its entries are highly correlated. Hence, a singular value decomposition (SVD) technique is applied to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and some seed points are selected for segmentation. For each seed point we perform a binary segmentation of the complete MRI, so all seed points together yield an equal number of binary images. A parcel based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma detection, tuberculous granuloma identification, and intracranial neoplasm or brain tumor detection. The proposed technique is validated by comparing its results against seven state-of-the-art techniques with six performance evaluation measures.
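
    As a rough illustration of the block-wise SVD step described above, the sketch below computes the Fourier phase of each image block, reduces it to its largest singular value, and thresholds those values to flag candidate seed blocks. The block size and thresholding rule are assumptions for illustration, not the authors' settings.

    ```python
    import numpy as np

    def block_singular_values(image, block=16):
        h, w = image.shape
        svals = np.zeros((h // block, w // block))
        for i in range(h // block):
            for j in range(w // block):
                patch = image[i*block:(i+1)*block, j*block:(j+1)*block]
                phase = np.angle(np.fft.fft2(patch))              # phase component of the block
                svals[i, j] = np.linalg.svd(phase, compute_uv=False)[0]  # largest singular value
        return svals

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))                                   # stand-in image
    s = block_singular_values(img)
    edgy = s > s.mean() + s.std()                                  # assumed thresholding rule for seed selection
    print(edgy.sum(), "candidate seed blocks")
    ```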

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Ning; Shen, Tielong; Kurtz, Richard

    The properties of nano-scale interstitial dislocation loops under the coupling effect of stress and temperature are studied using atomistic simulation methods and experiments. The decomposition of a loop by the emission of smaller loops is identified as one of the major mechanisms to release the localized stress induced by the coupling effect, which is validated by the TEM observations. The classical conservation law of Burgers vector cannot be applied during such decomposition process. The dislocation network is formed from the decomposed loops, which may initiate the irradiation creep much earlier than expected through the mechanism of climb-controlled glide of dislocations.

  20. Solving the multi-frequency electromagnetic inverse source problem by the Fourier method

    NASA Astrophysics Data System (ADS)

    Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi

    2018-07-01

    This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.

  1. FIDEP2 User Manual to Micromechanical Models for Thermoviscoplastic Behavior of Metal Matrix Composites

    DTIC Science & Technology

    1998-09-01

    The available record text is a fragment of the program's Fortran source and comments rather than an abstract. The recoverable content describes the solution step of the micromechanical model: additional terms are added to the equations for interface nodes (for radial loading, the term BMAT(NTOT-1) = SR is added to BMAT); using BMAT and the L-U decomposition of AMAT, the routine LUBKSB(AMAT, NRA, LDA, IPVT, BMAT, XSOL) determines XSOL, the vector of radial and hoop stresses; stresses are then computed from the XSOL solution vector with the boundary conditions S(1,NTOT2) = SR and S(2,1) = S(1,1), followed by the total axial computation (the excerpt is truncated here).
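
    For context, the generic operation the excerpt describes, reusing an LU factorization of the coefficient matrix to back-substitute for the stress vector, can be sketched as follows with stand-in matrices; this is not the FIDEP2 Fortran, and the names amat/bmat/xsol are borrowed only for readability.

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(14)
    amat = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # stand-in for AMAT (coefficient matrix)
    bmat = rng.standard_normal(6)                        # stand-in for BMAT (load terms)

    lu, piv = lu_factor(amat)            # factor once (analogue of the L-U decomposition step)
    xsol = lu_solve((lu, piv), bmat)     # back-substitution (analogue of LUBKSB)
    print("residual:", np.linalg.norm(amat @ xsol - bmat))
    ```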

  2. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, with a velocity vector orientation that satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than that for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  3. A Study on Lebesgue Decomposition of Measures Induced by Stable Processes.

    DTIC Science & Technology

    1987-11-01

    The available record text is a garbled excerpt rather than a complete abstract. The recoverable fragments state that a symmetric alpha-stable (SaS) process will have admissible translates if it is a mixture of Gaussian processes whose reproducing kernel Hilbert spaces (RKHSs) have a common part, and conversely discuss when a translate is singular. The excerpt also lists related technical reports from the same series, including one by J. Nolan on local properties of stable fields (Dec. 86; Ann. Probability, to appear) and one by R. Menich.

  4. Characterising experimental time series using local intrinsic dimension

    NASA Astrophysics Data System (ADS)

    Buzug, Thorsten M.; von Stamm, Jens; Pfister, Gerd

    1995-02-01

    Experimental strange attractors are analysed with the averaged local intrinsic dimension proposed by A. Passamante et al. [Phys. Rev. A 39 (1989) 3640], which is based on singular value decomposition of local trajectory matrices. The results are compared to the Kaplan-Yorke and correlation dimensions. The attractors, reconstructed with Takens' delay time coordinates from scalar velocity time series, are measured in the hydrodynamic Taylor-Couette system. A period doubling route towards chaos obtained from a very short Taylor-Couette cylinder yields a sequence of experimental time series to which the local intrinsic dimension analysis is applied.
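
    A rough sketch of the averaged local intrinsic dimension idea, under our own assumptions: delay-embed a scalar series, form a local trajectory matrix from the nearest neighbours of each reference point, and count the singular values exceeding a fraction of the largest one. The embedding parameters, neighbourhood size, and threshold fraction below are illustrative.

    ```python
    import numpy as np

    def delay_embed(x, dim, tau):
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i*tau:i*tau + n] for i in range(dim)])

    def local_intrinsic_dimension(x, dim=8, tau=5, k=30, frac=0.1, n_ref=200):
        emb = delay_embed(x, dim, tau)
        rng = np.random.default_rng(1)
        refs = rng.choice(len(emb), size=min(n_ref, len(emb)), replace=False)
        dims = []
        for r in refs:
            d = np.linalg.norm(emb - emb[r], axis=1)
            nbrs = emb[np.argsort(d)[1:k+1]] - emb[r]      # local trajectory matrix
            s = np.linalg.svd(nbrs, compute_uv=False)
            dims.append(np.sum(s > frac * s[0]))           # count "significant" directions
        return np.mean(dims)

    t = np.linspace(0, 200, 5000)
    x = np.sin(t) + 0.5 * np.sin(2.2 * t)                  # toy quasi-periodic "velocity" series
    print("averaged local intrinsic dimension:", local_intrinsic_dimension(x))
    ```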

  5. Decomposition of group-velocity-locked-vector-dissipative solitons and formation of the high-order soliton structure by the product of their recombination.

    PubMed

    Wang, Xuan; Li, Lei; Geng, Ying; Wang, Hanxiao; Su, Lei; Zhao, Luming

    2018-02-01

    By using a polarization manipulation and projection system, we numerically decomposed the group-velocity-locked-vector-dissipative solitons (GVLVDSs) from a normal dispersion fiber laser and studied the combination of the projections of the phase-modulated components of the GVLVDS through a polarization beam splitter. Pulses with a structure similar to a high-order vector soliton could be obtained, which could be considered as a pseudo-high-order GVLVDS. It is found that, although GVLVDSs are intrinsically different from group-velocity-locked-vector solitons generated in fiber lasers operated in the anomalous dispersion regime, similar characteristics for the generation of pseudo-high-order GVLVDS are obtained. However, pulse chirp plays a significant role in the generation of pseudo-high-order GVLVDS.

  6. A new analysis of the Fornberg-Whitham equation pertaining to a fractional derivative with Mittag-Leffler-type kernel

    NASA Astrophysics Data System (ADS)

    Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru

    2018-02-01

    The mathematical model of the breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We consider the most recent theory of differentiation involving the non-singular kernel based on the extended Mittag-Leffler-type function to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order. Further, we show the uniqueness of the solution. We obtain the numerical solution of the new arbitrary order model of the non-linear Fornberg-Whitham equation with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling such types of non-linear problems of fractional order.

  7. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. The use of singular-value decomposition representations is not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.

  8. A geometric approach to problems in birational geometry.

    PubMed

    Chi, Chen-Yu; Yau, Shing-Tung

    2008-12-02

    A classical set of birational invariants of a variety are its spaces of pluricanonical forms and some of their canonically defined subspaces. Each of these vector spaces admits a typical metric structure which is also birationally invariant. These vector spaces so metrized will be referred to as the pseudonormed spaces of the original varieties. A fundamental question is the following: Given two mildly singular projective varieties with some of the first variety's pseudonormed spaces being isometric to the corresponding ones of the second variety's, can one construct a birational map between them that induces these isometries? In this work, a positive answer to this question is given for varieties of general type. This can be thought of as a theorem of Torelli type for birational equivalence.

  9. Extending the length and time scales of Gram-Schmidt Lyapunov vector computations

    NASA Astrophysics Data System (ADS)

    Costa, Anthony B.; Green, Jason R.

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 with the particle count. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
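
    The core of the Gram-Schmidt/QR procedure behind such Lyapunov vector computations fits in a few lines; the toy sketch below applies it to the two-dimensional Hénon map (our own choice, far smaller than the Lennard-Jones systems above) so that it runs instantly.

    ```python
    import numpy as np

    def henon_jacobian(x, a=1.4, b=0.3):
        # Jacobian of the map (x, y) -> (1 - a*x^2 + y, b*x)
        return np.array([[-2.0 * a * x[0], 1.0],
                         [b, 0.0]])

    def lyapunov_qr(n_steps=10000, a=1.4, b=0.3):
        x = np.array([0.1, 0.1])
        Q = np.eye(2)
        log_r = np.zeros(2)
        for _ in range(n_steps):
            J = henon_jacobian(x, a, b)
            x = np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])   # advance the trajectory
            Q, R = np.linalg.qr(J @ Q)                           # re-orthonormalize the tangent vectors
            log_r += np.log(np.abs(np.diag(R)))                  # accumulate growth rates
        return log_r / n_steps

    print("Lyapunov exponents:", lyapunov_qr())   # roughly [0.42, -1.62] for the Henon map
    ```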

  10. Automatic computer procedure for generating exact and analytical kinetic energy operators based on the polyspherical approach: General formulation and removal of singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ndong, Mamadou; Lauvergnat, David; Nauts, André

    2013-11-28

    We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al., [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.

  11. Seismic noise attenuation using an online subspace tracking algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
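
    As a point of reference for the comparison made above, a plain truncated-SVD (TSVD) low-rank estimate, the baseline the online tracker is measured against, looks like this on a synthetic rank-1 "signal plus noise" matrix; the rank and noise level are illustrative assumptions.

    ```python
    import numpy as np

    def tsvd_denoise(D, k):
        # Keep only the k leading singular components of the data matrix D
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k, :]

    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 200)
    clean = np.outer(np.sin(2 * np.pi * 10 * t), np.ones(50))   # rank-1 "signal" (200 samples x 50 traces)
    noisy = clean + 0.5 * rng.standard_normal(clean.shape)
    denoised = tsvd_denoise(noisy, k=1)
    print("relative residual:", np.linalg.norm(denoised - clean) / np.linalg.norm(clean))
    ```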

  12. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods used for data acquisition and inversion should be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence to collect transversal relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies in different windows, depending on prior knowledge or customer requirements. We use the entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard the small singular values that cause inversion instability. A hybrid algorithm combining the iterative TSVD and a simultaneous iterative reconstruction technique is implemented to reach global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and other related fields in the future.
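
    A much-simplified sketch of the TSVD step used to stabilize this kind of T2 inversion is given below: singular values of the exponential kernel below a cutoff are discarded before forming the regularized solution. The echo times, T2 grid, toy spectrum, and truncation rule are our own assumptions, not the entropy-based criterion of the paper.

    ```python
    import numpy as np

    echo_times = np.linspace(2e-4, 0.4, 400)               # assumed (variable) echo times, s
    t2_grid = np.logspace(-4, 0, 64)                        # candidate T2 values, s
    K = np.exp(-echo_times[:, None] / t2_grid[None, :])     # exponential kernel (ill-conditioned)

    true_f = np.exp(-0.5 * ((np.log10(t2_grid) + 1.5) / 0.2) ** 2)     # toy T2 spectrum
    data = K @ true_f + 1e-3 * np.random.default_rng(3).standard_normal(len(echo_times))

    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    keep = s > 1e-3 * s[0]                                  # illustrative truncation rule
    f_tsvd = Vt[keep].T @ ((U[:, keep].T @ data) / s[keep]) # regularized pseudo-inverse solution
    print("kept", int(keep.sum()), "of", len(s), "singular values")
    ```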

  13. Improved control of the betatron coupling in the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Persson, T.; Tomás, R.

    2014-05-01

    The control of the betatron coupling is of importance for safe beam operation in the LHC. In this article we show recent advancements in methods and algorithms to measure and correct coupling. The benefit of using a more precise formula relating the resonance driving term f1001 to the ΔQmin is presented. The quality of the coupling measurements is increased by about a factor of 3 by selecting beam position monitor (BPM) pairs with phase advances close to π/2 and through data cleaning using singular value decomposition with an optimal number of singular values. These improvements benefit the automatic coupling correction based on injection oscillations that is presented in the article. Furthermore, a proposed coupling feedback for the LHC is presented. The system will rely on the measurements from BPMs equipped with a new type of high-resolution electronics, diode orbit and oscillation, which will be operational when the LHC restarts in 2015. The feedback will combine the coupling measurements from the available BPMs in order to calculate the best correction.

  14. Modal analysis of 2-D sedimentary basin from frequency domain decomposition of ambient vibration array recordings

    NASA Astrophysics Data System (ADS)

    Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat

    2015-01-01

    Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. This method is advantageous to retrieve not only the resonance frequencies of the investigated structure, but also the corresponding modal shapes without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
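
    The FDD recipe itself, a cross-power spectral density matrix followed by a singular value decomposition at each frequency, can be sketched on synthetic two-channel data as follows; the 12 Hz "mode", noise level, and spectral estimation parameters are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import csd

    fs, n = 200.0, 20000
    rng = np.random.default_rng(4)
    mode = np.sin(2 * np.pi * 12.0 * np.arange(n) / fs)        # synthetic resonant motion at 12 Hz
    recs = np.vstack([1.0 * mode, 0.6 * mode]) + 0.5 * rng.standard_normal((2, n))

    # Cross-power spectral density matrix S(f) from the simultaneous recordings
    freqs, S = None, None
    for i in range(2):
        for j in range(2):
            f, Pij = csd(recs[i], recs[j], fs=fs, nperseg=1024)
            if S is None:
                freqs, S = f, np.zeros((2, 2, len(f)), dtype=complex)
            S[i, j] = Pij

    # The first singular value peaks at the resonance; the corresponding
    # left singular vector is the (unscaled) mode shape.
    s1 = np.array([np.linalg.svd(S[:, :, k], compute_uv=False)[0] for k in range(len(freqs))])
    k_peak = int(np.argmax(s1))
    U, _, _ = np.linalg.svd(S[:, :, k_peak])
    print(f"resonance ~ {freqs[k_peak]:.1f} Hz, mode shape amplitudes: {np.abs(U[:, 0])}")
    ```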

  15. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm, is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.

  16. Evaluation of glioblastomas and lymphomas with whole-brain CT perfusion: Comparison between a delay-invariant singular-value decomposition algorithm and a Patlak plot.

    PubMed

    Hiwatashi, Akio; Togao, Osamu; Yamashita, Koji; Kikuchi, Kazufumi; Yoshimoto, Koji; Mizoguchi, Masahiro; Suzuki, Satoshi O; Yoshiura, Takashi; Honda, Hiroshi

    2016-07-01

    Correction of contrast leakage is recommended in the perfusion analysis of enhancing lesions. The purpose of this study was to assess the diagnostic performance of computed tomography perfusion (CTP) with a delay-invariant singular-value decomposition algorithm (SVD+) and a Patlak plot in differentiating glioblastomas from lymphomas. This prospective study included 17 adult patients (12 men and 5 women) with pathologically proven glioblastomas (n=10) and lymphomas (n=7). CTP data were analyzed using SVD+ and a Patlak plot. The relative tumor blood volume and flow compared to contralateral normal-appearing gray matter (rCBV and rCBF derived from SVD+, and rBV and rFlow derived from the Patlak plot) were used to differentiate between glioblastomas and lymphomas. The Mann-Whitney U test and receiver operating characteristic (ROC) analyses were used for statistical analysis. Glioblastomas showed significantly higher rFlow (3.05±0.49, mean±standard deviation) than lymphomas (1.56±0.53; P<0.05). There were no statistically significant differences between glioblastomas and lymphomas in rBV (2.52±1.57 vs. 1.03±0.51; P>0.05), rCBF (1.38±0.41 vs. 1.29±0.47; P>0.05), or rCBV (1.78±0.47 vs. 1.87±0.66; P>0.05). ROC analysis showed the best diagnostic performance with rFlow (Az=0.871), followed by rBV (Az=0.771), rCBF (Az=0.614), and rCBV (Az=0.529). CTP analysis with a Patlak plot was helpful in differentiating between glioblastomas and lymphomas, but CTP analysis with SVD+ was not.
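
    For readers unfamiliar with delay-insensitive SVD deconvolution, the toy sketch below embeds a synthetic arterial input function in a block-circulant matrix, truncates its small singular values, and recovers the tissue residue function; all curves, units, and the truncation threshold are made-up illustrations rather than the clinical SVD+ implementation.

    ```python
    import numpy as np

    dt, n = 1.0, 60
    t = np.arange(n) * dt
    aif = (t / 4.0) ** 3 * np.exp(-t / 1.5)                    # toy arterial input function
    residue = np.exp(-t / 8.0)                                 # toy tissue residue function
    tissue = dt * np.convolve(aif, residue)[:n]                # simulated tissue curve (flow = 1)

    m = 2 * n                                                  # zero-pad so circular = linear convolution
    aif_p = np.concatenate([aif, np.zeros(m - n)])
    A = np.array([[dt * aif_p[(i - j) % m] for j in range(m)] for i in range(m)])  # block-circulant AIF matrix
    U, s, Vt = np.linalg.svd(A)
    keep = s > 0.1 * s[0]                                      # illustrative truncation threshold
    tissue_p = np.concatenate([tissue, np.zeros(m - n)])
    r_est = Vt[keep].T @ ((U[:, keep].T @ tissue_p) / s[keep]) # flow-scaled residue estimate
    print("estimated flow-scaled residue peak:", r_est.max())
    ```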

  17. A Partial Least-Squares Analysis of Health-Related Quality-of-Life Outcomes After Aneurysmal Subarachnoid Hemorrhage.

    PubMed

    Young, Julia M; Morgan, Benjamin R; Mišić, Bratislav; Schweizer, Tom A; Ibrahim, George M; Macdonald, R Loch

    2015-12-01

    Individuals who have aneurysmal subarachnoid hemorrhages (SAHs) experience decreased health-related qualities of life (HRQoLs) that persist after the primary insult. To identify clinical variables that concurrently associate with HRQoL outcomes by using a partial least-squares approach, which has the distinct advantage of explaining multidimensional variance where predictor variables may be highly collinear. Data collected from the CONSCIOUS-1 trial were used to extract 29 clinical variables including SAH presentation, hospital procedures, and demographic information, in addition to 5 HRQoL outcome variables, for 256 individuals. A partial least-squares analysis was performed by calculating a heterogeneous correlation matrix and applying singular value decomposition to determine components that best represent the correlations between the 2 sets of variables. Bootstrapping was used to estimate statistical significance. The first 2 components, accounting for 81.6% and 7.8% of the total variance, revealed significant associations between clinical predictors and HRQoL outcomes. The first component identified associations of disability in self-care with longer durations of critical care stay, invasive intracranial monitoring, ventricular drain time, poorer clinical grade on presentation, greater amounts of cerebral spinal fluid drainage, and a history of hypertension. The second component identified associations of disability due to pain and discomfort as well as anxiety and depression with greater body mass index, abnormal heart rate, longer durations of deep sedation and critical care, and higher World Federation of Neurosurgical Societies and Hijdra scores. By applying a data-driven, multivariate approach, we identified robust associations between SAH clinical presentations and HRQoL outcomes. Abbreviations: EQ-VAS, EuroQoL visual analog scale; HRQoL, health-related quality of life; ICU, intensive care unit; IVH, intraventricular hemorrhage; PLS, partial least squares; SAH, subarachnoid hemorrhage; SVD, singular value decomposition; WFNS, World Federation of Neurosurgical Societies.
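
    A compact sketch of the PLS step described above, using random stand-in data instead of the trial variables: both blocks are z-scored, their cross-correlation matrix is formed, and the SVD of that matrix yields components whose squared singular values give the variance explained. The permutation and bootstrap significance testing used in the study is omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_subj, n_clin, n_hrqol = 256, 29, 5
    X = rng.standard_normal((n_subj, n_clin))                        # stand-in clinical variables
    Y = X[:, :n_hrqol] * 0.5 + rng.standard_normal((n_subj, n_hrqol))  # correlated stand-in HRQoL outcomes

    Xz = (X - X.mean(0)) / X.std(0)
    Yz = (Y - Y.mean(0)) / Y.std(0)
    R = Yz.T @ Xz / n_subj                                           # HRQoL-by-clinical correlation matrix
    U, s, Vt = np.linalg.svd(R, full_matrices=False)                 # components linking the two blocks
    explained = s**2 / np.sum(s**2)
    print("variance explained by first two components:", explained[:2])
    ```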

  18. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM.
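
    The idea of SVD-derived hidden nodes can be sketched as follows: hidden-layer weights are taken from the right singular vectors of a random data subset rather than drawn at random, and the output weights are then fit by regularized least squares. The subset size, node count, sigmoid activation, and ridge parameter are illustrative assumptions, not the FSVD-H-ELM implementation details.

    ```python
    import numpy as np

    def svd_hidden_elm(X, y, n_hidden=20, subset=300, lam=1e-2, seed=7):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=min(subset, len(X)), replace=False)
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
        W = Vt[:n_hidden].T                      # SVD-based hidden node directions
        H = 1.0 / (1.0 + np.exp(-X @ W))         # sigmoid hidden layer
        beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)  # ridge output weights
        return W, beta

    rng = np.random.default_rng(8)
    X = rng.standard_normal((1000, 50))
    X[:, 0] *= 3.0                               # make the informative direction dominant
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    W, beta = svd_hidden_elm(X, y)
    pred = (1.0 / (1.0 + np.exp(-X @ W)) @ beta > 0.5)
    print("training accuracy:", (pred == y).mean())
    ```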

  19. Topology of three-dimensional separated flows

    NASA Technical Reports Server (NTRS)

    Tobak, M.; Peake, D. J.

    1981-01-01

    Based on the hypothesis that patterns of skin-friction lines and external streamlines reflect the properties of continuous vector fields, topology rules define a small number of singular points (nodes, saddle points, and foci) that characterize the patterns on the surface and on particular projections of the flow (e.g., the crossflow plane). The restricted number of singular points and the rules that they obey are considered as an organizing principle whose finite number of elements can be combined in various ways to connect together the properties common to all steady three dimensional viscous flows. Introduction of a distinction between local and global properties of the flow resolves an ambiguity in the proper definition of a three dimensional separated flow. Adoption of the notions of topological structure, structural stability, and bifurcation provides a framework to describe how three dimensional separated flows originate and succeed each other as the relevant parameters of the problem are varied.

  20. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.

  1. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space. PMID:29143772

  2. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case. 32 refs.

  3. Quadrature demultiplexing using a degenerate vector parametric amplifier.

    PubMed

    Lorences-Riesgo, Abel; Liu, Lan; Olsson, Samuel L I; Malik, Rohit; Kumpera, Aleš; Lundström, Carl; Radic, Stojan; Karlsson, Magnus; Andrekson, Peter A

    2014-12-01

    We report on quadrature demultiplexing of a quadrature phase-shift keying (QPSK) signal into two cross-polarized binary phase-shift keying (BPSK) signals with negligible penalty at bit-error rate (BER) equal to 10⁻⁹. The all-optical quadrature demultiplexing is achieved using a degenerate vector parametric amplifier operating in phase-insensitive mode. We also propose and demonstrate the use of a novel and simple phase-locked loop (PLL) scheme based on detecting the envelope of one of the signals after demultiplexing in order to achieve stable quadrature decomposition.

  4. Research on the application of a decoupling algorithm for structure analysis

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1980-01-01

    The mathematical theory for decoupling mth-order matrix differential equations is presented. It is shown that the decoupling procedure can be developed from the algebraic theory of matrix polynomials. The role of eigenprojectors and latent projectors in the decoupling process is discussed and the mathematical relationships between eigenvalues, eigenvectors, latent roots, and latent vectors are developed. It is shown that the eigenvectors of the companion form of a matrix contain the latent vectors as a subset. The spectral decomposition of a matrix and the application to differential equations is given.

  5. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.

  6. Effect of bait decomposition on the attractiveness to species of Diptera of veterinary and forensic importance in a rainforest fragment in Brazil.

    PubMed

    Oliveira, Diego L; Soares, Thiago F; Vasconcelos, Simão D

    2016-01-01

    Insects associated with carrion can have parasitological importance as vectors of several pathogens and causal agents of myiasis to men and to domestic and wild animals. We tested the attractiveness of animal baits (chicken liver) at different stages of decomposition to necrophagous species of Diptera (Calliphoridae, Fanniidae, Muscidae, Phoridae and Sarcophagidae) in a rainforest fragment in Brazil. Five types of bait were used: fresh and decomposed at room temperature (26 °C) for 24, 48, 72 and 96 h. A positive correlation was detected between the time of decomposition and the abundance of Calliphoridae and Muscidae, whilst the abundance of adults of Phoridae decreased with the time of decomposition. Ten species of calliphorids were registered, of which Chrysomya albiceps, Chrysomya megacephala and Chloroprocta idioidea showed a positive significant correlation between abundance and decomposition. Specimens of Sarcophagidae and Fanniidae did not discriminate between fresh and highly decomposed baits. A strong female bias was registered for all species of Calliphoridae irrespective of the type of bait. The results reinforce the feasibility of using animal tissues as attractants to a wide diversity of dipterans of medical, parasitological and forensic importance in short-term surveys, especially using baits at intermediate stages of decomposition.

  7. Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Y.

    2015-12-01

    A dynamical downscaling method for probabilistic regional scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular vector decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes equal to their standard deviations, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis dataset. This data handling can be regarded as an advanced version of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations realizes approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for a mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, for which the number of future-climate RCM simulations is five. On the other hand, local-scale rainfall needed four-mode simulations, for which the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
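
    A toy sketch of the incremental handling described above, under our own assumptions about shapes and scaling: the inter-model spread of the climatological increments is decomposed with SVD, and perturbations of one standard deviation along the leading modes are added to and subtracted from the ensemble-mean increment to form the modal boundary-condition members.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    n_gcm, n_grid = 12, 500
    increments = rng.standard_normal((n_gcm, n_grid)) + 2.0    # stand-in future-minus-present fields

    mean_inc = increments.mean(axis=0)
    anoms = increments - mean_inc
    U, s, Vt = np.linalg.svd(anoms, full_matrices=False)
    n_modes = 2                                                # two leading modes (illustrative)
    std_amp = s[:n_modes] / np.sqrt(n_gcm - 1)                 # one-sigma amplitude of each mode
    modal_increments = [mean_inc + sign * std_amp[m] * Vt[m]
                        for m in range(n_modes) for sign in (+1.0, -1.0)]
    print(len(modal_increments) + 1, "boundary-condition members (ensemble mean plus +/- one-sigma modes)")
    ```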

  8. Time-Frequency Analysis And Pattern Recognition Using Singular Value Decomposition Of The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Lovell, Brian; White, Langford

    1988-01-01

    Time-Frequency analysis based on the Wigner-Ville Distribution (WVD) is shown to be optimal for a class of signals where the variation of instantaneous frequency is the dominant characteristic. Spectral resolution and instantaneous frequency tracking is substantially improved by using a Modified WVD (MWVD) based on an Autoregressive spectral estimator. Enhanced signal-to-noise ratio may be achieved by using 2D windowing in the Time-Frequency domain. The WVD provides a tool for deriving descriptors of signals which highlight their FM characteristics. These descriptors may be used for pattern recognition and data clustering using the methods presented in this paper.

  9. Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.

    PubMed

    Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu

    2018-03-10

    This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.

  10. Noise Equalization for Ultrafast Plane Wave Microvessel Imaging.

    PubMed

    Song, Pengfei; Manduca, Armando; Trzasko, Joshua D; Chen, Shigao

    2017-11-01

    Ultrafast plane wave microvessel imaging significantly improves ultrasound Doppler sensitivity by increasing the number of Doppler ensembles that can be collected within a short period of time. The rich spatiotemporal plane wave data also enable more robust clutter filtering based on singular value decomposition. However, due to the lack of transmit focusing, plane wave microvessel imaging is very susceptible to noise. This paper was designed to: 1) study the relationship between ultrasound system noise (primarily time gain compensation induced) and microvessel blood flow signal and 2) propose an adaptive and computationally cost-effective noise equalization method that is independent of hardware or software imaging settings to improve microvessel image quality.
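
    The SVD clutter filter mentioned above can be sketched on synthetic data as follows: the plane wave frames are reshaped into a Casorati (pixels by slow-time) matrix and the first singular components, dominated by slowly varying, high-energy tissue, are removed before forming a power Doppler image. The cutoff and the toy signal model are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    nz, nx, nt = 40, 40, 200
    t = np.arange(nt)
    tissue = 5.0 * np.cos(2 * np.pi * 0.005 * t)               # slow, strong tissue motion
    blood = 0.2 * rng.standard_normal((nz, nx, nt))            # fast, weak blood-like signal
    frames = tissue[None, None, :] * np.ones((nz, nx, 1)) + blood

    casorati = frames.reshape(nz * nx, nt)                     # pixels x slow-time matrix
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    cutoff = 2                                                 # discard the clutter subspace
    blood_est = (U[:, cutoff:] * s[cutoff:]) @ Vt[cutoff:, :]
    power_doppler = np.mean(blood_est.reshape(nz, nx, nt) ** 2, axis=-1)
    print("mean power Doppler value:", power_doppler.mean())
    ```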

  11. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states.

    PubMed

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
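
    In the spirit of the randomized factorizations discussed above (Halko-style range finding rather than the authors' tensor network code), the sketch below projects a matrix onto a random subspace, computes a small SVD there, and compares the result with the deterministic singular values; the rank and oversampling are illustrative.

    ```python
    import numpy as np

    def randomized_svd(A, rank, oversample=10, seed=9):
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(A @ Omega)                 # approximate range of A
        Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return Q @ Ub[:, :rank], s[:rank], Vt[:rank]

    rng = np.random.default_rng(10)
    A = rng.standard_normal((1000, 40)) @ rng.standard_normal((40, 1000))  # exactly low-rank matrix
    U, s, Vt = randomized_svd(A, rank=40)
    s_exact = np.linalg.svd(A, compute_uv=False)[:40]
    print("max relative error in singular values:", np.max(np.abs(s - s_exact) / s_exact))
    ```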

  12. Method for discovering relationships in data by dynamic quantum clustering

    DOEpatents

    Weinstein, Marvin; Horn, David

    2017-05-09

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  13. Method for discovering relationships in data by dynamic quantum clustering

    DOEpatents

    Weinstein, Marvin; Horn, David

    2014-10-28

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  14. Detection and identification of concealed weapons using matrix pencil

    NASA Astrophysics Data System (ADS)

    Adve, Raviraj S.; Thayaparan, Thayananthan

    2011-06-01

    The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing the use of an effective approach to obtain the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, hence providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
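
    A rough Matrix Pencil sketch under our own simplifications is given below: a Hankel matrix is formed from a noisy two-tone signal, its dominant right singular vectors are kept, and the eigenvalues of the shifted pencil give the resonant frequencies. The pencil length and the model order (4, i.e. two damped cosines as conjugate pole pairs) are illustrative choices; in practice the order is read off the singular value spectrum.

    ```python
    import numpy as np

    fs, n = 1000.0, 400
    t = np.arange(n) / fs
    x = (np.exp(-10 * t) * np.cos(2 * np.pi * 60 * t)
         + 0.8 * np.exp(-5 * t) * np.cos(2 * np.pi * 140 * t))
    x += 0.01 * np.random.default_rng(11).standard_normal(n)

    L = n // 3                                             # pencil parameter
    Y = np.array([x[i:i + L + 1] for i in range(n - L)])   # Hankel data matrix
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    M = 4                                                  # model order: two conjugate pole pairs
    V1, V2 = Vt[:M, :-1], Vt[:M, 1:]
    z = np.linalg.eigvals(V2 @ np.linalg.pinv(V1))         # poles z_i = exp(s_i / fs)
    freqs = np.abs(np.angle(z)) * fs / (2 * np.pi)
    print("estimated frequencies (Hz):", np.sort(freqs))   # approximately [60, 60, 140, 140]
    ```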

  15. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

    Deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.

  16. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    NASA Astrophysics Data System (ADS)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.

  17. An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Pappa, R. S.

    1985-01-01

    A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
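
    The ERA recipe summarized above, Hankel matrices of Markov parameters, an SVD, and a reduced system matrix whose eigenvalues carry frequency and damping, is sketched below on a toy single-mode impulse response; the model order and Hankel dimensions are illustrative assumptions.

    ```python
    import numpy as np

    dt, n = 0.01, 400
    t = np.arange(n) * dt
    y = np.exp(-0.5 * t) * np.sin(2 * np.pi * 5.0 * t)        # toy impulse response samples

    r, c = 150, 150
    H0 = np.array([[y[i + j] for j in range(c)] for i in range(r)])       # Hankel matrix
    H1 = np.array([[y[i + j + 1] for j in range(c)] for i in range(r)])   # shifted Hankel matrix
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    order = 2                                                  # single mode -> order 2
    Ur, sr, Vr = U[:, :order], s[:order], Vt[:order]
    A = np.diag(sr**-0.5) @ Ur.T @ H1 @ Vr.T @ np.diag(sr**-0.5)          # reduced system matrix
    lam = np.linalg.eigvals(A)
    s_cont = np.log(lam) / dt                                  # continuous-time eigenvalues
    print("frequency (Hz):", np.abs(s_cont.imag[0]) / (2 * np.pi),
          " damping rate:", -s_cont.real[0])
    ```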

  18. Compressed Continuous Computation v. 12/20/2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorodetsky, Alex

    2017-02-17

    A library for performing numerical computation with low-rank functions. The (C3) library enables performing continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, integrating multidimensional functions.

  19. Optimal classification for the diagnosis of duchenne muscular dystrophy images using support vector machines.

    PubMed

    Zhang, Ming-Huan; Ma, Jun-Shan; Shen, Ying; Chen, Ying

    2016-09-01

    This study aimed to investigate the optimal support vector machine (SVM)-based classifier of Duchenne muscular dystrophy (DMD) magnetic resonance imaging (MRI) images. T1-weighted (T1W) and T2-weighted (T2W) images of 15 boys with DMD and 15 normal controls were obtained. Textural features of the images were extracted and wavelet decomposed, and then principal features were selected. Scale transform was then performed for the MRI images. Afterward, SVM-based classifiers of the MRI images were analyzed based on the radial basis function and decomposition levels. The cost parameter C and kernel parameter [Formula: see text] were used for classification. Then, the optimal SVM-based classifier, expressed as [Formula: see text], was identified by performance evaluation (sensitivity, specificity and accuracy). Eight of 12 textural features were selected as principal features (eigenvalues [Formula: see text]). The 16 SVM-based classifiers were obtained using combinations of (C, [Formula: see text]), and those with lower C and [Formula: see text] values showed higher performance, especially the classifier [Formula: see text]. The SVM-based classifiers of T1W images showed higher performance than those of T2W images at the same decomposition level. The T1W images in the classifier [Formula: see text] at level 2 decomposition showed the highest performance of all, with overall sensitivity, specificity, and accuracy reaching 96.9, 97.3, and 97.1%, respectively, demonstrating that it is the optimal classification for the diagnosis of DMD.

  20. Quantum Linear System Algorithm for Dense Matrices.

    PubMed

    Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam

    2018-02-02

    Solving linear systems of equations is a frequently encountered problem in machine learning and optimization. Given a matrix A and a vector b, the task is to find the vector x such that Ax = b. We describe a quantum algorithm that achieves a sparsity-independent runtime scaling of O(κ² √n polylog(n)/ε) for an n×n dimensional A with bounded spectral norm, where κ denotes the condition number of A, and ε is the desired precision parameter. This amounts to a polynomial improvement over known quantum linear system algorithms when applied to dense matrices, and poses a new state of the art for solving dense linear systems on a quantum computer. Furthermore, an exponential improvement is achievable if the rank of A is polylogarithmic in the matrix dimension. Our algorithm is built upon a singular value estimation subroutine, which makes use of a memory architecture that allows for efficient preparation of quantum states that correspond to the rows of A and the vector of Euclidean norms of the rows of A.
