Sample records for singular value decomposition

  1. Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination

    NASA Technical Reports Server (NTRS)

    Ryne, Mark S.; Wang, Tseng-Chan

    1991-01-01

    An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.

  2. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
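
The constraint-elimination step described above can be sketched briefly: the right singular vectors of the constraint matrix associated with (near-)zero singular values span its null space, so coordinates expressed in that basis satisfy the constraints automatically. A minimal sketch, with a hypothetical constraint matrix that is not taken from the paper:

```python
import numpy as np

# Hypothetical constraint matrix C enforcing C @ x = 0 (2 constraints, 4 coordinates).
C = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 1.0]])

# SVD of the constraint matrix; the right singular vectors with ~zero
# singular values span the null space and form the transformation matrix.
U, s, Vt = np.linalg.svd(C)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T          # columns = null-space basis, shape (4, 2)

# Any x = N @ q automatically satisfies the constraint equations:
q = np.array([2.0, -3.0])
x = N @ q
print(np.allclose(C @ x, 0.0))   # True
```

Here x = N q parameterizes exactly the motions permitted by the constraints, which is the coordinate-transformation role the abstract describes.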

  3. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was performed using vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. Truncating the singular values in the inversion process could improve the resulting model.

  4. How long the singular value decomposed entropy predicts the stock market? - Evidence from the Dow Jones Industrial Average Index

    NASA Astrophysics Data System (ADS)

    Gu, Rongbao; Shao, Yanmin

    2016-07-01

    In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed, and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index over periods of less than one month, but not over longer periods. This establishes how long the singular value decomposition entropy predicts the stock market, extending the result obtained in Caraiani (2014). The result also reveals an essential characteristic of the stock market as a chaotic dynamic system.

  5. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The effects of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
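
The diagnostic described above, counting significant singular values of the weight matrix, can be illustrated with a stand-in matrix; the synthetic rank-5 matrix and the relative threshold are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic rank-5 matrix (plus tiny noise) stands in for the weight
# matrix of a tomographic geometry; the threshold below is illustrative.
A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
A += 1e-12 * rng.standard_normal((60, 40))

s = np.linalg.svd(A, compute_uv=False)

# Number of 'significant' singular values relative to the largest one:
significant = int(np.sum(s > 1e-8 * s[0]))
print(significant)   # 5: the effective rank of the linear system
```

A well-conditioned geometry would show many significant singular values; a sharp drop in the spectrum signals an ill-posed inverse problem.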

  6. Object detection with a multistatic array using singular value decomposition

    DOEpatents

    Hallquist, Aaron T.; Chambers, David H.

    2014-07-01

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
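
A minimal sketch of the per-frequency decomposition step described above, with simulated return signals standing in for real radar data (the array size, sample count, and random data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
n_pairs, n_samples = 4, 256   # assumed array size and record length

# Simulated time-domain return signals for each transmitter-receiver pair.
returns = rng.standard_normal((n_pairs, n_pairs, n_samples))

# Convert the return signals from the time domain to the frequency domain.
spectra = np.fft.rfft(returns, axis=-1)

# Singular value decomposition of the response matrix at each frequency.
sv = np.array([np.linalg.svd(spectra[:, :, f], compute_uv=False)
               for f in range(spectra.shape[-1])])
print(sv.shape)   # (129, 4): one set of 4 singular values per frequency bin
```

Detection would then compare each frequency's singular values against the values expected when no subsurface object is present.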

  7. A robust watermarking scheme using lifting wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anuj; Verma, Deval; Verma, Vivek Singh

    2017-01-01

    The present paper proposes a robust image watermarking scheme using lifting wavelet transform (LWT) and singular value decomposition (SVD). Second-level LWT is applied to the host/cover image to decompose it into different subbands. SVD is used to obtain the singular values of the watermark image, and these singular values are then used to update the singular values of the LH2 subband. The algorithm is tested on a number of benchmark images, and it is found that the present algorithm is robust against different geometric and image-processing operations. A comparison of the proposed scheme with other existing schemes shows that the present scheme is better not only in terms of robustness but also in terms of imperceptibility.
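
The core embedding step, updating host singular values with watermark singular values, can be sketched as follows; the wavelet stage is omitted, a random matrix stands in for the LH2 subband, and the embedding strength alpha is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05                      # embedding strength (hypothetical value)

subband = rng.random((32, 32))    # stands in for the LH2 wavelet subband
watermark = rng.random((32, 32))

# Decompose both; update the host singular values with the watermark's.
U, s_host, Vt = np.linalg.svd(subband)
s_wm = np.linalg.svd(watermark, compute_uv=False)
s_marked = s_host + alpha * s_wm

watermarked = (U * s_marked) @ Vt          # rebuild the marked subband

# Extraction (semi-blind): recover the watermark singular values using
# the stored host singular values and alpha.
s_rec = (np.linalg.svd(watermarked, compute_uv=False) - s_host) / alpha
print(np.allclose(s_rec, s_wm))   # True
```

In the full scheme the marked subband would be fed back through the inverse LWT to produce the watermarked image.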

  8. Decomposition of the Multistatic Response Matrix and Target Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chambers, D H

    2008-02-14

    Decomposition of the time-reversal operator for an array, or equivalently the singular value decomposition of the multistatic response matrix, has been used to improve imaging and localization of targets in complicated media. Typically, each singular value is associated with one scatterer, even though it has been shown in several cases that a single scatterer can generate several singular values. In this paper we review the analysis of the time-reversal operator (TRO), or equivalently the multistatic response matrix (MRM), of an array system and a small target. We begin with two-dimensional scattering from a small cylinder, then show the results for a small non-spherical target in three dimensions. We show that the number and magnitudes of the singular values contain information about target composition, shape, and orientation.

  9. Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy

    NASA Astrophysics Data System (ADS)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed railway and heavy-haul transport, locomotive and traction power systems have become the main harmonic source of China's power grid. In response to this, the system's power quality issues require timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of wavelet transform, singular value decomposition and information entropy theory, combining the unique advantages of the three in signal processing: the wavelet transform provides time-frequency localization, singular value decomposition extracts the basic modal characteristics of the data, and information entropy quantifies the feature data. Based on the theory of singular value decomposition, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. The statistical properties of information entropy are then used to analyze the uncertainty of the singular value set, giving a definite measure of the complexity of the original signal. Wavelet singular entropy thus has good application prospects in fault detection, classification and protection. A MATLAB simulation shows that wavelet singular entropy is effective for harmonic analysis of the locomotive and traction power system.

  10. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

    A semi-blind watermarking scheme based on singular value decomposition (SVD) and the homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit grayscale image by inserting an invisible eight-bit grayscale watermark into it. The key step of the scheme is to apply the homomorphic transform to the host image to obtain its reflectance component. The watermark is embedded into the singular values obtained by applying the singular value decomposition to the reflectance component. Peak signal-to-noise ratio (PSNR), normalized correlation coefficient (NCC) and mean structural similarity index measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of watermarked images. Presence of the watermark is confirmed by visual inspection and the high NCC and MSSIM values of extracted watermarks. Robustness of the scheme is verified by high NCC and MSSIM values for attacked watermarked images.

  11. Polar and singular value decomposition of 3×3 magic squares

    NASA Astrophysics Data System (ADS)

    Trenkler, Götz; Schmidt, Karsten; Trenkler, Dietrich

    2013-07-01

    In this note, we find polar as well as singular value decompositions of a 3×3 magic square, i.e. a 3×3 matrix M with real elements where each row, column and diagonal adds up to the magic sum s of the magic square.

  12. A Systolic Architecture for Singular Value Decomposition,

    DTIC Science & Technology

    1983-01-01

    Presented at the 1st International Colloquium on Vector and Parallel Computing in Scientific Applications, Paris, March 1983. Contract N00014-82-K-0703. Cites Gene Golub (private communication) and G. H. Golub and F. T. Luk, "Singular Value Decomposition" [6].

  13. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is further divided into blocks of the same size after shuffling it, and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.

  14. Using Singular Value Decomposition to Investigate Degraded Chinese Character Recognition: Evidence from Eye Movements during Reading

    ERIC Educational Resources Information Center

    Wang, Hsueh-Cheng; Schotter, Elizabeth R.; Angele, Bernhard; Yang, Jinmian; Simovici, Dan; Pomplun, Marc; Rayner, Keith

    2013-01-01

    Previous research indicates that removing initial strokes from Chinese characters makes them harder to read than removing final or internal ones. In the present study, we examined the contribution of important components to character configuration via singular value decomposition. The results indicated that when the least important segments, which…

  15. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete singular value decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  16. Time-varying singular value decomposition for periodic transient identification in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang

    2016-09-01

    For rotating machines, bearing defects are generally manifested as periodic transient impulses in the acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. However, background noise reduces the identification performance for periodic faults in practice. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding-window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we can obtain a time-varying singular value matrix (TSVM). Each column in the TSVM holds the singular values of the corresponding sliding window, and each row represents an intrinsic structure of the raw signal, namely a time-singular-value sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance the fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to the diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
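
A rough sketch of the sliding-window construction described above, with an assumed window-matrix shape and a synthetic sinusoid in place of a real bearing measurement:

```python
import numpy as np

def time_varying_singular_values(x, win, step=1):
    """Slide a window over signal x; each window is reshaped into a small
    matrix whose singular values form one column of the TSVM."""
    rows = 4                              # hypothetical window-matrix height
    cols = win // rows
    columns = []
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win][:rows * cols].reshape(rows, cols)
        columns.append(np.linalg.svd(seg, compute_uv=False))
    return np.array(columns).T            # rows = time-singular-value sequences

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 10 * t)            # stand-in for a bearing signal
tsvm = time_varying_singular_values(x, win=32, step=4)
print(tsvm.shape)                          # (4, 121): 4 TSVSs over 121 windows
```

Each row of the returned matrix is one TSVS, which the paper analyzes for periodic fault frequencies.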

  17. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.

  18. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671

  19. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.

  20. Singular-value decomposition of a tomosynthesis system

    PubMed Central

    Burvall, Anna; Barrett, Harrison H.; Myers, Kyle J.; Dainty, Christopher

    2010-01-01

    Tomosynthesis is an emerging technique with potential to replace mammography, since it gives 3D information at a relatively small increase in dose and cost. We present an analytical singular-value decomposition of a tomosynthesis system, which provides the measurement component of any given object. The method is demonstrated on an example object. The measurement component can be used as a reconstruction of the object, and can also be utilized in future observer studies of tomosynthesis image quality. PMID:20940966

  1. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.

  2. Application of reiteration of Hankel singular value decomposition in quality control

    NASA Astrophysics Data System (ADS)

    Staniszewski, Michał; Skorupa, Agnieszka; Boguszewicz, Łukasz; Michalczuk, Agnieszka; Wereszczyński, Kamil; Wicher, Magdalena; Konopka, Marek; Sokół, Maria; Polański, Andrzej

    2017-07-01

    Medical centres are obliged to store past medical records, including the results of quality assurance (QA) tests of the medical equipment, which is especially useful in checking the reproducibility of medical devices and procedures. Analysis of multivariate time series is an important part of quality control of NMR data. In this work we propose an anomaly detection tool based on the Reiteration of Hankel Singular Value Decomposition method. The presented method was compared with external software, and the authors obtained comparable results.

  3. Experimental Modal Analysis and Dynamic Component Synthesis. Volume 3. Modal Parameter Estimation

    DTIC Science & Technology

    1987-12-01

    ... residues as well as poles is achieved. A singular value decomposition method has been used to develop a complex mode indicator function (CMIF) [70], which can be used to help determine the number of poles before the analysis. The CMIF is formed by performing a singular value decomposition of all of the ... servo systems which can include both low and high damping modes. ... CMIF can be used to indicate close or repeated eigenvalues before the parameter ...

  4. An Efficient and Robust Singular Value Method for Star Pattern Recognition and Attitude Determination

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Kim, Hye-Young; Junkins, John L.

    2003-01-01

    A new star pattern recognition method is developed using singular value decomposition of a measured unit column vector matrix in a measurement frame and the corresponding cataloged vector matrix in a reference frame. It is shown that singular values and right singular vectors are invariant with respect to coordinate transformation and robust under uncertainty. One advantage of singular value comparison is that a pairing process for individual measured and cataloged stars is not necessary, and the attitude estimation and pattern recognition process are not separated. An associated method for mission catalog design is introduced and simulation results are presented.
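
The claimed invariance of singular values under coordinate transformation is easy to verify numerically; the star directions and the orthogonal transform below are randomly generated stand-ins, not mission data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Columns are unit vectors to hypothetical stars in the sensor frame.
M = rng.standard_normal((3, 6))
M /= np.linalg.norm(M, axis=0)

# A random orthogonal matrix (rotation or reflection) via QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

s_body = np.linalg.svd(M, compute_uv=False)
s_rot = np.linalg.svd(Q @ M, compute_uv=False)   # same stars, rotated frame

print(np.allclose(s_body, s_rot))   # True: singular values are invariant
```

This invariance is what lets the method compare measured and cataloged star sets without first pairing individual stars.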

  5. Excluding Noise from Short Krylov Subspace Approximations to the Truncated Singular Value Decomposition (SVD)

    DTIC Science & Technology

    2017-09-27

    ARL-TR-8161•SEP 2017 US Army Research Laboratory Excluding Noise from Short Krylov Subspace Approximations to the Truncated Singular Value...originator. ARL-TR-8161•SEP 2017 US Army Research Laboratory Excluding Noise from Short Krylov Subspace Approximations to the Truncated Singular Value...unlimited. October 2015–January 2016 US Army Research Laboratory ATTN: RDRL-CIH-C Aberdeen Proving Ground, MD 21005-5066 primary author’s email

  6. Through-wall image enhancement using fuzzy and QR decomposition.

    PubMed

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    A scheme based on QR decomposition and fuzzy logic is proposed for through-wall image enhancement. QR decomposition is less complex than singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to compare the existing and proposed techniques.

  7. Recurrence quantity analysis based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Bian, Songhan; Shang, Pengjian

    2017-05-01

    The recurrence plot (RP) has become a powerful tool in many different sciences over the last three decades. To quantify the complexity and structure of an RP, recurrence quantification analysis (RQA) has been developed based on measures of recurrence density, diagonal lines, vertical lines and horizontal lines. This paper studies the RP based on singular value decomposition, which offers a new perspective on RP analysis. The principal singular value proportion (PSVP) is proposed as a new RQA measure: a larger PSVP indicates higher complexity of a system, while a smaller PSVP reflects a regular and stable system. Given the advantages of this method in detecting the complexity and periodicity of systems, several simulation and real-data experiments are chosen to examine the performance of this new RQA measure.
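
One plausible reading of the PSVP measure, sketched with a simple scalar recurrence plot and a hypothetical distance threshold; the paper's exact construction may differ:

```python
import numpy as np

def psvp(series, threshold):
    """Principal singular value proportion of a recurrence plot (sketch)."""
    d = np.abs(series[:, None] - series[None, :])   # pairwise distance matrix
    rp = (d <= threshold).astype(float)             # binary recurrence plot
    s = np.linalg.svd(rp, compute_uv=False)
    return s[0] / s.sum()                           # share of the top singular value

rng = np.random.default_rng(3)
periodic = np.sin(np.linspace(0, 20 * np.pi, 200))
noise = rng.standard_normal(200)

# PSVP is a single number in (0, 1] summarizing the RP's singular spectrum.
print(psvp(periodic, 0.2), psvp(noise, 0.2))
```

A full RQA implementation would build the RP from delay-embedded state vectors rather than raw scalar samples.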

  8. Classification of subsurface objects using singular values derived from signal frames

    DOEpatents

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.

  9. The predictive power of singular value decomposition entropy for stock market dynamics

    NASA Astrophysics Data System (ADS)

    Caraiani, Petre

    2014-01-01

    We use a correlation-based approach to analyze financial data from the US stock market, both daily and monthly observations from the Dow Jones. We compute the entropy based on the singular value decomposition of the correlation matrix for the components of the Dow Jones Industrial Index. Based on a moving window, we derive time varying measures of entropy for both daily and monthly data. We find that the entropy has a predictive ability with respect to stock market dynamics as indicated by the Granger causality tests.
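
A hedged sketch of the entropy measure described above: the Shannon entropy of the normalized singular values of a return-window correlation matrix, with synthetic 'stocks' standing in for Dow Jones constituents:

```python
import numpy as np

def svd_entropy(returns):
    """Shannon entropy of the normalized singular values of the correlation
    matrix of a return window (a sketch of the measure described above)."""
    corr = np.corrcoef(returns, rowvar=False)
    s = np.linalg.svd(corr, compute_uv=False)
    p = s / s.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(4)
independent = rng.standard_normal((250, 10))        # 10 uncorrelated 'stocks'
common = rng.standard_normal((250, 1))
correlated = 0.9 * common + 0.1 * rng.standard_normal((250, 10))

# Strong co-movement concentrates the spectrum, lowering the entropy.
print(svd_entropy(correlated) < svd_entropy(independent))   # True
```

Computing this over a moving window, as the paper does, yields the time-varying entropy series used in the Granger causality tests.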

  10. A singular value decomposition approach for improved taxonomic classification of biological sequences

    PubMed Central

    2011-01-01

    Background Singular value decomposition (SVD) is a powerful technique for information retrieval; it helps uncover relationships between elements that are not prima facie related. SVD was initially developed to reduce the time needed for information retrieval and analysis of very large data sets in the complex internet environment. Since information retrieval from large-scale genome and proteome data sets has a similar level of complexity, SVD-based methods could also facilitate data analysis in this research area. Results We found that SVD applied to amino acid sequences demonstrates relationships and provides a basis for producing clusters and cladograms, demonstrating evolutionary relatedness of species that correlates well with Linnaean taxonomy. The choice of a reasonable number of singular values is crucial for SVD-based studies. We found that fewer singular values are needed to produce biologically significant clusters when SVD is employed. Subsequently, we developed a method to determine the lowest number of singular values and fewest clusters needed to guarantee biological significance; this system was developed and validated by comparison with Linnaean taxonomic classification. Conclusions By using SVD, we can reduce uncertainty concerning the appropriate rank value necessary to perform accurate information retrieval analyses. In tests, clusters that we developed with SVD perfectly matched what was expected based on Linnaean taxonomy. PMID:22369633

  11. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so data compression techniques are often applied to reduce the storage space consumed by an image. One approach is to apply singular value decomposition (SVD) to the image matrix. In this method, the digital image matrix is refactored by SVD into three matrices. The singular values are then used to reconstruct the image, so that at the end of the process the image is represented with a smaller set of values, reducing the required storage space. The goal is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
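
The rank-k truncation described above, as a minimal sketch; the synthetic low-rank 'image' is an assumption for demonstration rather than real pixel data:

```python
import numpy as np

def compress(image, k):
    """Rank-k SVD approximation: keep only the k largest singular triplets."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]

rng = np.random.default_rng(5)
# A synthetic rank-3 'image' stands in for real pixel data.
img = rng.random((64, 3)) @ rng.random((3, 64))

approx = compress(img, 3)
mse = np.mean((img - approx) ** 2)
print(mse < 1e-20)   # True: an exactly rank-3 image is recovered by rank-3 truncation
```

Storing the truncated factors costs k(m + n + 1) numbers instead of mn, which is where the compression comes from for small k.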

  12. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖ subject to a constraint on ‖Ax − b‖, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds on the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We show how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
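The rank-(k + q) sample-then-truncate step at the heart of TRSVD can be sketched as follows. This is a generic randomized SVD with a Gaussian test matrix; the paper's MTRSVD additionally handles the regularization matrix L and the inner LSQR solves, which are omitted here:

```python
import numpy as np

def trsvd(A, k, q=10, seed=None):
    """Rank-k truncated randomized SVD: sample a rank-(k + q) range basis
    with a Gaussian test matrix, project, then truncate to rank k."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + q))
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for range(A @ Omega)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 150))  # rank 30
U, s, Vt = trsvd(A, k=30, seed=1)
rel_err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

When k + q is at least the numerical rank of A, the sampled range basis captures A almost surely and the truncated factorization is exact to machine precision.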

  13. Non-Cooperative Target Recognition by Means of Singular Value Decomposition Applied to Radar High Resolution Range Profiles †

    PubMed Central

    López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio

    2015-01-01

    Radar high resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and evaluate the performance, simulated range profiles are used as the recognition database, while the test samples comprise data from actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities in shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising. PMID:25551484
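A minimal sketch of subspace modeling and angle-based identification, assuming scipy is available. The profile dimensions and the helper names (`target_subspace`, `classify`) are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import subspace_angles

def target_subspace(profiles, r):
    """Model a target class by the r-dimensional principal subspace of its
    range-profile matrix (columns = training profiles)."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :r]

def classify(profile, subspaces):
    """Assign the test profile to the subspace with which it makes the
    smallest principal angle."""
    angles = [subspace_angles(S, profile[:, None])[0] for S in subspaces]
    return int(np.argmin(angles))

# two synthetic "aircraft", each spanning a different 3-D subspace
rng = np.random.default_rng(2)
base = [rng.standard_normal((50, 3)) for _ in range(2)]
subs = [target_subspace(B @ rng.standard_normal((3, 20)), 3) for B in base]
test = base[0] @ rng.standard_normal(3)       # a profile of aircraft 0
```

The test profile lies in the first target's subspace, so its principal angle to that subspace is near zero and it is classified correctly.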

  14. Statistical Analysis of the Ionosphere based on Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Arikan, Feza; Necat Deviren, M.; Toker, Cenk

    2016-07-01

    The ionosphere is made up of a spatio-temporally varying trend structure and secondary variations due to solar, geomagnetic, gravitational and seismic activities. Hence, it is important to monitor the ionosphere and acquire up-to-date information about its state, both to better understand the physical phenomena that cause the variability and to predict the effect of the ionosphere on HF and satellite communications and satellite-based positioning systems. To characterise the behaviour of the ionosphere, we propose to apply Singular Value Decomposition (SVD) to Total Electron Content (TEC) maps obtained from the TNPGN-Active (Turkish National Permanent GPS Network) CORS network, which consists of 146 GNSS receivers spread over Turkey. IONOLAB-TEC values estimated from each station are spatio-temporally interpolated with very high spatial resolution using a Universal Kriging based algorithm with linear trend, namely IONOLAB-MAP. It is observed that the dominant singular value of the TEC maps is an indicator of the trend structure of the ionosphere. The diurnal, seasonal and annual variability of the most dominant value represents the solar effect on the ionosphere in the midlatitude range. Secondary, smaller singular values are indicators of secondary variations, which can be significant especially during geomagnetic storms or seismic disturbances. The dominant singular values are related to physical basis vectors from which the ionosphere can be fully reconstructed. Therefore, the proposed method can be used both for monitoring the current state of a region and for predicting and tracking future states of the ionosphere using singular values and singular basis vectors. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
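The trend/residual split via the dominant singular component can be sketched on a synthetic map stack. The epoch count, grid, and disturbance model below are invented for illustration and are not IONOLAB data:

```python
import numpy as np

def svd_trend(tec_maps, n_components=1):
    """Split a stack of TEC maps (epochs x grid points) into the dominant
    SVD trend and a residual carrying the secondary variations."""
    U, s, Vt = np.linalg.svd(tec_maps, full_matrices=False)
    k = n_components
    trend = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
    return trend, tec_maps - trend

# synthetic stack: a rank-1 diurnal trend plus a small localized disturbance
t = np.linspace(0, 2 * np.pi, 96)                     # 96 hypothetical epochs
diurnal = np.outer(10 + 8 * np.sin(t), np.ones(40))   # smooth TECU-like trend
storm = 0.5 * np.outer(np.exp(-(t - 3) ** 2), np.linspace(0, 1, 40))
trend, resid = svd_trend(diurnal + storm)
rel_err = np.linalg.norm(trend - diurnal) / np.linalg.norm(diurnal)
```

The dominant singular component recovers the diurnal trend to within a few percent, while the storm-like disturbance is pushed into the residual of the smaller singular components.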

  15. Signal evaluations using singular value decomposition for Thomson scattering diagnostics.

    PubMed

    Tojo, H; Yamada, I; Yasuhara, R; Yatsuka, E; Funaba, H; Hatae, T; Hayashi, H; Itami, K

    2014-11-01

    This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.

  16. Signal evaluations using singular value decomposition for Thomson scattering diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tojo, H., E-mail: tojo.hiroshi@jaea.go.jp; Yatsuka, E.; Hatae, T.

    2014-11-15

    This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.

  17. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    Singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is applied to an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
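A sketch of the TFM-SVD idea: stack time-domain and frequency-domain statistical moments into a small fixed-size matrix and take its singular values as a fixed-length feature vector. The exact matrix layout used in the paper may differ from this guess:

```python
import numpy as np

def moments(x):
    """Mean, standard deviation, skewness and kurtosis of a 1-D series."""
    mu, sd = x.mean(), x.std() + 1e-12
    z = (x - mu) / sd
    return np.array([mu, sd, (z ** 3).mean(), (z ** 4).mean()])

def tfm_svd_features(signal):
    """Stack time-domain and frequency-domain moments into a 2x4 matrix
    and return its singular values as the feature vector."""
    spectrum = np.abs(np.fft.rfft(signal))
    M = np.vstack([moments(signal), moments(spectrum)])
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 500)
f_sine = tfm_svd_features(np.sin(2 * np.pi * 7 * t))
f_noise = tfm_svd_features(rng.standard_normal(500))
```

Whatever the signal length, the feature vector has a fixed size (here two singular values), and structurally different signals map to clearly different features.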

  18. Incorporation of perceptually adaptive QIM with singular value decomposition for blind audio watermarking

    NASA Astrophysics Data System (ADS)

    Hu, Hwai-Tsu; Chou, Hsien-Hsin; Yu, Chu; Hsu, Ling-Yuan

    2014-12-01

    This paper presents a novel approach for blind audio watermarking. The proposed scheme utilizes the flexibility of discrete wavelet packet transformation (DWPT) to approximate the critical bands and adaptively determines suitable embedding strengths for carrying out quantization index modulation (QIM). The singular value decomposition (SVD) is employed to analyze the matrix formed by the DWPT coefficients and embed watermark bits by manipulating singular values subject to perceptual criteria. To achieve even better performance, two auxiliary enhancement measures are attached to the developed scheme. Performance evaluation and comparison are demonstrated in the presence of common digital signal processing attacks. Experimental results confirm that the combination of the DWPT, SVD, and adaptive QIM achieves imperceptible data hiding with satisfactory robustness and payload capacity. Moreover, the inclusion of self-synchronization capability allows the developed watermarking system to withstand time-shifting and cropping attacks.
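The core SVD-plus-QIM embedding step can be sketched as dither quantization of the largest singular value of a coefficient block. The DWPT stage, perceptual adaptation, and synchronization of the actual scheme are omitted, and the step size here is an arbitrary illustrative choice:

```python
import numpy as np

def qim_embed(block, bit, step=0.5):
    """Embed one bit by quantizing the block's largest singular value onto
    one of two interleaved lattices (binary dither QIM)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    d = step / 2 if bit else 0.0
    s[0] = np.round((s[0] - d) / step) * step + d
    return U @ np.diag(s) @ Vt

def qim_extract(block, step=0.5):
    """Decode by checking which lattice the largest singular value sits on."""
    s0 = np.linalg.svd(block, compute_uv=False)[0]
    r = s0 % step
    dist0 = min(r, step - r)          # distance to the base lattice (bit 0)
    dist1 = abs(r - step / 2)         # distance to the offset lattice (bit 1)
    return int(dist1 < dist0)

rng = np.random.default_rng(3)
block = 5 + rng.random((8, 8))        # block with a dominant first singular value
```

Because only the largest singular value is displaced, and by at most a quarter of the quantization step, the perturbation to the block is small while the bit remains decodable without the original signal (blind extraction).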

  19. Singular value decomposition metrics show limitations of detector design in diffuse fluorescence tomography

    PubMed Central

    Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.

    2010-01-01

    The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566

  20. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the arbitrary manner in which the effective singular values are selected weakens the robustness of this technique: improper selection leads to poor SVD de-noising performance. Moreover, the computational complexity of SVD is too high for real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI) and based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to localize the feature information of a transient flaw echo, and the MSI is then obtained from the SVD of each short-time data segment. Research shows that this indicator clearly indicates the location of ultrasonic flaw signals, and the computational complexity of the STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulations and experiments show that this technique is very efficient for real-time flaw detection from noisy data.
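A minimal sketch of the short-time indicator: slide a window over the signal, form a Hankel matrix from each segment, and record its largest singular value. The window sizes and the Hankel construction are illustrative choices, not the paper's tuned parameters:

```python
import numpy as np

def hankel(seg, rows):
    cols = len(seg) - rows + 1
    return np.array([seg[i:i + cols] for i in range(rows)])

def msi(signal, win=64, hop=16, rows=8):
    """Maximum singular value indicator: the largest singular value of each
    short-time segment's Hankel matrix. Correlated echoes score high,
    uncorrelated noise scores low."""
    starts = range(0, len(signal) - win + 1, hop)
    return np.array([np.linalg.svd(hankel(signal[s:s + win], rows),
                                   compute_uv=False)[0] for s in starts])

rng = np.random.default_rng(4)
sig = 0.1 * rng.standard_normal(1024)
sig[512:640] += np.sin(2 * np.pi * 0.1 * np.arange(128))   # simulated flaw echo
indicator = msi(sig)
peak = int(np.argmax(indicator)) * 16       # sample index of the strongest window
```

The indicator peaks at the windows covering the echo, locating the flaw without having to pick which singular values to keep.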

  1. A novel image watermarking method based on singular value decomposition and digital holography

    NASA Astrophysics Data System (ADS)

    Cai, Zhishan

    2016-10-01

    According to information optics theory, a novel watermarking method based on Fourier-transform digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted to a digital hologram using the Fourier transform. The original image is then divided into many non-overlapping blocks, and all the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are embedded into the singular value components of each block using an addition principle. Finally, the inverse SVD transformation is carried out on the blocks and hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is extracted first; an averaging operation is then carried out on the extracted information to generate the final watermark. Finally, the algorithm is simulated, and various attack tests are carried out to assess the watermarked image's resistance to attacks. The results show that the proposed algorithm has very good robustness against noise interference, cropping, compression, brightness stretching, etc. In particular, even when the image is rotated by a large angle, the watermark information can still be extracted correctly.

  2. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    NASA Astrophysics Data System (ADS)

    Cunningham, Laura J.; Mulholland, Anthony J.; Tant, Katherine M. M.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-04-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm.
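The detection rule, thresholding the largest singular value of the response matrix, can be sketched generically. The paper derives a weld-specific lower threshold; a simple noise-calibrated mean-plus-n-sigma rule is substituted here as a stand-in, and the matrix sizes are illustrative:

```python
import numpy as np

def flaw_detected(K, noise_mats, n_sigma=3.0):
    """Flag a flaw when the largest singular value of the array response
    matrix K exceeds a threshold calibrated from flaw-free noise matrices."""
    s1 = np.linalg.svd(K, compute_uv=False)[0]
    ref = [np.linalg.svd(N, compute_uv=False)[0] for N in noise_mats]
    return bool(s1 > np.mean(ref) + n_sigma * np.std(ref))

rng = np.random.default_rng(8)
# flaw-free calibration: 16-element array responses containing only grain noise
noise_mats = [0.1 * rng.standard_normal((16, 16)) for _ in range(50)]
# a flaw adds a strong rank-1 scatterer contribution to the response matrix
u, v = rng.standard_normal(16), rng.standard_normal(16)
scatterer = 3.0 * np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))
flawed = scatterer + 0.1 * rng.standard_normal((16, 16))
```

A point-like scatterer contributes a rank-1 term whose singular value stands clear of the noise-only singular value distribution, which is what makes the threshold test objective.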

  3. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    PubMed Central

    Cunningham, Laura J.; Mulholland, Anthony J.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-01-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm. PMID:27274683

  4. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is generally of high dimensionality, simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted to identify and classify fault patterns automatically. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
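The second and third stages (singular-value features plus nearest-neighbour classification) can be sketched with plain numpy. The dictionary-learning stage is replaced here by synthetic matrices with class-specific singular spectra, and the PCA step is skipped since these toy feature vectors are already short:

```python
import numpy as np

def sv_features(D, k=5):
    """Feature vector: the k leading singular values of a (learned)
    dictionary matrix, normalized to unit norm."""
    s = np.linalg.svd(D, compute_uv=False)[:k]
    return s / np.linalg.norm(s)

def knn_predict(x, X_train, y_train, k=3):
    """Plain k-nearest-neighbour vote in feature space."""
    nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return int(np.bincount(y_train[nearest]).argmax())

# stand-in "dictionaries" whose singular spectra decay at class-specific rates
rng = np.random.default_rng(5)
def fake_dictionary(decay):
    Q1, _ = np.linalg.qr(rng.standard_normal((8, 8)))
    Q2, _ = np.linalg.qr(rng.standard_normal((8, 8)))
    return Q1 @ np.diag(decay ** np.arange(8)) @ Q2

X = np.array([sv_features(fake_dictionary(d)) for d in [0.9] * 5 + [0.5] * 5])
y = np.array([0] * 5 + [1] * 5)
```

The singular spectrum is invariant to the orthogonal factors, so matrices built from the same decay rate collapse to the same feature vector and classify cleanly.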

  5. Statistical analysis of effective singular values in matrix rank determination

    NASA Technical Reports Server (NTRS)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance tests. Threshold bounds for perturbations due to finite-precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Numerical examples illustrate the usefulness of these bounds and compare them to previously known approaches.
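A simplified version of the idea, counting singular values above a noise-dependent threshold, can be sketched as follows. The bound sigma*(sqrt(m) + sqrt(n)) with a heuristic safety factor stands in for the paper's statistically derived confidence thresholds:

```python
import numpy as np

def effective_rank(A, noise_std, safety=1.5):
    """Count singular values exceeding a perturbation threshold for an
    m x n matrix observed in i.i.d. noise: sigma*(sqrt(m)+sqrt(n))
    approximates the largest singular value of the noise alone, inflated
    here by a heuristic safety factor."""
    m, n = A.shape
    tau = safety * noise_std * (np.sqrt(m) + np.sqrt(n))
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tau))

rng = np.random.default_rng(6)
U, _ = np.linalg.qr(rng.standard_normal((50, 3)))
V, _ = np.linalg.qr(rng.standard_normal((40, 3)))
A = U @ np.diag([10.0, 5.0, 2.0]) @ V.T + 0.01 * rng.standard_normal((50, 40))
```

The three signal singular values sit far above the noise threshold, so the effective rank of the perturbed matrix is recovered as 3.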

  6. Singular-value demodulation of phase-shifted holograms.

    PubMed

    Lopes, Fernando; Atlan, Michael

    2015-06-01

    We report on phase-shifted holographic interferogram demodulation by singular-value decomposition. Numerical processing of optically acquired interferograms over several modulation periods was performed in two steps: (1) rendering of off-axis complex-valued holograms by Fresnel transformation of the interferograms; and (2) eigenvalue spectrum assessment of the lag-covariance matrix of hologram pixels. Experimental results in low-light recording conditions were compared with demodulation by Fourier analysis, in the presence of random phase drifts.

  7. Singular spectrum and singular entropy used in signal processing of NC table

    NASA Astrophysics Data System (ADS)

    Wang, Linhong; He, Yiwen

    2011-12-01

    An NC (numerical control) table is a complex dynamic system. The dynamic characteristics caused by backlash, friction and elastic deformation among the components are so complex that they have become the bottleneck in enhancing the positioning accuracy, tracking accuracy and dynamic behavior of NC tables. This paper collects vibration acceleration signals from an NC table, analyzes the signals with the SVD (singular value decomposition) method, acquires the singular spectrum and calculates the singular entropy of the signals. The signal characteristics of the NC table and their regularities are revealed via characteristic quantities such as the singular spectrum and singular entropy. The steepness of the singular spectrum can be used to discriminate the complexity of signals: the results show that the signals in the direction of the driving axes are the simplest and the signals in the perpendicular direction are the most complex. The singular entropy values can be used to study the indeterminacy of signals. The results show that the signals of the NC table are neither simple signals nor white noise; the entropy values in the direction of the driving axes are lower; the entropy values increase with driving speed; and the entropy values under abnormal working conditions such as resonance or creep decrease noticeably.
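The two characteristic quantities can be sketched for a 1-D vibration signal: the normalized singular spectrum of its trajectory (Hankel) matrix, and the Shannon entropy of that spectrum. The window length is an illustrative choice:

```python
import numpy as np

def singular_spectrum(x, window=20):
    """Normalized singular spectrum of the signal's trajectory matrix."""
    cols = len(x) - window + 1
    H = np.array([x[i:i + cols] for i in range(window)])
    s = np.linalg.svd(H, compute_uv=False)
    return s / s.sum()

def singular_entropy(spectrum):
    """Shannon entropy of the singular spectrum: low for simple
    (near-periodic) signals, high for complex or noise-like ones."""
    p = spectrum[spectrum > 1e-15]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(10)
t = np.arange(1000)
h_simple = singular_entropy(singular_spectrum(np.sin(2 * np.pi * t / 50)))
h_noise = singular_entropy(singular_spectrum(rng.standard_normal(1000)))
```

A pure sinusoid concentrates its singular spectrum in two values (steep spectrum, low entropy), while white noise spreads it across all window components (flat spectrum, high entropy), which is exactly the discrimination used in the paper.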

  8. Glove-based approach to online signature verification.

    PubMed

    Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A

    2008-06-01

    Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel online signature verification system using the singular value decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove as an effective high-bandwidth data entry device for signature verification is presented. This SVD-based signature verification technique is tested, and its performance is shown to be able to recognize forged signatures with a false acceptance rate of less than 1.2%.

  9. Boosting brain connectome classification accuracy in Alzheimer's disease using higher-order singular value decomposition

    PubMed Central

    Zhan, Liang; Liu, Yashu; Wang, Yalin; Zhou, Jiayu; Jahanshad, Neda; Ye, Jieping; Thompson, Paul M.

    2015-01-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and of its prodromal stage, mild cognitive impairment (MCI), is crucial. There is also a growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on brain structural networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease. PMID:26257601

  10. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition

    PubMed Central

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-01-01

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme is capable of generating an adaptive dictionary whose atoms reveal the underlying structure of raw signals. Essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix serves as the feature vector. Since this vector is generally of high dimensionality, simple and practical principal component analysis (PCA) is applied to reduce the dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted to identify and classify fault patterns automatically. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385

  11. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has attracted considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, the singular values mainly reflect the energy of the decomposed SCs; traditional SVD denoising approaches are therefore essentially energy-based and tend to highlight the high-energy regular components in the measured signal while ignoring the weak features caused by early faults. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels rather than their energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC to the reconstruction of the denoised signal. In this way, some weak but informative SCs can be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated on both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as from train bearings. The results demonstrate that the proposed method can successfully extract weak fault features even in the presence of heavy noise and ambient interference.
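The reweighting idea can be sketched as follows: score each singular component with an information index instead of its energy, then recombine the best-scoring components with truncated linear weights. The `pmi` index below (fraction of a component's row energy falling on an assumed impact grid) is a crude stand-in for the paper's periodic modulation intensity:

```python
import numpy as np

def pmi(C, period=8):
    """Stand-in information index: fraction of the singular component's
    row energy on an assumed periodic-impact grid (the paper's periodic
    modulation intensity is defined differently)."""
    e = (C ** 2).sum(axis=1)
    return e[::period].sum() / (e.sum() + 1e-12)

def rsvd_denoise(X, info_index, keep=1):
    """Reweighted SVD: rank singular components (SCs) by an information
    index rather than energy, then recombine the top `keep` SCs with
    linearly decreasing (truncated linear) weights."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    SCs = [s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s))]
    order = np.argsort([-info_index(C) for C in SCs])
    w = np.linspace(1.0, 1.0 / keep, keep)
    return sum(wi * SCs[j] for wi, j in zip(w, order[:keep]))

# weak periodic fault component buried under a strong smooth interference
rng = np.random.default_rng(7)
uf = np.zeros(64); uf[::8] = 1.0                  # impacts every 8 rows
ui = np.sin(np.linspace(0, 3, 64))                # smooth interference profile
fault = np.outer(uf, rng.standard_normal(100))
X = 10 * np.outer(ui, rng.standard_normal(100)) + fault
den = rsvd_denoise(X, pmi)
cosine = (den.ravel() @ fault.ravel()) / (np.linalg.norm(den) * np.linalg.norm(fault))
```

An energy-based truncation would keep the 10-times-stronger interference component; the information index instead selects the weak periodic component, so the output correlates with the fault rather than the interference.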

  12. Optical character recognition with feature extraction and associative memory matrix

    NASA Astrophysics Data System (ADS)

    Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa

    1998-06-01

    A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly show the effectiveness of the method.
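The memory-matrix construction, an SVD-based pseudoinverse in which small singular values are zeroed, can be sketched as follows. The pattern and label shapes are illustrative:

```python
import numpy as np

def memory_matrix(X, Y, tol=1e-6):
    """Associative memory M with M @ X ~= Y: an SVD pseudoinverse of the
    stored-pattern matrix X in which small singular values are zeroed
    (the 'modifying small singular values' step)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)
    return Y @ Vt.T @ np.diag(s_inv) @ U.T

# store five 30-dimensional feature patterns with one-hot labels
rng = np.random.default_rng(11)
X = rng.standard_normal((30, 5))
Y = np.eye(5)
M = memory_matrix(X, Y)
```

Zeroing the small singular values keeps the pseudoinverse from amplifying noise directions while the stored patterns are still recalled exactly.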

  13. Characterization of cancer and normal tissue fluorescence through wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2008-02-01

    The statistical and characteristic features of the polarized fluorescence spectra from cancerous, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled us to isolate high- and low-frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues that is not present in the normal cases. In particular, the fluctuations in the perpendicular component fitted well with a Gaussian distribution for the cancerous tissues, whereas the spectral variations of normal and benign tissues show non-Gaussian behavior. The study of the difference of intensities in the parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues, which may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. A continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuations is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with the random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.

  14. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory required to calculate a low rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low rank approximation methods, which can benefit the use of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  15. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noise and harmonic interference generated by other components (e.g. axles and gears). In order to extract the bearing fault information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimise the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains the bearing fault information. However, a high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed using the RSSD method. The obtained low resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing the Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments, as described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.
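As a rough illustration of the ISVD idea (not the authors' implementation), one can embed a noisy signal in a Hankel-type matrix, smooth the leading right singular vectors with a Savitzky-Golay filter, and reconstruct by diagonal averaging. The test signal, window length, rank, and S-G parameters below are assumptions; the paper selects the filter parameters by Hilbert spectrum entropy:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.standard_normal(512)

# Embed the noisy 1-D signal in a Hankel-type matrix so that the SVD can
# separate the low-rank oscillation from broadband noise.
L = 128
H = np.lib.stride_tricks.sliding_window_view(noisy, L)  # H[i, j] = noisy[i + j]
U, s, Vt = np.linalg.svd(H, full_matrices=False)

# Smooth the leading right singular vectors with a Savitzky-Golay filter
# (window length and polynomial order are placeholders).
k = 2
Vt_smooth = savgol_filter(Vt[:k], window_length=11, polyorder=3, axis=1)
H_d = (U[:, :k] * s[:k]) @ Vt_smooth

# Average along anti-diagonals to map the matrix back to a 1-D signal.
den = np.zeros_like(noisy)
cnt = np.zeros_like(noisy)
for i in range(H_d.shape[0]):
    den[i:i + L] += H_d[i]
    cnt[i:i + L] += 1
den /= cnt
print(np.corrcoef(den, clean)[0, 1])  # reconstruction tracks the clean tone
```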

  16. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known ones. In collaborative filtering computations, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a roughly approximated factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute the SVD for the open competition called the "Netflix Prize". The algorithm uses an iterative method in which the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown experimentally to be efficient.
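Webb's iterative approach can be sketched as stochastic gradient descent on the observed entries only (CPU-only here; the GPU parallelisation is the paper's contribution and is not reproduced). The hyperparameters and the synthetic rating matrix are illustrative assumptions:

```python
import numpy as np

def funk_svd(R, mask, k=6, lr=0.01, reg=0.05, epochs=200, seed=0):
    """SGD low-rank factorisation R ~ P @ Q.T fitted on observed entries only,
    in the spirit of Simon Funk's Netflix Prize algorithm."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((R.shape[0], k))
    Q = 0.1 * rng.standard_normal((R.shape[1], k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - P[u] @ Q[i]
            # Update both factors from the pre-update values.
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q

# Synthetic "ratings": rank-2 ground truth with 60% of entries observed.
rng = np.random.default_rng(1)
truth = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(truth.shape) < 0.6
P, Q = funk_svd(truth, mask)
pred = P @ Q.T
rmse_seen = np.sqrt(np.mean((pred - truth)[mask] ** 2))
rmse_held = np.sqrt(np.mean((pred - truth)[~mask] ** 2))
print(rmse_seen, rmse_held)
```

The per-entry updates are independent within an epoch, which is what makes the algorithm amenable to the GPU implementation the paper describes.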

  17. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact, and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for large-scale data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
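For discrete data, a Legendre-basis least-squares fit of the kind being compared against is readily available in NumPy; a small sketch (the test function, noise level, and degree are assumptions, and `legfit` itself relies on SVD-based least squares via `numpy.linalg.lstsq`):

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Noisy samples of a smooth function on [-1, 1].
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 400)
f = np.exp(-x**2) * np.cos(3 * x)
y = f + 0.01 * rng.standard_normal(x.size)

# Least-squares fit in the Legendre basis (degree is an assumption).
coef = leg.legfit(x, y, deg=10)
y_fit = leg.legval(x, coef)
print(np.max(np.abs(y_fit - f)))  # close to the noise floor
```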

  18. Boosting Classification Accuracy of Diffusion MRI Derived Brain Networks for the Subtypes of Mild Cognitive Impairment Using Higher Order Singular Value Decomposition

    PubMed Central

    Zhan, L.; Liu, Y.; Zhou, J.; Ye, J.; Thompson, P.M.

    2015-01-01

    Mild cognitive impairment (MCI) is an intermediate stage between normal aging and Alzheimer's disease (AD), and around 10-15% of people with MCI develop AD each year. More recently, MCI has been further subdivided into early and late stages, and there is interest in identifying sensitive brain imaging biomarkers that help to differentiate stages of MCI. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying early versus late MCI. PMID:26413202

  19. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.

  20. Supervised neural network classification of pre-sliced cooked pork ham images using quaternionic singular values.

    PubMed

    Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2010-03-01

    The quaternionic singular value decomposition is a technique to decompose a quaternion matrix (representation of a colour image) into quaternion singular vector and singular value component matrices exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values, as robust features for the classification of sliced pork ham images, using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performance for the training, validation and test set were 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values. Copyright 2009 Elsevier Ltd. All rights reserved.

  1. Cost Prediction via Quantitative Analysis of Complexity in U.S. Navy Shipbuilding

    DTIC Science & Technology

    2014-06-01

    in regards to the analysis of advanced sensors and weaponry, the summation of singular values via a singular value decomposition will be used in the...In the DDG 51 class, the Main Reduction Gear (MRG) reduces the 3600-RPM produced by the LM-2500 gas turbines to approximately 168-RPM (at full...RDT&E efforts are currently underway to reduce complexity of the MCS by developing a wireless approach that will concurrently boost the host ship’s

  2. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138

  3. Nonlinear QR code based optical image encryption using spiral phase transform, equal modulus decomposition and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.

    2018-01-01

    In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, the EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e. one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs, and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attacks, specific attacks, and brute-force attacks. Simulation results are presented in support of the proposed idea.

  4. Singular value decomposition based feature extraction technique for physiological signal analysis.

    PubMed

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is a popular technique for calculating and describing the complexity of physiological signals. Many studies use this approach to detect changes in physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is used to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal attained a classification accuracy rate of 89.157%, which is higher than that using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF) disease.

  5. Singular value decomposition approach to the yttrium occurrence in mineral maps of rare earth element ores using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Romppanen, Sari; Häkkänen, Heikki; Kaski, Saara

    2017-08-01

    Laser-induced breakdown spectroscopy (LIBS) has been used in analysis of rare earth element (REE) ores from the geological formation of Norra Kärr Alkaline Complex in southern Sweden. Yttrium has been detected in eudialyte (Na15Ca6(Fe,Mn)3Zr3Si(Si25O73)(O,OH,H2O)3(OH,Cl)2) and catapleiite (Ca/Na2ZrSi3O9·2H2O). Singular value decomposition (SVD) has been employed in classification of the minerals in the rock samples, and maps representing the mineralogy in the sampled area have been constructed. Based on the SVD classification, the percentage of the yttrium-bearing ore minerals can be calculated even in fine-grained rock samples.

  6. Using Rényi parameter to improve the predictive power of singular value decomposition entropy on stock market

    NASA Astrophysics Data System (ADS)

    Jiang, Jiaqi; Gu, Rongbao

    2016-04-01

    This paper generalizes the traditional singular value decomposition entropy method by incorporating orders q of Rényi entropy. We analyze the predictive power of the entropy based on the trajectory matrix using Shanghai Composite Index (SCI) and Dow Jones Index (DJI) data in both static and dynamic tests. In the static test on the SCI, the results of global Granger causality tests all turn out to be significant regardless of the orders selected, but the entropy fails to show much predictability in the American stock market. In the dynamic test, we find that the predictive power can be significantly improved for the SCI by our generalized method but not for the DJI. This suggests that noise and errors affect the SCI more frequently than the DJI. Finally, results obtained using different lengths of sliding window also corroborate this finding.
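The quantity being generalised can be sketched as follows: singular-value-decomposition entropy of a delay (trajectory) matrix, with a Rényi order-q variant. The window length and test signals are illustrative, not the paper's index data:

```python
import numpy as np

def svd_entropy(x, m=10, q=1.0):
    """Entropy of the normalised singular values of the trajectory (delay)
    matrix of x; q = 1 gives the Shannon form, q != 1 the Renyi form."""
    X = np.lib.stride_tricks.sliding_window_view(x, m)
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-15]
    if q == 1.0:
        return float(-(p * np.log(p)).sum())
    return float(np.log((p ** q).sum()) / (1.0 - q))

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)
wave = np.sin(np.linspace(0, 8 * np.pi, 1000))
print(svd_entropy(noise), svd_entropy(wave))  # noise: near log(m); wave: low
print(svd_entropy(noise, q=2.0))              # order-2 Renyi variant
```

A structured (low-rank) trajectory matrix concentrates its singular values and yields low entropy, whereas noise spreads them out.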

  7. Singular value decomposition of received ultrasound signal to separate tissue, blood flow, and cavitation signals

    NASA Astrophysics Data System (ADS)

    Ikeda, Hayato; Nagaoka, Ryo; Lafond, Maxime; Yoshizawa, Shin; Iwasaki, Ryosuke; Maeda, Moe; Umemura, Shin-ichiro; Saijo, Yoshifumi

    2018-07-01

    High-intensity focused ultrasound is a noninvasive treatment applied by externally irradiating ultrasound to the body to coagulate the target tissue thermally. Recently, it has been proposed as a noninvasive treatment for vascular occlusion to replace conventional invasive treatments. Cavitation bubbles generated by the focused ultrasound can accelerate the effect of thermal coagulation. However, the tissues surrounding the target may be damaged by cavitation bubbles generated outside the treatment area. Conventional methods based on Doppler analysis only in the time domain are not suitable for monitoring blood flow in the presence of cavitation. In this study, we proposed a novel filtering method based on the differences in spatiotemporal characteristics, to separate tissue, blood flow, and cavitation by employing singular value decomposition. Signals from cavitation and blood flow were extracted automatically using spatial and temporal covariance matrices.
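A common way to realise this kind of SVD-based separation is to form a Casorati (pixels × frames) matrix and band-filter on the singular-value index; a toy sketch in that spirit (the synthetic data and the cut-offs `lo`, `hi` are assumptions; the paper derives the separation from spatial and temporal covariance characteristics):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_t = 400, 200
t = np.arange(n_t)

# Synthetic Casorati matrix (pixels x frames): strong slow "tissue" motion,
# a weaker faster "flow" component, and white noise.
tissue = np.outer(rng.standard_normal(n_pix), np.cos(2 * np.pi * 0.01 * t))
flow = 0.2 * np.outer(rng.standard_normal(n_pix), np.cos(2 * np.pi * 0.15 * t))
X = tissue + flow + 0.01 * rng.standard_normal((n_pix, n_t))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rank-band filter: drop the largest component (tissue) and the noisy tail.
lo, hi = 1, 5
flow_est = U[:, lo:hi] @ np.diag(s[lo:hi]) @ Vt[lo:hi]
print(np.corrcoef(flow_est.ravel(), flow.ravel())[0, 1])
```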

  8. A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive DUAL-PCNN in NSST domain

    NASA Astrophysics Data System (ADS)

    Cheng, Boyang; Jin, Longxu; Li, Guoning

    2018-06-01

    Visible light and infrared image fusion has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by the NSST. Furthermore, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition in each source image, is used as the adaptive linking strength to enhance fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes are used in fusion experiments, and the fusion results are evaluated subjectively and objectively. Both evaluations show that the proposed algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.

  9. Ultra-Dense Quantum Communication Using Integrated Photonic Architecture

    DTIC Science & Technology

    2012-02-03

    and t_ae have the same right singular vectors, and their singular-value decompositions can be written as t_ab = u_ab s_ab v†, (30) t_ae = u_ae s_ae v†, (31)...freedom such as polarization or spatial modes), making its implementation ideal for fiber optics networks. (iii) The protocol promises unprecedented...well as temporal correlations. In particular, using 8 wavelength channels for an additional 3 bpp and two polarization states for one additional bpp

  10. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. Rank consistency and prediction/estimation performance bounds for the estimator are established in a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
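The adaptively soft-thresholded SVD solution mentioned above can be sketched in a toy form where weights decay with the singular value, so larger singular values are penalised less. The weight rule `s**(-gamma)` and the penalty level `lam` are illustrative assumptions in the spirit of adaptive nuclear norm penalties, not the authors' exact estimator:

```python
import numpy as np

def adaptive_soft_threshold_svd(Y, lam, gamma=2.0):
    """Soft-threshold singular values with weights s_i^(-gamma):
    large singular values are barely shrunk, small ones are zeroed."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = (s + 1e-12) ** (-gamma)       # weights decrease with singular value
    s_t = np.maximum(s - lam * w, 0.0)
    return U @ np.diag(s_t) @ Vt, s_t

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
Y = low_rank + 0.1 * rng.standard_normal((60, 40))
X_hat, s_t = adaptive_soft_threshold_svd(Y, lam=5.0)

err_raw = np.linalg.norm(Y - low_rank)      # error before shrinkage
err_hat = np.linalg.norm(X_hat - low_rank)  # error after shrinkage
print(int((s_t > 0).sum()), err_raw, err_hat)
```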

  11. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    NASA Astrophysics Data System (ADS)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

    For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. In the SVD-based method, the image to be inspected was projected onto its first left and right singular vectors, respectively. If there were defects in the image, there would be sharp changes in the projections, so defects could be determined and located according to sharp changes in the projections of each inspected image. This method was simple and practical, but the SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in terms of time consumption over image segmentation-based methods. Here, we present an improved SVD-based method in which a defect-free image, acquired under the same environment as the images to be inspected, is used as a reference. The singular vectors of each inspected image are replaced by the singular vectors of the reference image, and the SVD is performed only once, off-line, for the reference image before defect detection, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
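The projection idea can be sketched with NumPy on toy images. The flat texture, the defect patch, and the robust-threshold detector below are all assumptions for illustration, not the authors' steel-strip data or decision rule:

```python
import numpy as np

rng = np.random.default_rng(0)
# Defect-free reference image acquired under the same conditions (toy texture).
img_ref = 0.5 + 0.01 * rng.standard_normal((256, 256))

# Image to be inspected: same texture plus a small bright defect patch.
img = 0.5 + 0.01 * rng.standard_normal((256, 256))
img[100:110, 150:170] += 0.5

# SVD is performed once, off-line, on the reference image only.
U, s, Vt = np.linalg.svd(img_ref, full_matrices=False)
u1, v1 = U[:, 0], Vt[0]

# Project the inspected image onto the reference singular vectors.
row_profile = img @ v1      # one value per image row
col_profile = img.T @ u1    # one value per image column

def sharp_changes(profile, k=6.0):
    """Indices whose robust z-score exceeds k (a simple stand-in detector)."""
    d = np.abs(profile - np.median(profile))
    return np.where(d > k * 1.4826 * np.median(d))[0]

print(sharp_changes(row_profile))   # expect rows near 100-109
print(sharp_changes(col_profile))   # expect columns near 150-169
```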

  12. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa

    2012-02-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g. sweeping procedures), which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of n_svd small subcluster vectors using singular value decomposition. For low entanglement entropy S_ee (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-n_svd^(1/S_ee)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007

  13. Multiset singular value decomposition for joint analysis of multi-modal data: application to fingerprint analysis

    NASA Astrophysics Data System (ADS)

    Emge, Darren K.; Adalı, Tülay

    2014-06-01

    As the availability and use of imaging methodologies continues to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differ, implying that the observation lengths of each modality can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Through simulations, we show the inter-modal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.

  14. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    NASA Astrophysics Data System (ADS)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.

  15. Singular Value Decomposition Method to Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Freed, Jack H

    2017-11-16

    Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude in improvement in SNR obviates the need for regularization, which achieves a compromise between canceling effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r) between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.

  16. Interior sound field control using generalized singular value decomposition in the frequency domain.

    PubMed

    Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane

    2017-01-01

    The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and, by approximating the sources as monopole and radial dipole transducers, avoids modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided through simulations for interior problems in a free field scenario with circular arrays and in a reflective environment with square arrays.

  17. Holographic entanglement entropy in Suzuki-Trotter decomposition of spin systems.

    PubMed

    Matsueda, Hiroaki

    2012-03-01

    In quantum spin chains at criticality, two types of scaling for the entanglement entropy exist: one comes from conformal field theory (CFT), and the other is for entanglement support of matrix product state (MPS) approximation. On the other hand, the quantum spin-chain models can be mapped onto two-dimensional (2D) classical ones by the Suzuki-Trotter decomposition. Motivated by the scaling and the mapping, we introduce information entropy for 2D classical spin configurations as well as a spectrum, and examine their basic properties in the Ising and the three-state Potts models on the square lattice. They are defined by the singular values of the reduced density matrix for a Monte Carlo snapshot. We find scaling relations of the entropy compatible with the CFT and the MPS results. Thus, we propose that the entropy is a kind of "holographic" entanglement entropy. At T_c, the spin configuration is fractal, and various sizes of ordered clusters coexist. Then, the singular values automatically decompose the original snapshot into a set of images with different length scales, respectively. This is the origin of the scaling. In contrast to the MPS scaling, long-range spin correlation can be described by only a few singular values. Furthermore, the spectrum, which is a set of logarithms of the singular values, also seems to be a holographic entanglement spectrum. We find multiple gaps in the spectrum, and in contrast to the topological phases, the low-lying levels below the gap represent spontaneous symmetry breaking. These contrasts are strong evidence of the dual nature of the holography. Based on these observations, we discuss the amount of information contained in one snapshot.

  18. On the use of the singular value decomposition for text retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Husbands, P.; Simon, H.D.; Ding, C.

    2000-12-04

    The use of the Singular Value Decomposition (SVD) has been proposed for text retrieval in several recent works. This technique uses the SVD to project very high dimensional document and query vectors into a low dimensional space. In this new space it is hoped that the underlying structure of the collection is revealed, thus enhancing retrieval performance. Theoretical results have provided some evidence for this claim, and to some extent experiments have confirmed it. However, these studies have mostly used small test collections and simplified document models. In this work we investigate the use of the SVD on large document collections. We show that, if interpreted as a mechanism for representing the terms of the collection, this technique alone is insufficient for dealing with the variability in term occurrence. Section 2 introduces the text retrieval concepts necessary for our work. A short description of our experimental architecture is presented in Section 3. Section 4 describes how term occurrence variability affects the SVD and then shows how the decomposition influences retrieval performance. A possible way of improving SVD-based techniques is presented in Section 5, and conclusions are given in Section 6.
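The basic mechanism under discussion (latent semantic indexing) is easy to sketch: take a truncated SVD of a term-document matrix, then fold queries into the low-dimensional space. The tiny vocabulary and counts below are invented for illustration:

```python
import numpy as np

# Tiny term-document matrix (terms x docs); counts are illustrative.
# terms: ship, boat, ocean, tree, leaf
A = np.array([
    [2, 1, 0, 0],   # ship
    [1, 0, 1, 0],   # boat
    [1, 1, 1, 0],   # ocean
    [0, 0, 0, 2],   # tree
    [0, 0, 1, 1],   # leaf
], dtype=float)

# Rank-2 LSI: project documents and queries into a 2-D latent space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_2d = np.diag(s[:k]) @ Vt[:k]      # documents in latent space

q = np.array([1, 0, 1, 0, 0], float)   # query: "ship ocean"
q_2d = U[:, :k].T @ q                  # fold the query in

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = [cos(q_2d, docs_2d[:, j]) for j in range(A.shape[1])]
print(np.round(scores, 3))  # nautical docs should outrank the "tree" doc
```

The abstract's point is that this representation of terms, on its own, does not cope with term-occurrence variability at large scale; the sketch only shows the mechanism being critiqued.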

  19. Operational modal analysis using SVD of power spectral density transmissibility matrices

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Laier, Jose Elias

    2014-05-01

    This paper proposes the singular value decomposition of power spectral density transmissibility matrices with different references (PSDTM-SVD) as a method for identifying natural frequencies and mode shapes of a dynamic system subjected to excitations under operational conditions. At the system poles, the rows of the proposed transmissibility matrix converge to the same ratio of amplitudes of the vibration modes. As a result, the matrices are linearly dependent in their columns, and their singular values converge to zero. Singular values are used to determine the natural frequencies, and the first left singular vectors are used to estimate mode shapes. A numerical example of the finite element model of a beam subjected to colored noise excitation is analyzed to illustrate the accuracy of the proposed method. Results of the PSDTM-SVD method in the numerical example are compared with those obtained using frequency domain decomposition (FDD) and power spectrum density transmissibility (PSDT). It is demonstrated that the proposed method does not depend on the excitation characteristics, in contrast to the FDD method, which assumes white noise excitation, and further reduces the risk of identifying extra non-physical poles in comparison to the PSDT method. Furthermore, a case study is performed using data from an operational vibration test of a bridge with a simply supported beam system. This real application to a full-sized bridge has shown that the proposed PSDTM-SVD method is able to identify the operational modal parameters. Parameters identified by the PSDTM-SVD in the real application agree well with those identified by the FDD and PSDT methods.

  20. DUALITY IN MULTIVARIATE RECEPTOR MODEL. (R831078)

    EPA Science Inventory

    Multivariate receptor models are used for source apportionment of multiple observations of compositional data of air pollutants that obey mass conservation. Singular value decomposition of the data leads to two sets of eigenvectors. One set of eigenvectors spans a space in whi...

  1. Prediction of monthly-seasonal precipitation using coupled SVD patterns between soil moisture and subsequent precipitation

    Treesearch

    Yongqiang Liu

    2003-01-01

    It was suggested in a recent statistical correlation analysis that predictability of monthly-seasonal precipitation could be improved by using coupled singular value decomposition (SVD) patterns between soil moisture and precipitation instead of their values at individual locations. This study provides predictive evidence for this suggestion by comparing skills of two...

  2. A NEW GUI FOR GLOBAL ORBIT CORRECTION AT THE ALS USING MATLAB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pachikara, J.; Portmann, G.

    2007-01-01

    Orbit correction is a vital procedure at particle accelerators around the world. The orbit correction routine currently used at the Advanced Light Source (ALS) is somewhat cumbersome, and a new Graphical User Interface (GUI) has been developed using MATLAB. The correction algorithm uses a singular value decomposition method to calculate the corrector magnet changes required to correct the orbit. The application has been successfully tested at the ALS. The GUI display provided important information regarding the orbit, including the orbit errors before and after correction, the amount of corrector magnet strength change, and the standard deviation of the orbit error with respect to the number of singular values used. The use of more singular values resulted in better correction of the orbit error, but at the expense of enormous corrector magnet strength changes. The results showed an inverse relationship between the peak-to-peak values of the orbit error and the number of singular values used. The GUI helps the ALS physicists and operators understand the specific behavior of the orbit. The application is convenient to use and is a substantial improvement over the previous orbit correction routine in terms of user friendliness and compactness.
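
The trade-off the abstract describes, where more singular values correct the orbit better but demand larger corrector strengths, can be sketched as a truncated-SVD least-squares solve. The response matrix and orbit error below are random placeholders, not ALS data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response matrix: orbit shift at 10 BPMs per unit kick of 6 correctors.
R = rng.normal(size=(10, 6))
x_err = rng.normal(size=10)             # measured orbit error

U, s, Vt = np.linalg.svd(R, full_matrices=False)

def corrector_change(n_sv):
    """Least-squares corrector kicks using only the n_sv largest singular values."""
    inv = np.zeros_like(s)
    inv[:n_sv] = 1.0 / s[:n_sv]
    return -(Vt.T * inv) @ (U.T @ x_err)   # V diag(1/s) U^T, truncated

# More singular values -> smaller residual orbit error, larger corrector strengths.
for n in (2, 4, 6):
    dtheta = corrector_change(n)
    residual = np.linalg.norm(x_err + R @ dtheta)
```

Truncating the small singular values discards the poorly conditioned directions of the response matrix, which is exactly why the residual and the kick strengths move in opposite directions as n grows.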

  3. Light focusing through a multiple scattering medium: ab initio computer simulation

    NASA Astrophysics Data System (ADS)

    Danko, Oleksandr; Danko, Volodymyr; Kovalenko, Andrey

    2018-01-01

    The present study considers ab initio computer simulation of light focusing through a complex scattering medium. The focusing is performed by shaping the incident light beam in order to obtain a small focused spot on the opposite side of the scattering layer. MSTM software (Auburn University) is used to simulate the propagation of an arbitrary monochromatic Gaussian beam and obtain the 2D distribution of the optical field in the selected plane of the investigated volume. Based on the set of incident and scattered fields, the pair of right and left eigen bases and the corresponding singular values were calculated. A pair of right and left eigen modes together with the corresponding singular value constitutes a transmittance eigen channel of the disordered medium. Thus, the scattering process is described in three steps: 1) decomposition of the initial field in the right eigen basis; 2) scaling of the decomposition coefficients by the corresponding singular values; 3) assembly of the scattered field as the composition of the weighted left eigen modes. Basis fields are represented as linear combinations of the original Gaussian beams and scattered fields. It was demonstrated that 60 independent control channels enable focusing of the light into a spot with a minimal radius of approximately 0.4 μm at half maximum. The intensity enhancement in the focal plane was 68, which coincided with the theoretical prediction.
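
The three-step description of the scattering process is the SVD identity applied factor by factor. A minimal numerical sketch; the transmission matrix and fields here are random placeholders, not MSTM results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical complex transmission matrix (output modes x input modes).
T = rng.normal(size=(8, 6)) + 1j * rng.normal(size=(8, 6))
U, s, Vh = np.linalg.svd(T, full_matrices=False)

E_in = rng.normal(size=6) + 1j * rng.normal(size=6)   # incident field

# 1) decompose the incident field in the right eigen basis
c = Vh @ E_in
# 2) scale the decomposition coefficients by the singular values
c_scaled = s * c
# 3) assemble the scattered field from the weighted left eigen modes
E_out = U @ c_scaled                                  # equals T @ E_in

# Coupling all power into the strongest eigen channel maximizes transmission:
E_best = Vh[0].conj()                                 # first right eigen mode
```

Sending the first right eigen mode through the medium transmits with amplitude equal to the largest singular value, which is the principle behind shaping the incident beam for focusing.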

  4. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition

    PubMed Central

    Lv, Yong; Song, Gangbing

    2018-01-01

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis and determine the effective IMFs; the characteristic frequencies of the multiple faults can then be extracted. Numerical simulations and an application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510

  5. Multi-Fault Diagnosis of Rolling Bearings via Adaptive Projection Intrinsically Transformed Multivariate Empirical Mode Decomposition and High Order Singular Value Decomposition.

    PubMed

    Yuan, Rui; Lv, Yong; Song, Gangbing

    2018-04-16

    Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third order tensor, high order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis and determine the effective IMFs; the characteristic frequencies of the multiple faults can then be extracted. Numerical simulations and an application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.
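
The HOSVD step, computing one factor matrix per mode from the SVD of each tensor unfolding, can be sketched as follows. The tensor dimensions (channels × IMFs × samples) and the random data are invented for illustration; only the decomposition itself follows the standard HOSVD recipe:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tensor of aligned IMFs: (channels, IMF index, time samples).
X = rng.normal(size=(4, 5, 200))

def mode_unfold(T, mode):
    """Unfold a third-order tensor along one mode into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# HOSVD: the factor matrix of each mode is the left singular basis of its
# unfolding; the number of dominant singular values per mode is the effective
# rank, which is what the paper uses to estimate the fault number.
factors, spectra = [], []
for mode in range(3):
    U, s, _ = np.linalg.svd(mode_unfold(X, mode), full_matrices=False)
    factors.append(U)
    spectra.append(s)

# Core tensor: G = X  x_1 U1^T  x_2 U2^T  x_3 U3^T (via einsum).
G = np.einsum('ia,jb,kc,ijk->abc', factors[0], factors[1], factors[2], X)
```

Multiplying the core tensor back by the factor matrices reproduces X exactly, since each factor spans the corresponding mode space.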

  6. Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrighi, Bill

    2016-03-03

    libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements two parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.

  7. Unitary Operators on the Document Space.

    ERIC Educational Resources Information Center

    Hoenkamp, Eduard

    2003-01-01

    Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)

  8. Asymmetric color image encryption based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping

    2017-02-01

    A novel asymmetric color image encryption approach using singular value decomposition (SVD) is proposed. The original color image is encrypted into a ciphertext shown as an indexed image by the proposed method. The red, green and blue components of the color image are encoded into a complex function which is then separated into U, S and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while implementing phase-truncation. The diagonal entries of the three diagonal matrices of the SVD results are extracted, scrambled, and combined to construct the colormap of the ciphertext. Thus, the encrypted indexed image occupies less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from the phase-truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm. We also analyze the security of the proposed system.

  9. Robust and Efficient Biomolecular Clustering of Tumor Based on ${p}$ -Norm Singular Value Decomposition.

    PubMed

    Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan

    2017-07-01

    High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix of the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher dimensional data sets from the cancer genome atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, experiments show that the proposed method is more efficient for processing higher dimensional data, with good robustness, stability, and superior time performance.

  10. Singular value decomposition based impulsive noise reduction in multi-frequency phase-sensitive demodulation of electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Hao, Zhenhua; Cui, Ziqiang; Yue, Shihong; Wang, Huaxiang

    2018-06-01

    As an important technique in electrical impedance tomography (EIT), multi-frequency phase-sensitive demodulation (PSD) can be viewed as a matched filter for measurement signals and as an optimal linear filter in the case of Gaussian-type noise. However, the additive noise usually possesses impulsive characteristics, so reducing impulsive noise in multi-frequency PSD effectively is a challenging task. In this paper, an approach for impulsive noise reduction in multi-frequency PSD of EIT is presented. Instead of a linear filter, a singular value decomposition filter is employed as the pre-stage filtering module prior to PSD, which has the advantages of zero phase shift, little distortion, and a high signal-to-noise ratio (SNR) in digital signal processing. Simulation and experimental results demonstrate that the proposed method can effectively eliminate the influence of impulsive noise in multi-frequency PSD and is capable of achieving a higher SNR and a smaller demodulation error.
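
A common way to realize such an SVD pre-filter is to embed the measured signal in a Hankel (trajectory) matrix, truncate the small singular values, and average the anti-diagonals back into a signal: impulsive spikes spread their energy over many small singular components and are largely removed. A sketch on synthetic data; the signal model, spike statistics, and window length are all invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical measurement: a sinusoid corrupted by impulsive spikes.
n = 400
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 40)
noisy = clean.copy()
spikes = rng.choice(n, size=8, replace=False)
noisy[spikes] += rng.normal(scale=5.0, size=8)

# Hankel trajectory matrix; a single real sinusoid occupies two singular
# components, so keep only the dominant pair.
L = 80
H = np.array([noisy[i:i + L] for i in range(n - L + 1)])
U, s, Vt = np.linalg.svd(H, full_matrices=False)
H_f = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]

# Average over anti-diagonals to map the trajectory matrix back to a signal.
filtered = np.zeros(n)
counts = np.zeros(n)
for i in range(H_f.shape[0]):
    filtered[i:i + L] += H_f[i]
    counts[i:i + L] += 1
filtered /= counts
```

Each spike contributes roughly a constant anti-diagonal to H, whose energy is spread across many singular values of similar size, so the rank-2 truncation suppresses it while keeping the sinusoid nearly intact.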

  11. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircrafts plays a crucial role in the aerodynamic design process. Effective parametric approaches have large design space with a few variables. Parametric methods that commonly used nowadays are summarized in this paper, and their principles have been introduced briefly. Two-dimensional parametric methods include B-Spline method, Class/Shape function transformation method, Parametric Section method, Hicks-Henne method and Singular Value Decomposition method, and all of them have wide application in the design of the airfoil. This survey made a comparison among them to find out their abilities in the design of the airfoil, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-form deformation method. Those methods extended from two-dimensional parametric methods have promising prospect in aircraft modeling. Since different parametric methods differ in their characteristics, real design process needs flexible choice among them to adapt to subsequent optimization procedure.

  12. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    PubMed

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
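
The deconvolution the authors walk through reduces to one linear-algebra identity: if the mixture spectra are M = S C for pure-component spectra S and concentrations C, then C = S⁺M, with the pseudoinverse S⁺ built from the SVD. A noiseless sketch; the spectra and concentrations are random placeholders, not the pH-indicator data from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical pure-component spectra (50 wavelengths x 3 components).
S = np.abs(rng.normal(size=(50, 3)))

# Mixture spectra at 10 titration points: M = S @ C.
C_true = np.abs(rng.normal(size=(3, 10)))    # component concentrations
M = S @ C_true

# Pseudoinverse via SVD: S+ = V diag(1/s) U^T; with noisy data the small
# singular values would be truncated before inverting.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
S_pinv = Vt.T @ np.diag(1.0 / s) @ U.T

C_est = S_pinv @ M                            # recovered concentrations
```

In the noiseless full-rank case the recovery is exact; the paper's noisy cases correspond to damping or truncating the small singular values in the inverse.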

  13. Sliding window denoising K-Singular Value Decomposition and its application on rolling bearing impact fault diagnosis

    NASA Astrophysics Data System (ADS)

    Yang, Honggang; Lin, Huibin; Ding, Kang

    2018-05-01

    The performance of sparse features extraction by commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected in rolling bearing diagnosis, furthermore, the calculating speed is relatively slow and the dictionary becomes so redundant when the fault signal is relatively long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of time domain signal containing impacts to perform sliding window dictionary learning and select an optimal pattern with oscillating information of the rolling bearing fault according to a maximum variance principle. An inner product operation between the optimal pattern and the whole fault signal is performed to enhance the characteristic of the impacts' occurrence moments. Lastly, the signal is reconstructed at peak points of the inner product to realize the extraction of the rolling bearing fault features. Both simulation and experiments verify that the method could extract the fault features effectively.

  14. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of the FXCT imaging is that of computed tomography of the first generation. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 /spl mu/g I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to the projections by the Monte Carlo simulation and the experiment to confirm its effectiveness.

  15. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
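
A minimal sketch of the disclosed idea, exact DMD on snapshot pairs with the background taken as the mode whose eigenvalue is closest to 1; the synthetic video, truncation rank, and blob motion are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical video: static background plus a small moving bright blob;
# each column of X is one flattened frame.
h = w = 16
n_frames = 30
bg = rng.uniform(size=(h, w))
frames = []
for k in range(n_frames):
    f = bg.copy()
    f[k % h, (2 * k) % w] += 3.0          # moving "foreground" pixel
    frames.append(f.ravel())
X = np.column_stack(frames)               # (pixels, frames)

# Exact DMD on the snapshot pairs X1 -> X2.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 10                                    # truncation rank
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
Atilde = U.T @ X2 @ Vt.T @ np.diag(1.0 / s)
lam, W = np.linalg.eig(Atilde)
Phi = X2 @ Vt.T @ np.diag(1.0 / s) @ W    # DMD modes

# Background = the mode whose eigenvalue is closest to 1 (zero frequency).
bg_idx = np.argmin(np.abs(lam - 1.0))
b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]
background = np.outer(Phi[:, bg_idx], b[bg_idx] * lam[bg_idx] ** np.arange(n_frames))
foreground = X - background.real          # sparse moving component
```

Only one SVD and one least-squares solve are needed, which is the source of the speed advantage over RPCA claimed above.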

  16. Factor Analytic Approach to Transitive Text Mining using Medline Descriptors

    NASA Astrophysics Data System (ADS)

    Stegmann, J.; Grohmann, G.

    Matrix decomposition methods were applied to examples of noninteractive literature sets sharing implicit relations. Document-by-term matrices were created from downloaded PubMed literature sets, the terms being the Medical Subject Headings (MeSH descriptors) assigned to the documents. The loadings of the factors derived from singular value or eigenvalue matrix decomposition were sorted according to absolute values and subsequently inspected for positions of terms relevant to the discovery of hidden connections. It was found that only a small number of factors had to be screened to find key terms in close neighbourhood, being separated by a small number of terms only.

  17. Forward Looking Radar Imaging by Truncated Singular Value Decomposition and Its Application for Adverse Weather Aircraft Landing.

    PubMed

    Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu

    2015-06-18

    The forward looking radar imaging task is a practical and challenging problem for adverse weather aircraft landing industry. Deconvolution method can realize the forward looking imaging but it often leads to the noise amplification in the radar image. In this paper, a forward looking radar imaging based on deconvolution method is presented for adverse weather aircraft landing. We first present the theoretical background of forward looking radar imaging task and its application for aircraft landing. Then, we convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using truncated singular decomposition method. The key issue regarding the selecting of the truncated parameter is addressed using generalized cross validation approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement with suppressing the noise amplification in forward looking radar imaging.

  18. An eigensystem realization algorithm using data correlations (ERA/DC) for modal parameter identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Cooper, J. E.; Wright, J. R.

    1987-01-01

    A modification to the Eigensystem Realization Algorithm (ERA) for modal parameter identification is presented in this paper. The ERA minimum order realization approach using singular value decomposition is combined with the philosophy of the Correlation Fit method in state space form, such that response data correlations rather than actual response values are used for modal parameter identification. This new method, the ERA using data correlations (ERA/DC), significantly reduces bias errors due to noise corruption without the need for model overspecification. This method is tested using simulated five-degree-of-freedom system responses corrupted by measurement noise. It is found for this case that, when model overspecification is permitted and a minimum order solution is obtained via singular value truncation, the results from the two methods are of similar quality.

  19. Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions

    DTIC Science & Technology

    2012-03-22


  20. The Rigid Orthogonal Procrustes Rotation Problem

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.

    2006-01-01

    The problem of rotating a matrix orthogonally to a best least squares fit with another matrix of the same order has a closed-form solution based on a singular value decomposition. The optimal rotation matrix is not necessarily rigid, but may also involve a reflection. In some applications, only rigid rotations are permitted. Gower (1976) has…
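
The closed-form solution mentioned above, and the restriction to rigid (reflection-free) rotations, fit in a few lines. A sketch on synthetic configurations; the sizes and the noise-free setup are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical configurations: B is A under a known rigid rotation.
A = rng.normal(size=(20, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:            # force a proper rotation (det = +1)
    R_true[:, 0] *= -1
B = A @ R_true

# Orthogonal Procrustes: minimize ||A R - B||_F over orthogonal R.
U, s, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt                               # may include a reflection (det R = -1)

# Rigid variant: flip the smallest singular direction when a reflection appears.
D = np.eye(3)
D[-1, -1] = np.sign(np.linalg.det(U @ Vt))
R_rigid = U @ D @ Vt                     # det(R_rigid) = +1 always
```

The sign correction on the last singular direction is the standard way to obtain the best rotation proper, at the smallest possible cost in fit.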

  1. Constraint elimination in dynamical systems

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Likins, P. W.

    1989-01-01

    Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
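
The SVD-based elimination amounts to parameterizing admissible motion by a null-space basis of the constraint matrix, which immediately yields coordinates of minimum dimension. A linear sketch; the constraint matrix here is a random placeholder, not a multibody model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical linear constraints C q = 0 on 6 generalized coordinates.
C = rng.normal(size=(2, 6))

# The right singular vectors for (numerically) zero singular values span the
# null space of C, giving a minimal set of independent coordinates.
U, s, Vt = np.linalg.svd(C)
rank = int(np.sum(s > 1e-12 * s[0]))
N = Vt[rank:].T                          # null-space basis: C @ N = 0

# Any admissible configuration is q = N @ z, with z of minimum dimension.
z = rng.normal(size=N.shape[1])
q = N @ z
```

Substituting q = N z into the equations of motion and premultiplying by N^T produces the reduced, constraint-free system the abstract refers to.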

  2. Data Mining in Earth System Science (DMESS 2011)

    Treesearch

    Forrest M. Hoffman; J. Walter Larson; Richard Tran Mills; Björn-Gustaf Brooks; Auroop R. Ganguly; William Hargrove; et al

    2011-01-01

    From field-scale measurements to global climate simulations and remote sensing, the growing body of very large and long time series Earth science data are increasingly difficult to analyze, visualize, and interpret. Data mining, information theoretic, and machine learning techniques—such as cluster analysis, singular value decomposition, block entropy, Fourier and...

  3. A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.

    PubMed

    Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei

    2015-11-01

    A random submatrix method (RSM) is proposed to calculate the low-rank decomposition U_{m×r} V_{n×r}^T (r < m, n) of a matrix Y ∈ R^{m×n} (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr^2 ρ^r) or O(n^3 ρ^{3r}) floating-point operations (flops) are required, comparing favorably with the O(mnr + r^2(m+n)) flops required by state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only max(n^2, mr+nr) real values need to be saved. Under the assumption that the known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k × nρ^k or mρ^l × l, where k or l usually takes the value r+1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value turns into the space associated with any of the r largest singular values is smaller. Based on the theorem, the nρ^k − k null vectors or the l − r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. Experimental results on random synthetic matrices with sizes such as 131072 × 1024 and on real data sets such as dinosaur indicate that RSM is 4.30 to 197.95 times faster than the state-of-the-art algorithms, while achieving or approximating the best precision.

  4. Identification and modification of dominant noise sources in diesel engines

    NASA Astrophysics Data System (ADS)

    Hayward, Michael D.

    Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near- and far-fields with data from fewer tests than is currently required. Pre-conditioning and the use of numerically robust methods to solve a set of cross-spectral density equations result in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources that, when scaled and added, reproduce the input cross-spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of the virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of the input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.

  5. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
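
One linear step of such an analysis, deciding how many exponentials a dwell-time distribution contains, can be read off the singular values of a Hankel matrix built on the sampled decay, since a noiseless sum of p exponentials gives a Hankel matrix of rank exactly p. A sketch with invented time constants, amplitudes, and noise level (not channel data from the paper):

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical sampled decay: a sum of two exponentials plus weak noise.
t = np.arange(200) * 0.1
signal = 1.0 * np.exp(-t / 2.0) + 0.8 * np.exp(-t / 0.4)
noisy = signal + 0.001 * rng.normal(size=t.size)

# Hankel matrix of the samples; its numerical rank is the exponential count.
L = 100
H = np.array([noisy[i:i + L] for i in range(t.size - L + 1)])
s = np.linalg.svd(H, compute_uv=False)

# Count singular values clearly above the noise floor.
n_exp = int(np.sum(s > 0.02 * s[0]))
```

With the component count fixed this way, the decay rates and amplitudes can then be fitted, which is the role linear prediction/SVD plays alongside maximum-likelihood analysis in the paper.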

  6. Characterization of an elastic target in a shallow water waveguide by decomposition of the time-reversal operator.

    PubMed

    Philippe, Franck D; Prada, Claire; de Rosny, Julien; Clorennec, Dominique; Minonzio, Jean-Gabriel; Fink, Mathias

    2008-08-01

    This paper reports the results of an investigation into the extraction of the backscattered frequency signature of a target in a waveguide. Retrieving the target signature is difficult because it is blurred by waveguide reflections and modal interference. It is shown that the decomposition of the time-reversal operator method provides a solution to this problem. Using modal theory, this paper shows that the first singular value associated with a target is proportional to the backscattering form function. It is linked to the waveguide geometry through a factor that depends only weakly on frequency as long as the target is far from the boundaries. Using the same approach, the second singular value is shown to be proportional to the second derivative of the angular form function, which is a relevant parameter for target identification. Within this framework the coupling between two targets is considered. Small scale experimental studies are performed in the 3.5 MHz frequency range for 3 mm spheres in a 28 mm deep and 570 mm long waveguide and confirm the theoretical results.

  7. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

    Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, the strength of the SVD-based step is that it takes advantage of the target's global information to obtain a background estimate of the infrared image. The dim target is enhanced by subtracting the estimated background, which is updated frame by frame, from the original image. Secondly, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF is adopted to preserve the edges and eliminate the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated from a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
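
The background-estimation idea, treating the smooth scene as low-rank in the image's SVD and the dim target as what remains after subtraction, can be sketched directly. The scene model, target amplitude, and rank k below are invented (k is chosen to match the rank of this synthetic scene):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical infrared frame: a smooth (low-rank) background plus one dim target.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
background_true = np.sin(x / 10.0) + 0.5 * np.cos(y / 7.0)   # rank-2 scene
img = background_true + 0.05 * rng.normal(size=(h, w))
img[30, 40] += 1.0                                           # dim small target

# Background estimation: keep only the dominant singular components.
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 2
bg_est = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

enhanced = img - bg_est          # the dim target now dominates the residual
```

Because the single-pixel target barely perturbs the leading singular components, subtracting the low-rank estimate leaves the target standing well above the residual noise.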

  8. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement, commonly called "noise". Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to such perturbations. The illustrated results show that TGSVD offers higher precision, better adaptability and better noise immunity than TDM. In addition, choosing a proper regularization matrix L and truncation parameter k is very useful for improving identification accuracy and handling the ill-posedness when identifying moving forces on a bridge.
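
The regularizing effect of truncation can be sketched with the ordinary (non-generalized) truncated SVD, i.e. the special case where the regularization matrix L is the identity; the synthetic ill-conditioned system and the choice k = 6 below are assumptions for illustration only.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD solution of Ax = b: keep the k largest singular
    values and discard the small, noise-amplifying ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(1)
# Ill-conditioned system: singular values decay geometrically.
n = 20
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** -np.arange(n)                 # 1, 1e-1, ..., 1e-19
A = U @ np.diag(s) @ V.T
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.normal(size=n)   # measurement 'noise' e

x_naive = np.linalg.solve(A, b)           # noise blows up on small s
x_tsvd = tsvd_solve(A, b, k=6)
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tsvd - x_true))
```

The naive inverse amplifies the noise by the reciprocal of the smallest singular values, while the truncated solution trades a bounded bias for stability, which is the essence of the regularization discussed above.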

  9. Application of generalized singular value decomposition to ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Bhuyan, K.; Singh, S.; Bhuyan, P.

    2004-10-01

The electron density distribution of the low- and mid-latitude ionosphere has been investigated by the computerized tomography technique using a Generalized Singular Value Decomposition (GSVD) based algorithm. Model ionospheric total electron content (TEC) data obtained from the International Reference Ionosphere 2001 and slant relative TEC data measured at a chain of three stations receiving transit satellite transmissions in Alaska, USA are used in this analysis. The issue of optimum efficiency of the GSVD algorithm in the reconstruction of ionospheric structures is addressed through simulation of the equatorial ionization anomaly (EIA), in addition to its application to the investigation of complicated ionospheric density irregularities. Results show that the Generalized Cross Validation approach to finding the regularization parameter and the corresponding solution gives a very good reconstructed image of the low-latitude ionosphere and the EIA within it. Provided that some minimum norm is fulfilled, the GSVD solution is found to be least affected by considerations such as pixel size and number of ray paths. The method has also been used to investigate the behaviour of the mid-latitude ionosphere under magnetically quiet and disturbed conditions.

  10. Shortened Mean Transit Time in CT Perfusion With Singular Value Decomposition Analysis in Acute Cerebral Infarction: Quantitative Evaluation and Comparison With Various CT Perfusion Parameters.

    PubMed

    Murayama, Kazuhiro; Katada, Kazuhiro; Hayakawa, Motoharu; Toyama, Hiroshi

We aimed to clarify the cause of shortened mean transit time (MTT) in acute ischemic cerebrovascular disease and examined its relationship with reperfusion. Twenty-three patients with acute ischemic cerebrovascular disease underwent whole-brain computed tomography perfusion (CTP). The maximum MTT (MTTmax), minimum MTT (MTTmin), ratio of minimum to maximum MTT (MTTmin/max), and minimum cerebral blood volume (CBV) (CBVmin) were measured by automatic region of interest analysis. Diffusion-weighted imaging was performed to calculate infarction volume. We compared these CTP parameters between reperfusion and nonreperfusion groups and calculated correlation coefficients between the infarction core volume and CTP parameters. Significant differences were observed between reperfusion and nonreperfusion groups (MTTmin/max: P = 0.014; CBVmin ratio: P = 0.038). Regression analysis of CTP and high-intensity volume on diffusion-weighted images showed negative correlations (CBVmin ratio: r = -0.41; MTTmin/max: r = -0.30; MTTmin ratio: r = -0.27). A region of shortened MTT indicated obstructed blood flow, which was attributed to error in the singular value decomposition method.

  11. North Pacific warming and intense northwestern U.S. wildfires

    Treesearch

    Yongqiang Liu

    2006-01-01

    The tropical Pacific sea surface temperature (SST) anomalies such as La Nina have been an important predictor for wildfires in the southeastern and southwestern U.S. This study seeks seasonal predictors for wildfires in the northwestern U.S., a region with the most intense wildfires among various continental U.S. regions. Singular value decomposition and regression...

  12. Implicit-shifted Symmetric QR Singular Value Decomposition of 3x3 Matrices

    DTIC Science & Technology

    2016-04-01


  13. Spatial patterns of soil moisture connected to monthly-seasonal precipitation variability in a monsoon region

    Treesearch

    Yongqiang Liu

    2003-01-01

The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a technique of principal component analysis similar to empirical orthogonal functions (EOF). However, it is applied to two variables simultaneously and is...

  14. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).

  15. Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.

    PubMed

    Gong, Ping; Kolios, Michael C; Xu, Yuan

    2016-09-01

Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels, so the pseudoinverse (PI) is usually used instead to obtain a more stable inversion. In this paper, we apply singular value decomposition to the coding matrix to compute the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: the complete PI (CPI), where all the singular values are kept, and the truncated PI (TPI), where the last and smallest singular value is discarded. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in overall enveloped beamformed image quality between the CPI and TPI is negligible, demonstrating that DE-STA is a relatively stable encoding and decoding technique. In addition, exploiting the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula based on the conjugate transpose of the coding matrix, and we compare the computational complexity of the direct inverse and the new formula.
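
The complete and truncated pseudoinverse can both be written directly from the SVD. The sketch below uses a generic random complex matrix as a stand-in for a coding matrix; it is not the paper's delay-encoding matrix, and the CPI/TPI naming simply mirrors the abstract.

```python
import numpy as np

def svd_pinv(A, drop_smallest=False):
    """Pseudoinverse via SVD. drop_smallest=False keeps every singular
    value (the 'CPI' case); True truncates the smallest one ('TPI')."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = len(s) - 1 if drop_smallest else len(s)
    # pinv(A) = V Sigma^{-1} U^H, restricted to the first k singular triplets
    return Vt[:k].conj().T @ ((U[:, :k] / s[:k]).conj().T)

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 6)) + 1j * rng.normal(size=(8, 6))  # stand-in coding matrix
x = rng.normal(size=6)                       # 'equivalent STA data'
y = A @ x                                    # encoded measurements
x_cpi = svd_pinv(A) @ y                      # complete PI decoding
x_tpi = svd_pinv(A, drop_smallest=True) @ y  # truncated PI decoding
print(np.linalg.norm(x_cpi - x))
```

For a full-rank, well-conditioned coding matrix the CPI decoding recovers the data exactly; the TPI variant trades a small bias for stability when the smallest singular value is poorly conditioned.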

  16. Singular value decomposition utilizing parallel algorithms on graphical processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, Cx = (1/K) Σ_k X(k)X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^HA, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed).
The first algorithm is based on a two-step approach which bidiagonalizes the matrix using Householder transformations, and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to that implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel & Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to UΣ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square roots of the diagonal elements of A^H A and, once Σ is known, U can be found by column scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against themselves and other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU in terms of the matrix size and number of concurrent SVDs to be calculated.
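
The one-sided Jacobi scheme described above can be sketched, for the real case, as a serial reference version: plane rotations orthogonalize column pairs of W = AV, after which the singular values are the column norms and U follows by column scaling. The GPU implementations parallelize the column-pair sweeps; the small test matrix here is an illustrative assumption.

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD (real case): rotate column pairs of W = A V
    until all columns are mutually orthogonal; then sigma_i = ||w_i||
    and u_i = w_i / sigma_i."""
    W = np.array(A, dtype=float)
    n = W.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = W[:, p] @ W[:, p]
                beta = W[:, q] @ W[:, q]
                gamma = W[:, p] @ W[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue          # this pair is already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                # smaller-magnitude root of t^2 + 2*zeta*t - 1 = 0
                t = np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta)) if zeta != 0 else 1.0
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                W[:, [p, q]] = W[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
        if converged:
            break
    sigma = np.linalg.norm(W, axis=0)
    return W / sigma, sigma, V

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
U, sigma, V = jacobi_svd(A)
```

Note that the singular values come out unsorted; a production routine would sort the triplets and handle rank deficiency, which this sketch omits.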

  17. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
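
The frequency-domain step common to CMIF and FSDD, a singular value decomposition of the output spectral matrix at each frequency line, can be sketched on synthetic two-channel data. The single 40 Hz "mode", its shape (1.0, 0.6), and the block-averaged spectral estimate are assumptions for illustration, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n, blocks = 256, 256, 32
freqs = np.fft.rfftfreq(n, 1 / fs)
# Two sensor channels dominated by one 'mode' at 40 Hz with a fixed
# mode shape (1.0, 0.6), plus independent measurement noise.
shape = np.array([1.0, 0.6])
Gyy = np.zeros((len(freqs), 2, 2), dtype=complex)
t = np.arange(n) / fs
for _ in range(blocks):
    phase = rng.uniform(0, 2 * np.pi)
    modal = np.sin(2 * np.pi * 40 * t + phase)
    y = shape[:, None] * modal + 0.3 * rng.normal(size=(2, n))
    Y = np.fft.rfft(y, axis=1)
    # accumulate the block-averaged cross-power spectral matrix
    Gyy += np.einsum('if,jf->fij', Y, Y.conj()) / blocks
# First singular value at each frequency line peaks at the mode.
s1 = np.array([np.linalg.svd(G, compute_uv=False)[0] for G in Gyy])
print(freqs[np.argmax(s1)])
```

At the peak frequency the leading singular vector of the spectral matrix approximates the mode shape, which is what the enhanced-PSD curve fitting in FSDD builds upon.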

  18. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.

  19. A pipeline VLSI design of fast singular value decomposition processor for real-time EEG system based on on-line recursive independent component analysis.

    PubMed

    Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi

    2013-01-01

This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse and inverse square root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the raw EEG data is 128 Hz. The core size of the SVD processor is 580×580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution in the 8-channel EEG system.

  20. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region on the captured video are reshaped into vectors and reconstructed to form a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; available vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is initiated to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
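
The subimage-to-OIB pipeline above can be sketched on synthetic data: frames of a patch undergoing sub-pixel sinusoidal motion are reshaped into vectors, the SVD of the (mean-removed) frame matrix yields orthonormal image bases, and projecting frames onto the leading basis recovers the vibration signal. The patch model, the 17-cycle motion, and the choice of the first basis are illustrative assumptions, not the paper's exact selection rule.

```python
import numpy as np

rng = np.random.default_rng(4)
n_frames, h, w = 400, 16, 16
t = np.arange(n_frames) / n_frames
motion = 0.3 * np.sin(2 * np.pi * 17 * t)        # sub-pixel displacement
x = np.arange(w)
frames = np.empty((n_frames, h * w))
for k in range(n_frames):
    # Gaussian intensity profile shifted by the instantaneous motion
    profile = np.exp(-0.5 * ((x - 8.0 - motion[k]) / 3.0) ** 2)
    frames[k] = np.tile(profile, (h, 1)).ravel() + 0.01 * rng.normal(size=h * w)
# Orthonormal image bases from the SVD of the mean-removed frame matrix;
# the leading basis captures the motion-induced intensity pattern.
U, s, Vt = np.linalg.svd(frames - frames.mean(axis=0), full_matrices=False)
recovered = U[:, 0] * s[0]            # per-frame projection coefficient
spectrum = np.abs(np.fft.rfft(recovered))
dominant_bin = np.argmax(spectrum[1:]) + 1
print(dominant_bin)
```

The dominant spectral bin of the projection coefficient matches the imposed vibration frequency, which is the basic mechanism behind sound recovery from silent video.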

  1. Active listening room compensation for massive multichannel sound reproduction systems using wave-domain adaptive filtering.

    PubMed

    Spors, Sascha; Buchner, Herbert; Rabenstein, Rudolf; Herbordt, Wolfgang

    2007-07-01

    The acoustic theory for multichannel sound reproduction systems usually assumes free-field conditions for the listening environment. However, their performance in real-world listening environments may be impaired by reflections at the walls. This impairment can be reduced by suitable compensation measures. For systems with many channels, active compensation is an option, since the compensating waves can be created by the reproduction loudspeakers. Due to the time-varying nature of room acoustics, the compensation signals have to be determined by an adaptive system. The problems associated with the successful operation of multichannel adaptive systems are addressed in this contribution. First, a method for decoupling the adaptation problem is introduced. It is based on a generalized singular value decomposition and is called eigenspace adaptive filtering. Unfortunately, it cannot be implemented in its pure form, since the continuous adaptation of the generalized singular value decomposition matrices to the variable room acoustics is numerically very demanding. However, a combination of this mathematical technique with the physical description of wave propagation yields a realizable multichannel adaptation method with good decoupling properties. It is called wave domain adaptive filtering and is discussed here in the context of wave field synthesis.

  2. Retrieval of Enterobacteriaceae drug targets using singular value decomposition.

    PubMed

    Silvério-Machado, Rita; Couto, Bráulio R G M; Dos Santos, Marcos A

    2015-04-15

The identification of potential drug target proteins in bacteria is important in pharmaceutical research for the development of new antibiotics to combat bacterial agents that cause diseases. A new model that combines the singular value decomposition (SVD) technique with biological filters composed of a set of protein properties associated with bacterial drug targets and similarity to protein-coding essential genes of Escherichia coli (strain K12) has been created to predict potential antibiotic drug targets in the Enterobacteriaceae family. This model identified 99 potential drug target proteins in the studied family, which exhibit eight different functions and are protein-coding essential genes or similar to protein-coding essential genes of E.coli (strain K12), indicating that the disruption of the activities of these proteins is critical for cells. Proteins from bacteria with described drug resistance were found among the retrieved candidates. These candidates have no similarity to the human proteome, therefore exhibiting the advantage of causing no adverse effects or at least no known adverse effects on humans. Supplementary data are available at Bioinformatics online.

  3. Application of higher order SVD to vibration-based system identification and damage detection

    NASA Astrophysics Data System (ADS)

    Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang

    2012-04-01

Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace identification and stochastic subspace identification methods (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study three different algorithms for signal processing and system identification are proposed: SSA, SSI-COV and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithms are used to process the shaking table test data of a 6-story steel frame. Features contained in the vibration data are extracted by the proposed methods. Damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.

  4. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.

  5. Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials

    DTIC Science & Technology

    2017-04-06

The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.

  6. Normal forms of Hopf-zero singularity

    NASA Astrophysics Data System (ADS)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to the conclusion that the local dynamics of formal Hopf-zero singularities is well understood through the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results are implemented using Maple. The method has been applied to the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.

  7. A technique for plasma velocity-space cross-correlation

    NASA Astrophysics Data System (ADS)

    Mattingly, Sean; Skiff, Fred

    2018-05-01

    An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.

  8. Inversion of residual stress profiles from ultrasonic Rayleigh wave dispersion data

    NASA Astrophysics Data System (ADS)

    Mora, P.; Spies, M.

    2018-05-01

We investigate theoretically and with synthetic data the performance of several inversion methods to infer a residual stress state from ultrasonic surface wave dispersion data. We show that this particular problem may reveal, in relevant materials, undesired behaviors in methods that can otherwise be reliably applied to infer other properties. We focus on two methods, one based on a Taylor expansion, and another based on a piecewise linear expansion regularized by a singular value decomposition. We explain the instabilities of the Taylor-based method by highlighting singularities in the series of coefficients. At the same time, we show that the other method can successfully provide performance that depends only weakly on the material.

  9. Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction

    NASA Astrophysics Data System (ADS)

    Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.

    2002-11-01

    The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) in the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, it is found that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.

  10. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds on the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
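
Method M1, POD from solution snapshots alone, reduces to an SVD of the snapshot matrix: the leading left singular vectors form the reduced basis, and the neglected singular values bound the projection error. The two-mode toy field below is an illustrative assumption, not the paper's test problem.

```python
import numpy as np

# Snapshot-based POD (method M1): collect solution snapshots as columns,
# take their SVD, and keep the leading left singular vectors as the basis.
x = np.linspace(0, 1, 100)
times = np.linspace(0, 1, 25)
# Toy field built from two decaying spatial modes (exactly rank 2).
snapshots = np.array([np.exp(-4 * t) * np.sin(np.pi * x)
                      + 0.3 * np.exp(-16 * t) * np.sin(3 * np.pi * x)
                      for t in times]).T          # columns are snapshots
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                                  # 2-mode POD basis
error = np.linalg.norm(snapshots - basis @ (basis.T @ snapshots))
print(error)
```

Because the field spans exactly two spatial modes, the first neglected singular value is numerically zero and the two-mode projection reproduces the snapshots to machine precision, mirroring the role the neglected singular value plays in the bounds discussed above.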

  11. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds on the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  12. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection is utilizing the noise such as enhancing, adding or estimating the noise so as to improve the signal-to-noise ratio (SNR) and extract the fault signatures. Hereinto, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise utilization method to ameliorate the mode mixing and denoised the intrinsic mode functions (IMFs). Despite the possibility of superior performance in detecting weak and multiple faults, the method still suffers from the major problems of the user-defined parameter and the powerless capability for a high SNR case. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome the drawbacks, improved by two noise estimation techniques for different SNRs as well as the noise estimation strategy. Independent from the artificial setup, the noise estimation by the minimax thresholding is improved for a low SNR case, which especially shows an outstanding interpretation for signature enhancement. For approximating the weak noise precisely, the noise estimation by the local reconfiguration using singular value decomposition (SVD) is proposed for a high SNR case, which is particularly powerful for reducing the mode mixing. Thereinto, the sliding window for projecting the phase space is optimally designed by the correlation minimization. Meanwhile, the reasonable singular order for the local reconfiguration to estimate the noise is determined by the inflection point of the increment trend of normalized singular entropy. Furthermore, the noise estimation strategy, i.e. the selection approaches of the two estimation techniques along with the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by the repeatable simulations to demonstrate the synthetical performance and especially confirm the capability of noise estimation. 
Finally, the method is applied to detect the local wear fault from a dual-axis stabilized platform and the gear crack from an operating electric locomotive to verify its effectiveness and feasibility.
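
    The SVD-based local reconfiguration step can be illustrated with a generic sketch (not the paper's implementation): embed a signal in a trajectory matrix, zero the leading singular components assumed to carry the deterministic signal, and diagonally average the remainder back into a noise estimate. All signals and parameters below are synthetic.

```python
import numpy as np

def hankel_embed(x, window):
    """Map a 1-D signal into its trajectory (Hankel) matrix."""
    n = len(x) - window + 1
    return np.column_stack([x[i:i + window] for i in range(n)])

def diagonal_average(H):
    """Invert the Hankel embedding by averaging along anti-diagonals."""
    window, n = H.shape
    out = np.zeros(window + n - 1)
    counts = np.zeros(window + n - 1)
    for i in range(window):
        for j in range(n):
            out[i + j] += H[i, j]
            counts[i + j] += 1
    return out / counts

def svd_noise_estimate(x, window, order):
    """Estimate the noise by zeroing the first `order` singular
    components (assumed to carry the deterministic signal)."""
    U, s, Vt = np.linalg.svd(hankel_embed(x, window), full_matrices=False)
    s_noise = s.copy()
    s_noise[:order] = 0.0
    return diagonal_average(U @ np.diag(s_noise) @ Vt)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
clean = np.sin(2 * np.pi * 5 * t)
noise = 0.3 * rng.standard_normal(t.size)
x = clean + noise
est_noise = svd_noise_estimate(x, window=40, order=2)  # a sine is rank 2
denoised = x - est_noise
```

    Subtracting the estimated noise leaves a cleaner signal; the same keep-or-zero split of the singular spectrum underlies the singular-order selection the authors automate via normalized singular entropy.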

  13. Spectral and entropic characterizations of Wigner functions: applications to model vibrational systems.

    PubMed

    Luzanov, A V

    2008-09-07

    The Wigner function for pure quantum states is used as the integral kernel of a non-Hermitian operator K, to which the standard singular value decomposition (SVD) is applied. This provides a set of squared singular values treated as probabilities of the individual phase-space processes, the latter being described by eigenfunctions of KK(+) (for coordinate variables) and K(+)K (for momentum variables). Such a SVD representation is employed to obviate the well-known difficulties in defining phase-space entropy measures in terms of the Wigner function, which can take negative values. In particular, new measures of nonclassicality are constructed in a form that automatically satisfies additivity for systems composed of noninteracting parts. Furthermore, emphasis is placed on the geometrical interpretation of the full entropy measure as the effective phase-space volume in the Wigner picture of quantum mechanics. The approach is exemplified by considering some generic vibrational systems. Specifically, for eigenstates of the harmonic oscillator and a superposition of coherent states, the singular value spectrum is evaluated analytically. Numerical computations are given for nonlinear problems (the Morse and double-well oscillators, and the Henon-Heiles system). We also discuss the difficulties in implementing a similar technique for electronic problems.

  14. Network Monitoring Traffic Compression Using Singular Value Decomposition

    DTIC Science & Technology

    2014-03-27


  15. Oxygen Measurements in Liposome Encapsulated Hemoglobin

    NASA Astrophysics Data System (ADS)

    Phiri, Joshua Benjamin

    Liposome encapsulated hemoglobins (LEHs) are of current interest as blood substitutes. An analytical methodology for rapid non-invasive measurements of oxygen in artificial oxygen carriers is examined. High resolution optical absorption spectra are calculated by means of a one dimensional diffusion approximation. The encapsulated hemoglobin is prepared from fresh defibrinated bovine blood. Liposomes are prepared from hydrogenated soy phosphatidylcholine (HSPC), cholesterol and dicetylphosphate using a bath sonication method. An integrating sphere spectrophotometer is employed for diffuse optics measurements. Data is collected using an automated data acquisition system employing lock-in amplifiers. The concentrations of hemoglobin derivatives are evaluated from the corresponding extinction coefficients using a numerical technique of singular value decomposition, and verification of the results is done using Monte Carlo simulations. In situ measurements are required for the determination of hemoglobin derivatives because most encapsulation methods invariably lead to the formation of methemoglobin, a nonfunctional form of hemoglobin. The methods employed in this work lead to high resolution absorption spectra of oxyhemoglobin and other derivatives in red blood cells and liposome encapsulated hemoglobin (LEH). The analysis using the singular value decomposition method offers a quantitative means of calculating the fractions of oxyhemoglobin and other hemoglobin derivatives in LEH samples. The analytical methods developed in this work will become even more useful when production of LEH as a blood substitute is scaled up to large volumes.
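
    The SVD step described here, recovering hemoglobin-derivative concentrations from extinction coefficients, amounts to a linear least-squares fit of the Beer-Lambert model; a minimal sketch with made-up coefficients (not measured extinction data):

```python
import numpy as np

# Hypothetical extinction-coefficient matrix E: rows are wavelengths,
# columns are species (e.g. oxy-, deoxy- and methemoglobin). The values
# are invented for illustration, not measured coefficients.
rng = np.random.default_rng(1)
E = rng.uniform(0.1, 2.0, size=(50, 3))
c_true = np.array([0.7, 0.2, 0.1])          # made-up species fractions
absorbance = E @ c_true + 1e-3 * rng.standard_normal(50)

# Least-squares solution of the Beer-Lambert model via the SVD:
# c = V diag(1/s) U^T b.
U, s, Vt = np.linalg.svd(E, full_matrices=False)
c_est = Vt.T @ ((U.T @ absorbance) / s)
```

    The explicit SVD form makes it easy to inspect the singular values and truncate small ones if the extinction matrix is ill-conditioned.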

  16. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, none achieves both high effectiveness and high efficiency. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model. The algorithm combines two methods for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm; genetic algorithms and simulated annealing were likewise used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms the previous algorithms. PMID:23226565
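
    The role of the SVD in fixing the solution space can be sketched for a generic underdetermined linear model (a toy stand-in, not the paper's differential-equation model): the SVD yields a minimum-norm particular solution and a null-space basis that parameterizes every candidate solution the search then explores.

```python
import numpy as np

# Toy underdetermined linear model y = M w (more unknowns than
# measurements); M, y and the variable names are illustrative only.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 6))
y = rng.standard_normal(4)

U, s, Vt = np.linalg.svd(M, full_matrices=True)
r = int(np.sum(s > 1e-10))                    # numerical rank (here 4)
w_min = Vt[:r].T @ ((U.T @ y)[:r] / s[:r])    # minimum-norm solution
N = Vt[r:].T                                  # null-space basis, 6 x 2

# Every member of the candidate family w_min + N z solves y = M w.
w_other = w_min + N @ rng.standard_normal(N.shape[1])
```

    An optimizer such as the gravitation field algorithm can then search over the low-dimensional coefficient vector z instead of the full weight vector.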

  17. A singular-value method for reconstruction of nonradial and lossy objects.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
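
    The reduced-rank representation at the heart of these algorithms is a truncated SVD; a minimal sketch on a random stand-in matrix (not a scattering operator), including the Eckart-Young check that the rank-k truncation is optimal in the spectral norm:

```python
import numpy as np

def reduced_rank(A, k):
    """Rank-k representation built from the leading singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k], s

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 60))   # stand-in for a discretized operator
A10, s = reduced_rank(A, 10)

# Eckart-Young: the spectral-norm error of the best rank-k approximation
# equals the (k+1)-th singular value.
err = np.linalg.norm(A - A10, 2)
```

    The same truncation acts as a regularizer for ill-posed inverse scattering, since the discarded singular values are the ones most amplified on inversion.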

  18. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.

    1999-01-01

    The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in the high-frequency range (>5 Hz), followed by layer thickness. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity and proved very effective in the high-frequency range; convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
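
    A damped (Levenberg-Marquardt style) update is conveniently computed through the SVD of the Jacobian, which is the kind of combination used here; a sketch on a synthetic, mildly ill-conditioned Jacobian (not a dispersion-curve one):

```python
import numpy as np

def lm_step(J, r, damping):
    """One damped least-squares update computed through the SVD of the
    Jacobian: delta = V diag(s / (s^2 + damping)) U^T r."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + damping)) * (U.T @ r))

# Synthetic Jacobian with one nearly degenerate parameter direction.
rng = np.random.default_rng(4)
J = rng.standard_normal((30, 5)) @ np.diag([1.0, 1.0, 1.0, 0.1, 1e-4])
m_true = np.ones(5)
residual = J @ m_true
step_undamped = lm_step(J, residual, damping=0.0)   # plain least squares
step_damped = lm_step(J, residual, damping=1e-6)    # damping shrinks the step
```

    The damping factor trades step size for stability: each singular direction is scaled by s/(s^2 + damping), which suppresses the poorly constrained directions with small s.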

  19. The incorrect usage of singular spectral analysis and discrete wavelet transform in hybrid models to predict hydrological time series

    NASA Astrophysics Data System (ADS)

    Du, Kongchang; Zhao, Ying; Lei, Jiaqiang

    2017-09-01

    In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then use the resulting set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found this usage of SSA and DWT in building hybrid models to be incorrect. Since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report spuriously 'high' prediction performance and may cause large errors in practice.
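
    The leakage the authors describe is easy to reproduce: an SSA reconstruction over the full record changes its value at a given time once 'future' samples are appended. A minimal SSA sketch with synthetic data:

```python
import numpy as np

def ssa_reconstruct(x, window=10, rank=2):
    """Plain SSA: embed, truncate the SVD, diagonally average back."""
    n = len(x) - window + 1
    H = np.column_stack([x[i:i + window] for i in range(n)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(window):
        for j in range(n):
            out[i + j] += Hk[i, j]
            cnt[i + j] += 1
    return out / cnt

rng = np.random.default_rng(5)
x = np.sin(np.linspace(0, 8 * np.pi, 300)) + 0.2 * rng.standard_normal(300)
full = ssa_reconstruct(x)          # sees 'future' samples beyond t = 199
causal = ssa_reconstruct(x[:200])  # sees only data up to t = 199
leak = abs(full[199] - causal[199])  # nonzero: later samples changed the past
```

    A leakage-free hybrid model must therefore recompute the decomposition from only the data available at each forecast origin.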

  20. Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena, such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying singular value decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPCs) are selected for empirical mode decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPCs. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
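
    The Hilbert-transform-plus-SVD front end can be sketched in a few lines (synthetic channels, with an FFT-based analytic signal standing in for the Hilbert transform): a single travelling wave becomes rank one in the complex representation, so its first complex principal component dominates.

```python
import numpy as np

def analytic_signal(x):
    """Analytic (complex) signal via the FFT, standing in for a Hilbert
    transform (assumes an even number of samples)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

# Eight hypothetical spatial channels carrying one travelling wave.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
channels = np.array([np.cos(2 * np.pi * 4 * t + 0.3 * k) for k in range(8)])
complex_data = np.array([analytic_signal(c) for c in channels])

U, s, Vt = np.linalg.svd(complex_data, full_matrices=False)
# One travelling wave is rank one in the complex representation, so the
# first complex principal component carries nearly all the energy.
energy_ratio = s[0] ** 2 / np.sum(s ** 2)
```

    This rank collapse is why working with the complex (analytic) data before the SVD is attractive for propagating patterns: a phase-shifted wave that needs two real modes needs only one complex mode.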

  1. Spreading Sequence System for Full Connectivity Relay Network

    NASA Technical Reports Server (NTRS)

    Kwon, Hyuck M. (Inventor); Pham, Khanh D. (Inventor); Yang, Jie (Inventor)

    2018-01-01

    Fully connected uplink and downlink relay network systems using pseudo-noise (PN) spreading and despreading sequences designed to maximize the signal-to-interference-plus-noise ratio. The relay network systems comprise one or more transmitting units, relays, and receiving units connected via a communication network. The transmitting units, relays, and receiving units each may include a computer for performing the methods and steps described herein and transceivers for transmitting and/or receiving signals. The computer encodes and/or decodes communication signals via optimum adaptive PN sequences found by employing Cholesky decompositions and singular value decompositions (SVD). The PN sequences employ channel state information (CSI) to compute the optimal sequences more effectively and more securely.

  2. Numerical linear algebra in data mining

    NASA Astrophysics Data System (ADS)

    Eldén, Lars

    Ideas and algorithms from numerical linear algebra are important in several areas of data mining. We give an overview of linear algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.
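
    The rank-reduction idea in text mining (latent semantic indexing) can be sketched on a toy term-document matrix; the vocabulary and counts below are invented for illustration.

```python
import numpy as np

# Invented toy term-document matrix: d0, d1 about linear algebra,
# d2, d3 about web search (counts are made up for illustration).
terms = ["svd", "matrix", "rank", "web", "link", "page"]
A = np.array([[2, 1, 0, 0],
              [1, 2, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 2],
              [0, 0, 0, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T        # documents in the latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A query about "svd matrix" lands near d0 and far from d3.
q = np.array([1, 1, 0, 0, 0, 0], dtype=float)
q_latent = q @ U[:, :k]
sim_d0 = cosine(q_latent, docs[0])
sim_d3 = cosine(q_latent, docs[3])
```

    In a real retrieval setting the rank k is far below the vocabulary size, and the truncation merges terms that co-occur, which is the "extraction of information" the overview refers to.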

  3. Multifractality in Cardiac Dynamics

    NASA Astrophysics Data System (ADS)

    Ivanov, Plamen Ch.; Rosenblum, Misha; Stanley, H. Eugene; Havlin, Shlomo; Goldberger, Ary

    1997-03-01

    Wavelet decomposition is used to analyze the fractal scaling properties of heart beat time series. The singularity spectrum D(h) of the variations in the beat-to-beat intervals is obtained from the wavelet transform modulus maxima which contain information on the hierarchical distribution of the singularities in the signal. Multifractal behavior is observed for healthy cardiac dynamics while pathologies are associated with loss of support in the singularity spectrum.

  4. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dose distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster.
    Furthermore, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves treatment plan quality through the use of the complete beam search space; this challenging optimization problem is handled effectively by the proposed SVD acceleration.
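
    The SVD compression exploited here rests on the influence matrix being numerically low-rank; a generic sketch (synthetic matrix, not CyberKnife data) of compressing the dose computation into the leading singular subspace and back-projecting:

```python
import numpy as np

# Synthetic influence matrix with low effective rank (500 voxels, 120
# candidate beams); the factor sizes are invented for illustration.
rng = np.random.default_rng(7)
D = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 120))

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))       # numerical rank (here 8)
P = np.diag(s[:k]) @ Vt[:k]            # compressed operator, k x 120

w = rng.uniform(0.0, 1.0, 120)         # some beam-weight vector
dose_full = D @ w                      # full-dimensional dose computation
dose_back = U[:, :k] @ (P @ w)         # optimize with P, then back-project
```

    Optimizing against the k x 120 operator P instead of the 500 x 120 matrix D is where the reported speed-up comes from; the back-projection recovers the voxel-space dose exactly when the matrix is degenerate.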

  5. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dose distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster.
    Furthermore, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves treatment plan quality through the use of the complete beam search space; this challenging optimization problem is handled effectively by the proposed SVD acceleration.

  6. Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale

    2012-10-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  7. Tomographic reconstruction of tokamak plasma light emission from single image using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.

    2012-01-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  8. Use of the Morlet mother wavelet in the frequency-scale domain decomposition technique for the modal identification of ambient vibration responses

    NASA Astrophysics Data System (ADS)

    Le, Thien-Phu

    2017-10-01

    The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.

  9. Embedding Dimension Selection for Adaptive Singular Spectrum Analysis of EEG Signal.

    PubMed

    Xu, Shanzhi; Hu, Hai; Ji, Linhong; Wang, Peng

    2018-02-26

    The recorded electroencephalography (EEG) signal is often contaminated with different kinds of artifacts and noise. Singular spectrum analysis (SSA) is a powerful tool for extracting the brain rhythm from a noisy EEG signal. By analyzing the frequency characteristics of the reconstructed component (RC) and the change rate in the trace of the Toeplitz matrix, it is demonstrated that the embedding dimension is related to the frequency bandwidth of each reconstructed component, consistent with the component mixing in the singular value decomposition step. A method for selecting the embedding dimension is thereby proposed and verified on simulated EEG signals based on the Markov Process Amplitude (MPA) EEG model. Real EEG signals were also collected from the experimental subjects under both eyes-open and eyes-closed conditions. The experimental results show that, based on the embedding dimension selection method, the alpha rhythm can be extracted from the real EEG signal by the adaptive SSA, which can be effectively utilized to distinguish between the eyes-open and eyes-closed states.

  10. The principles of quantification applied to in vivo proton MR spectroscopy.

    PubMed

    Helms, Gunther

    2008-08-01

    Following the identification of metabolite signals in the in vivo MR spectrum, quantification is the procedure to estimate numerical values of their concentrations. The two essential steps are discussed in detail: analysis by fitting a model of prior knowledge, that is, the decomposition of the spectrum into the signals of singular metabolites; then, normalization of these signals to yield concentration estimates. Special attention is given to using the in vivo water signal as internal reference.

  11. Spectral Estimation: An Overdetermined Rational Model Equation Approach.

    DTIC Science & Technology

    1982-09-15

    Keywords from the report documentation page: rational spectral estimation, ARMA model, AR model, MA model, spectrum, singular value decomposition, adaptive implementation.

  12. Invariant object recognition based on the generalized discrete radon transform

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2004-04-01

    We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.

  13. Characterization of agricultural land using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Herries, Graham M.; Danaher, Sean; Selige, Thomas

    1995-11-01

    A method is defined and tested for the characterization of agricultural land from multi-spectral imagery, based on singular value decomposition (SVD) and key vector analysis. The SVD technique, which bears a close resemblance to multivariate statistical techniques, has previously been applied successfully to signal extraction from marine data and to forestry species classification. In this study the SVD technique is used as a classifier for agricultural regions, using airborne Daedalus ATM data with 1 m resolution. The specific region chosen is an experimental research farm in Bavaria, Germany. This farm grows a large number of crops within a very small region and hence is not amenable to existing techniques. A number of other significant factors render existing techniques, such as the maximum likelihood algorithm, less suitable for this area: these include highly dynamic terrain and a tessellated pattern of soil differences, which together cause large variations in the growth characteristics of the crops. The SVD technique is applied to this data set using a multi-stage classification approach, removing unwanted land-cover classes one step at a time. Typical classification accuracies for SVD are of the order of 85-100%. Preliminary results indicate that it is a fast and efficient classifier with the ability to differentiate between crop types such as wheat, rye, potatoes and clover. The results of characterizing three sub-classes of winter wheat are also shown.

  14. Analysis of Self-Associating Proteins by Singular Value Decomposition of Solution Scattering Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Tim E.; Craig, Bruce A.; Kondrashkina, Elena

    2008-07-08

    We describe a method by which a single experiment can reveal both the association model (pathway and constants) and low-resolution structures of a self-associating system. Small-angle scattering data are collected from solutions at a range of concentrations. These scattering data curves are mass-weighted linear combinations of the scattering from each oligomer. Singular value decomposition of the data yields a set of basis vectors from which the scattering curve for each oligomer is reconstructed using coefficients that depend on the association model. A search identifies the association pathway and constants that provide the best agreement between reconstructed and observed data. Using simulated data with realistic noise, our method finds the correct pathway and association constants. Depending on the simulation parameters, reconstructed curves for each oligomer differ from the ideal by 0.05% to 0.99% in median absolute relative deviation. The reconstructed scattering curves are fundamental to further analysis, including interatomic distance distribution calculation and low-resolution ab initio shape reconstruction of each oligomer in solution. This method can be applied to x-ray or neutron scattering data from small angles to moderate (or higher) resolution. Data can be taken under physiological conditions, or particular conditions (e.g., temperature) can be varied to extract fundamental association parameters (ΔH_ass, ΔS_ass).
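
    The core observation, that curves measured at several concentrations span only as many SVD basis vectors as there are oligomeric species, can be sketched with synthetic two-species data:

```python
import numpy as np

# Synthetic two-species scattering data: invented form factors, five
# mass fractions, small Gaussian noise.
rng = np.random.default_rng(8)
q = np.linspace(0.01, 0.5, 200)
monomer = np.exp(-(q * 8) ** 2)
dimer = np.exp(-(q * 12) ** 2)
fractions = np.array([0.9, 0.7, 0.5, 0.3, 0.1])   # monomer mass fraction
data = np.column_stack([f * monomer + (1 - f) * dimer for f in fractions])
data += 1e-6 * rng.standard_normal(data.shape)

U, s, Vt = np.linalg.svd(data, full_matrices=False)
n_species = int(np.sum(s > 100 * s[-1]))   # singular values above the noise
recon = U[:, :2] @ np.diag(s[:2]) @ Vt[:2] # rank-2 basis reproduces the data
```

    With the basis in hand, the association-model search only has to fit the 2 x 5 coefficient matrix rather than the full noisy data set.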

  15. Through Wall Radar Classification of Human Micro-Doppler Using Singular Value Decomposition Analysis

    PubMed Central

    Ritchie, Matthew; Ash, Matthew; Chen, Qingchao; Chetty, Kevin

    2016-01-01

    The ability to detect the presence as well as classify the activities of individuals behind visually obscuring structures is of significant benefit to police, security and emergency services in many situations. This paper presents the analysis from a series of experimental results generated using a through-the-wall (TTW) Frequency Modulated Continuous Wave (FMCW) C-Band radar system named Soprano. The objective of this analysis was to classify whether an individual was carrying an item in both hands or not using micro-Doppler information from a FMCW sensor. The radar was deployed at a standoff distance, of approximately 0.5 m, outside a residential building and used to detect multiple people walking within a room. Through the application of digital filtering, it was shown that significant suppression of the primary wall reflection is possible, significantly enhancing the target signal to clutter ratio. Singular Value Decomposition (SVD) signal processing techniques were then applied to the micro-Doppler signatures from different individuals. Features from the SVD information have been used to classify whether the person was carrying an item or walking free handed. Excellent performance of the classifier was achieved in this challenging scenario with accuracies up to 94%, suggesting that future through wall radar sensors may have the ability to reliably recognize many different types of activities in TTW scenarios using these techniques. PMID:27589760

  16. Through Wall Radar Classification of Human Micro-Doppler Using Singular Value Decomposition Analysis.

    PubMed

    Ritchie, Matthew; Ash, Matthew; Chen, Qingchao; Chetty, Kevin

    2016-08-31

    The ability to detect the presence as well as classify the activities of individuals behind visually obscuring structures is of significant benefit to police, security and emergency services in many situations. This paper presents the analysis from a series of experimental results generated using a through-the-wall (TTW) Frequency Modulated Continuous Wave (FMCW) C-Band radar system named Soprano. The objective of this analysis was to classify whether an individual was carrying an item in both hands or not using micro-Doppler information from a FMCW sensor. The radar was deployed at a standoff distance, of approximately 0.5 m, outside a residential building and used to detect multiple people walking within a room. Through the application of digital filtering, it was shown that significant suppression of the primary wall reflection is possible, significantly enhancing the target signal to clutter ratio. Singular Value Decomposition (SVD) signal processing techniques were then applied to the micro-Doppler signatures from different individuals. Features from the SVD information have been used to classify whether the person was carrying an item or walking free handed. Excellent performance of the classifier was achieved in this challenging scenario with accuracies up to 94%, suggesting that future through wall radar sensors may have the ability to reliably recognize many different types of activities in TTW scenarios using these techniques.

  17. Absorption spectrum analysis based on singular value decomposition for photoisomerization and photodegradation in organic dyes

    NASA Astrophysics Data System (ADS)

    Kawabe, Yutaka; Yoshikawa, Toshio; Chida, Toshifumi; Tada, Kazuhiro; Kawamoto, Masuki; Fujihara, Takashi; Sassa, Takafumi; Tsutsumi, Naoto

    2015-10-01

    In order to analyze the spectra of inseparable chemical mixtures, many mathematical methods have been developed to decompose series of spectral data, obtained under different conditions, into components attributable to individual species. We formulated a method based on the singular value decomposition (SVD) of linear algebra and applied it to two example systems of organic dyes, successfully reproducing absorption spectra assignable to cis/trans azocarbazole dyes from spectral data recorded after photoisomerization, and to monomer/dimer forms of cyanine dyes from data recorded during photodegradation. For the photoisomerization example, polymer films containing the azocarbazole dyes were prepared, which have shown updatable holographic stereograms of real images with high performance. Continuous monitoring of the absorption spectrum after optical excitation showed that the spectral shapes varied slightly after excitation and during the recovery process, suggesting a contribution from a generated photoisomer. Application of the method successfully identified two spectral components due to the trans and cis forms of the azocarbazoles. The temporal evolution of their weight factors suggests an important role for long-lived cis states in azocarbazole derivatives. We also applied the method to the photodegradation of cyanine dyes doped in DNA-lipid complexes, which have shown efficient and durable optical amplification and/or lasing under optical pumping. The same SVD method successfully extracted two spectral components, presumably due to the monomer and the H-type dimer. During the photodegradation process the absorption magnitude gradually decreased owing to decomposition of the molecules, and the decay rates depended strongly on the spectral component, suggesting that the long persistence of the dyes in the DNA complex is related to a weak tendency toward aggregate formation.
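
    The SVD decomposition of mixture spectra described above can be sketched on synthetic data; the two Gaussian "component spectra" and the time-varying mixing weights below are invented stand-ins for the real azocarbazole system.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 150)

# Two hypothetical component spectra (e.g. trans and cis forms),
# modeled as Gaussian absorption bands.
trans = np.exp(-((wavelengths - 480) / 30) ** 2)
cis = np.exp(-((wavelengths - 560) / 40) ** 2)

# Measured spectra at several times: mixtures with time-varying weights
# (trans decays while cis grows, mimicking photoisomerization).
t = np.linspace(0, 1, 20)[:, None]
weights = np.hstack([1 - 0.6 * t, 0.6 * t])
A = weights @ np.vstack([trans, cis])   # (times x wavelengths)

# SVD: for a two-species mixture the data matrix is (numerically) rank 2,
# so only two singular values are significant and a rank-2 reconstruction
# captures the whole data set.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2]
```

In practice the number of significant singular values reveals the number of independent species, and rotating the two abstract SVD components into physically meaningful spectra requires additional constraints (e.g. non-negativity).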

  18. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weakness of the degradation characteristic information during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak-characteristic-information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly suppress signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
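
    The first step of the recipe above (denoising order chosen from the cumulative contribution rate of the singular values, then reconstruction) can be sketched with a plain Hankel-SVD step; the synthetic signal, the 0.3 threshold, and the window length are assumptions, and the LMD/μ-SVD stages of the paper are omitted.

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(1)

# Hypothetical vibration signal: a weak periodic component buried in noise.
n = 400
tt = np.arange(n)
clean = np.sin(2 * np.pi * tt / 25)
noisy = clean + 0.5 * rng.standard_normal(n)

# Embed the signal in a Hankel (trajectory) matrix.
L = 100
H = hankel(noisy[:L], noisy[L - 1:])

# Denoising order from the cumulative contribution rate of the
# singular values (threshold value assumed for illustration).
U, s, Vt = np.linalg.svd(H, full_matrices=False)
contrib = np.cumsum(s) / s.sum()
k = int(np.searchsorted(contrib, 0.3)) + 1

# Reconstruct from the leading k components and average anti-diagonals
# back into a 1-D signal (diagonal averaging).
Hk = (U[:, :k] * s[:k]) @ Vt[:k]
denoised = np.array([np.mean(Hk[::-1].diagonal(i))
                     for i in range(-L + 1, H.shape[1])])
```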

  19. Characterising experimental time series using local intrinsic dimension

    NASA Astrophysics Data System (ADS)

    Buzug, Thorsten M.; von Stamm, Jens; Pfister, Gerd

    1995-02-01

    Experimental strange attractors are analysed with the averaged local intrinsic dimension proposed by A. Passamante et al. [Phys. Rev. A 39 (1989) 3640], which is based on singular value decomposition of local trajectory matrices. The results are compared to the Kaplan-Yorke dimension and the correlation dimension. The attractors, reconstructed with Takens' delay time coordinates from scalar velocity time series, are measured in the hydrodynamic Taylor-Couette system. A period-doubling route towards chaos, obtained from a very short Taylor-Couette cylinder, yields a sequence of experimental time series to which the local intrinsic dimension is applied.

  20. SVD analysis of Aura TES spectral residuals

    NASA Technical Reports Server (NTRS)

    Beer, Reinhard; Kulawik, Susan S.; Rodgers, Clive D.; Bowman, Kevin W.

    2005-01-01

    Singular Value Decomposition (SVD) analysis is both a powerful diagnostic tool and an effective method of noise filtering. We present the results of an SVD analysis of an ensemble of spectral residuals acquired in September 2004 from a 16-orbit Aura Tropospheric Emission Spectrometer (TES) Global Survey and compare them to alternative methods such as zonal averages. In particular, the technique highlights issues such as the orbital variation of instrument response and incompletely modeled effects of surface emissivity and atmospheric composition.

  1. [Surface electromyography signal classification using gray system theory].

    PubMed

    Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai

    2004-12-01

    A new method based on gray correlation was introduced to improve the recognition rate in artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by wavelet transform. Singular value decomposition (SVD) was then used to extract a feature vector from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computational cost and requires fewer training samples.

  2. Tomographic diffractive microscopy with agile illuminations for imaging targets in a noisy background.

    PubMed

    Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K

    2015-02-15

    Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and conventional tomographic reconstruction procedures. The targets can be further characterized with a selective quantitative inversion.

  3. A new approach for solving seismic tomography problems and assessing the uncertainty through the use of graph theory and direct methods

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Davis, T. A.

    2016-12-01

    Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
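
    The sparse-SVD step can be sketched as follows, with a random sparse matrix standing in for a real tomography design matrix. Note one simplification: scipy's `svds` returns only the k requested singular values, so this illustrates a partial spectrum rather than the complete spectrum the abstract's method provides.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Hypothetical sparse tomography design matrix G (rays x model cells):
# each row would hold the path lengths of one ray through a few cells.
n_rays, n_cells = 500, 200
G = sp.random(n_rays, n_cells, density=0.05, random_state=2, format="csr")

# Largest singular values via sparse (Lanczos-type) SVD; small singular
# values flag poorly constrained model directions needing regularization.
k = 10
U, s, Vt = svds(G, k=k)
s = np.sort(s)[::-1]   # svds returns them in ascending order

# Condition estimate of the retained subspace: a simple diagnostic for
# how much regularization the truncated problem still needs.
cond_k = float(s[0] / s[-1])
```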

  4. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging, and fusion of multiple image segments. In this process an image is initially divided into a number of blocks, and for each block the phase component of the Fourier transform is computed. The phase component of each block reflects the gray-level variation within the block but retains a large correlation across blocks. Hence a singular value decomposition (SVD) technique is applied to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and seed points are selected for segmentation. For each seed point a binary segmentation of the complete MRI is performed, yielding one binary image per seed point. A parcel-based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma detection, tuberculous granuloma identification, and intracranial neoplasm or brain tumor detection. The proposed technique is validated by comparing its results against seven state-of-the-art techniques with six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.
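
    The block-singular-value thresholding idea can be sketched as follows; for simplicity the SVD here is applied directly to the gray levels of each block rather than to the Fourier-phase component used in the paper, and the image and threshold are invented.

```python
import numpy as np

# Hypothetical gray-level image: a smooth region next to a vertical step edge.
img = np.zeros((32, 32))
img[:, 12:] = 1.0

def block_sv1(block):
    # Leading singular value of the mean-removed block: near zero for a
    # flat block, large when the block contains structured variation.
    return np.linalg.svd(block - block.mean(), compute_uv=False)[0]

# Score every 8x8 block; thresholding the scores separates "edgy" blocks
# (candidates for seed selection) from smooth ones.
scores = np.array([[block_sv1(img[i:i + 8, j:j + 8])
                    for j in range(0, 32, 8)]
                   for i in range(0, 32, 8)])
edgy = scores > 0.5 * scores.max()   # assumed simple threshold
```

Only the column of blocks straddling the step edge gets a large leading singular value; uniform blocks score (numerically) zero.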

  5. Analysis and modelling of septic shock microarray data using Singular Value Decomposition.

    PubMed

    Allanki, Srinivas; Dixit, Madhulika; Thangaraj, Paul; Sinha, Nandan Kumar

    2017-06-01

    Being a high-throughput technique, microarrays have generated enormous amounts of data, and there arises a need for more efficient techniques of analysis, in terms of speed and accuracy. Finding the differentially expressed genes based on just fold change and p-value might not extract all the vital biological signals that occur at lower gene expression levels. Besides this, numerous mathematical models have been generated to predict the clinical outcome from microarray data, while very few, if any, aim at predicting the vital genes that are important in disease progression. Such models help a basic researcher narrow down and concentrate on a promising set of genes, which leads to the discovery of gene-based therapies. In this article, as a first objective, we have used the less widely known Singular Value Decomposition (SVD) technique to build a microarray data analysis tool that works with gene expression patterns and the intrinsic structure of the data in an unsupervised manner. We have re-analysed microarray data over the clinical course of septic shock from Cazalis et al. (2014) and have shown that our proposed analysis provides additional information compared to the conventional method. As a second objective, we developed a novel mathematical model that predicts a set of vital genes in the disease progression; it works by generating samples in the continuum between health and disease using a simple normal-distribution-based random number generator. We also verify that most of the predicted genes are indeed related to septic shock. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. On the singular perturbations for fractional differential equation.

    PubMed

    Atangana, Abdon

    2014-01-01

    The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We make use of three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation. These methods include the regular perturbation method, a new development of the variational iteration method, and the homotopy decomposition method.

  7. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration, so they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. We then introduce a tractable relaxation of our rank function, arriving at a convex problem involving much smaller-scale matrix trace norm minimizations. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve the problem. Promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
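
    The workhorse inside trace-norm minimization methods of this kind is singular value thresholding, the proximal operator of the nuclear (trace) norm. The sketch below shows that subroutine on synthetic data; it is not the authors' full TNCP algorithm, and the test matrix and threshold are invented.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm.

    Shrinks each singular value by tau and discards those that fall
    below it, returning a low-rank estimate of X.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(3)

# Rank-3 matrix plus noise; SVT recovers a low-rank estimate because the
# noise singular values fall below the threshold and are zeroed out.
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
noisy = A + 0.1 * rng.standard_normal((50, 40))
est = svt(noisy, tau=2.0)
```

In an ADMM solver this operator is applied repeatedly, to each mode's (much smaller) factor matrix rather than to huge unfoldings, which is where the speedup claimed above comes from.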

  8. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed here for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used; they represent the number of persons injured in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were sampled weekly from 2000:1 to 2014:12. The performance of MSVD is compared with the low- and high-frequency decomposition of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
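
    The Hankel-matrix low/high-frequency split at the core of such a decomposition can be sketched at a single level; the two-row embedding, the synthetic weekly series, and the rank-1 split are illustrative assumptions, not the paper's full multilevel scheme.

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(4)

# Hypothetical weekly accident counts: slow seasonal trend plus noise.
n = 156
t = np.arange(n)
series = 50 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 3, n)

# One SVD level: embed the series in a 2-row Hankel matrix and split it
# into a rank-1 (low-frequency) part and the residual (high-frequency).
H = hankel(series[:2], series[1:])
U, s, Vt = np.linalg.svd(H, full_matrices=False)
H_low = np.outer(U[:, 0] * s[0], Vt[0])
H_high = H - H_low

def unhankel(M):
    # Average the anti-diagonals back into a series of length n.
    return np.array([np.mean(M[::-1].diagonal(i))
                     for i in range(-1, M.shape[1])])

low, high = unhankel(H_low), unhankel(H_high)
```

The two components sum back to the original series; each is then forecast separately and the forecasts recombined.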

  9. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain

    PubMed Central

    Barba, Lida; Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed here for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used; they represent the number of persons injured in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were sampled weekly from 2000:1 to 2014:12. The performance of MSVD is compared with the low- and high-frequency decomposition of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT. PMID:28261267

  10. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.

  11. Analytic wave solution with helicon and Trivelpiece-Gould modes in an annular plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Johan; Pavarin, Daniele; Walker, Mitchell

    2009-11-26

    Helicon sources in an annular configuration have applications for plasma thrusters. The theory of Klozenberg et al. [J. P. Klozenberg, B. McNamara and P. C. Thonemann, J. Fluid Mech. 21 (1965) 545-563] for the propagation and absorption of helicon and Trivelpiece-Gould modes in a cylindrical plasma has been generalized to annular plasmas. Analytic solutions are found in the annular case as well, but in the presence of both helicon and Trivelpiece-Gould modes, a heterogeneous linear system of equations must be solved to match the plasma and inner and outer vacuum solutions. The linear system can be ill-conditioned or even exactly singular, leading to a dispersion relation with a discrete set of discontinuities. The coefficients for the analytic solution are calculated by solving the linear system with singular-value decomposition.

  12. Characteristic classes, singular embeddings, and intersection homology.

    PubMed

    Cappell, S E; Shaneson, J L

    1987-06-01

    This note announces some results on the relationship between global invariants and local topological structure. The first section gives a local-global formula for Pontrjagin classes or L-classes. The second section describes a corresponding decomposition theorem on the level of complexes of sheaves. A final section mentions some related aspects of "singular knot theory" and the study of nonisolated singularities. Equivariant analogues, with local-global formulas for Atiyah-Singer classes and their relations to G-signatures, will be presented in a future paper.

  13. Optical systolic solutions of linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Neuman, C. P.; Casasent, D.

    1984-01-01

    The philosophy of, and the data encodings possible in, the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include linear algebraic algorithms such as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.

  14. Impacts of El Niño and El Niño Modoki on the precipitation in Colombia

    NASA Astrophysics Data System (ADS)

    Córdoba Machado, Samir; Palomino Lemus, Reiner; Raquel Gámiz Fortis, Sonia; Castro Díez, Yolanda; Jesús Esteban Parra, María

    2015-04-01

    The influence of the tropical Pacific SST on precipitation in Colombia is examined using 341 stations covering the period 1979-2009. Through a Singular Value Decomposition (SVD) the two main coupled variability modes show SST patterns clearly associated with El Niño (EN) and El Niño Modoki (ENM), respectively, presenting great coupling strength with the corresponding seasonal precipitation modes in Colombia. The results reveal that, mainly in winter and summer, EN and ENM events are associated with a significant rainfall decrease over northern, central, and western Colombia. The opposite effect occurs in some localities during spring, summer, and autumn. The southwestern region of Colombia exhibits an opposite behaviour connected to EN and ENM events during years when both events do not coexist, showing that the seasonal precipitation response is not linear. The Partial Regression Analysis used to quantify separately the influence of the two types of ENSO on seasonal precipitation shows the importance of both types in the reconstruction process. The results obtained in this study establish the base for modeling and forecasting the seasonal precipitation in Colombia using the tropical Pacific SST associated with El Niño and El Niño Modoki.

    Keywords: Seasonal precipitation, Tropical Pacific SST, El Niño, El Niño Modoki, Singular Value Decomposition, Colombia.

    Acknowledgements: This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).
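
    The coupled-mode SVD used in such studies (often called maximum covariance analysis) can be sketched on synthetic anomaly fields; the grid sizes, noise level, and single shared mode below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical anomaly fields: 30 seasons, SST at 40 grid points,
# precipitation at 25 stations, sharing one coupled mode in time.
nt, nx, ny = 30, 40, 25
signal = rng.standard_normal(nt)
sst = np.outer(signal, rng.standard_normal(nx)) + 0.3 * rng.standard_normal((nt, nx))
pre = np.outer(signal, rng.standard_normal(ny)) + 0.3 * rng.standard_normal((nt, ny))

# Remove time means, form the cross-covariance matrix, and take its SVD;
# the leading pair of singular vectors is the dominant coupled mode.
sst -= sst.mean(0)
pre -= pre.mean(0)
C = sst.T @ pre / (nt - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Squared covariance fraction explained by the first coupled mode.
scf1 = float(s[0] ** 2 / np.sum(s ** 2))
```

`U[:, 0]` and `Vt[0]` are the spatial patterns of the coupled SST and precipitation mode; projecting the fields onto them gives the expansion coefficient time series whose correlation measures the coupling strength.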

  15. Method of assessing the state of a rolling bearing based on the relative compensation distance of multiple-domain features and locally linear embedding

    NASA Astrophysics Data System (ADS)

    Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.

    2017-03-01

    To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.

  16. Assessing protein conformational sampling methods based on bivariate lag-distributions of backbone angles

    PubMed Central

    Maadooliat, Mehdi; Huang, Jianhua Z.

    2013-01-01

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence–structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/∼madoliat/LagSVD) that can be used to produce informative animations. PMID:22926831

  17. New solution decomposition and minimization schemes for Poisson-Boltzmann equation in calculation of biomolecular electrostatics

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan

    2014-10-01

    The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model in the calculation of electrostatic potential energy for biomolecules in ionic solvent, but its numerical solution remains a challenge due to its strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To effectively deal with such a challenge, in this paper, new solution decomposition and minimization schemes are proposed, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.

  18. SU-G-JeP4-03: Anomaly Detection of Respiratory Motion by Use of Singular Spectrum Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotoku, J; Kumagai, S; Nakabayashi, S

    Purpose: The implementation and realization of automatic anomaly detection of respiratory motion is a very important technique to prevent accidental damage during radiation therapy. Here, we propose an automatic anomaly detection method using singular value decomposition analysis. Methods: The anomaly detection procedure consists of four parts: 1) measurement of normal respiratory motion data of a patient; 2) calculation of a trajectory matrix representing normal time-series features; 3) real-time monitoring and calculation of a trajectory matrix of the real-time data; 4) calculation of an anomaly score from the similarity of the two feature matrices. Patient motion was observed by a marker-less tracking system using a depth camera. Results: Two types of motion, e.g., coughing and sudden stops of breathing, were successfully detected in our real-time application. Conclusion: Automatic anomaly detection of respiratory motion using singular spectrum analysis was successful for coughs and sudden stops of breathing. The clinical use of this algorithm is very promising. This work was supported by JSPS KAKENHI Grant Number 15K08703.
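
    The trajectory-matrix comparison in the Methods can be sketched as follows; the breathing waveform, window length, subspace rank, and the residual-energy anomaly score are all illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np
from scipy.linalg import hankel

def trajectory_subspace(x, L=20, r=2):
    # Leading left singular vectors of the trajectory (Hankel) matrix:
    # a low-rank description of the normal waveform shape.
    H = hankel(x[:L], x[L - 1:])
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :r]

def anomaly_score(U_ref, x, L=20):
    # Fraction of the test trajectory's energy lying outside the
    # reference subspace; near 0 for motion similar to training data.
    H = hankel(x[:L], x[L - 1:])
    resid = H - U_ref @ (U_ref.T @ H)
    return float(np.linalg.norm(resid) / np.linalg.norm(H))

t = np.arange(200)
normal = np.sin(2 * np.pi * t / 40)                    # regular breathing (assumed)
cough = normal.copy()
cough[100:140] += np.sin(2 * np.pi * t[100:140] / 5)   # sudden fast component

U_ref = trajectory_subspace(normal[:100])
score_normal = anomaly_score(U_ref, normal[100:])
score_cough = anomaly_score(U_ref, cough[100:])
```

In a real-time system the score would be recomputed on each incoming window and compared against a calibrated alarm threshold.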

  19. On the Singular Perturbations for Fractional Differential Equation

    PubMed Central

    Atangana, Abdon

    2014-01-01

    The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We make use of three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation. These methods include the regular perturbation method, a new development of the variational iteration method, and the homotopy decomposition method. PMID:24683357

  20. Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.

    PubMed

    Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem

    2006-06-01

    Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data to classify EEG according to which mental task is being performed. Examples are presented of the filtering of various artifacts and results are shown of classification of EEG from five mental tasks using committees of decision trees.

  1. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.

  2. Noncolocated Time-Reversal MUSIC: High-SNR Distribution of Null Spectrum

    NASA Astrophysics Data System (ADS)

    Ciuonzo, Domenico; Rossi, Pierluigi Salvo

    2017-04-01

    We derive the asymptotic distribution of the null spectrum of the well-known Multiple Signal Classification (MUSIC) method in its computational Time-Reversal (TR) form. The result pertains to a single-frequency, non-colocated multistatic scenario, and several TR-MUSIC variants are investigated. The analysis builds upon the first-order perturbation of the singular value decomposition and allows a simple characterization of the null-spectrum moments (up to second order). This enables a comparison of the variants in terms of spectrum stability. Finally, a numerical analysis is provided to confirm the theoretical findings.

  3. Notes on implementation of Coulomb friction in coupled dynamical simulations

    NASA Technical Reports Server (NTRS)

    Vandervoort, R. J.; Singh, R. P.

    1987-01-01

    A coupled dynamical system is defined as an assembly of rigid/flexible bodies that may be coupled by kinematic connections. The interfaces between bodies are modeled using hinges having 0 to 6 degrees of freedom. The equations of motion are presented for a mechanical system of n flexible bodies in a topological tree configuration. The Lagrange form of the D'Alembert principle was employed to derive the equations. The equations of motion are augmented by the kinematic constraint equations. This augmentation is accomplished via the method of singular value decomposition.

  4. Explosion Source Similarity Analysis via SVD

    NASA Astrophysics Data System (ADS)

    Yedlin, Matthew; Ben Horin, Yochai; Margrave, Gary

    2016-04-01

    An important seismological ingredient for establishing a regional seismic nuclear discriminant is the similarity analysis of a sequence of explosion sources. To investigate source similarity, we are fortunate to have access to a sequence of 1805 three-component recordings of quarry blasts, shot from March 2002 to January 2015. The centroid of these blasts has an estimated location of 36.3E, 29.9N. All blasts were detonated by JPMC (Jordan Phosphate Mines Co.). All data were recorded at the Israeli NDC station HFRI, located at 30.03N, 35.03E. Data were first winnowed based on the distribution of maximum amplitudes in the neighborhood of the P-wave arrival. The winnowed data were then detrended using the algorithm of Cleveland et al. (1990). The detrended data were bandpass filtered between 0.1 and 12 Hz using an eighth-order Butterworth filter. Finally, the data were sorted by maximum trace amplitude. Two similarity analysis approaches were used. First, for each component, the entire suite of traces was decomposed into its eigenvector representation by employing the singular value decomposition (SVD). The data were then reconstructed using 10 percent of the singular values, which enhanced the S-wave and surface-wave arrivals. The results of this first method were then compared to those of the second analysis method, based on the eigenface decomposition of Turk and Pentland (1991). While both methods yield similar results in enhancing data arrivals and reducing data redundancy, more analysis is required to calibrate the recorded data to charge size, a quantity that was not available for the current study. References: Cleveland, R. B., Cleveland, W. S., McRae, J. E., and Terpenning, I., STL: A seasonal-trend decomposition procedure based on loess, Journal of Official Statistics, 6(1), 3-73, 1990. Turk, M. and Pentland, A., Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3(1), 71-86, 1991.
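    The reconstruction step described above, keeping only the leading 10 percent of singular values, can be sketched generically in NumPy (synthetic traces, not the study's data; all names and sizes here are illustrative):

```python
import numpy as np

# Rank-reduction sketch: reconstruct a suite of near-identical noisy
# traces from the leading 10 percent of singular values.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
# 50 "traces": a common decaying waveform plus independent noise
common = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
traces = common + 0.1 * rng.standard_normal((50, 200))

U, s, Vt = np.linalg.svd(traces, full_matrices=False)
k = max(1, int(0.1 * len(s)))              # keep 10 percent of the values
recon = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# the rank-k reconstruction is closer to the noise-free signal
err_full = np.linalg.norm(traces - common)
err_recon = np.linalg.norm(recon - common)
assert err_recon < err_full
```

Because the traces share a common source signature, most of the signal energy lives in a few leading singular triplets, so discarding the rest suppresses incoherent noise.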

  5. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive method for selecting snapshots in time that limits offline training costs by choosing snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
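    The single-pass incremental SVD mentioned above can be illustrated with a generic rank-one update scheme (a textbook-style sketch of the idea, not the authors' algorithm; the function name, tolerance, and sizes are invented):

```python
import numpy as np

def incremental_svd(snapshots, rank):
    """Fold 1-D snapshots into a truncated SVD one at a time, keeping at
    most `rank` components, so only O(n * rank) storage is ever needed."""
    n = snapshots[0].size
    U, s = np.zeros((n, 0)), np.zeros(0)
    for x in snapshots:
        proj = U.T @ x                      # coefficients in current basis
        resid = x - U @ proj                # part outside the basis
        r = np.linalg.norm(resid)
        q = resid / r if r > 1e-12 else np.zeros(n)
        # small core matrix [[diag(s), proj], [0, r]] of the augmented data
        K = np.zeros((s.size + 1, s.size + 1))
        K[:s.size, :s.size] = np.diag(s)
        K[:s.size, -1] = proj
        K[-1, -1] = r
        Uk, s, _ = np.linalg.svd(K)         # SVD of a tiny matrix only
        U = np.column_stack([U, q]) @ Uk
        if s.size > rank:                   # truncate: bounded memory
            U, s = U[:, :rank], s[:rank]
    return U, s

# the one-pass result matches a batch SVD on low-rank data
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 12))
_, s_inc = incremental_svd(list(A.T), rank=6)
assert np.allclose(s_inc[:3], np.linalg.svd(A, compute_uv=False)[:3])
```

Each update costs a small dense SVD of size at most `rank + 1`, which is what makes the one-pass approach attractive when snapshots cannot all be held in memory.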

  6. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)

    2002-01-01

    Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.
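    The SVD reduction described above can be illustrated generically: applying a transfer matrix through its r dominant singular triplets instead of in full (invented sizes, not the paper's HRTF data):

```python
import numpy as np

# Toy low-rank transfer matrix: 64 outputs driven by 500 sources,
# but with only 4 independent "patterns" (exact rank 4).
rng = np.random.default_rng(4)
n_out, n_src, r = 64, 500, 4
H = rng.standard_normal((n_out, r)) @ rng.standard_normal((r, n_src))

U, s, Vt = np.linalg.svd(H, full_matrices=False)
q = rng.standard_normal(n_src)                 # source strengths

full = H @ q                                   # n_out * n_src multiplies
# reduced path: r inner products with the sources, then r small combines
reduced = U[:, :r] @ (s[:r] * (Vt[:r] @ q))
assert np.allclose(full, reduced)
```

When the transfer matrix is (numerically) low rank, the reduced path replaces a full matrix-vector product with r source-side inner products plus a small combination, which is the source of the real-time savings.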

  7. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods for data acquisition and inversion should be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence, based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence, to collect transversal relaxation signals. The echo spacing is not constant but varies across different windows, depending on prior knowledge or customer requirements. We use entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard the small singular values which cause inversion instability. A hybrid algorithm combining iterative TSVD and a simultaneous iterative reconstruction technique is implemented to reach global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and related fields.
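    The role of truncation in stabilizing such ill-posed inversions can be sketched with a generic exponential-kernel example (a toy TSVD demonstration, not the authors' entropy-based variant; grid sizes and thresholds are invented):

```python
import numpy as np

# Fredholm problem g = K f: exponential decay kernel over a T2 grid.
t = np.linspace(0.01, 3, 100)[:, None]     # echo times
T2 = np.logspace(-2, 0.5, 40)[None, :]     # relaxation-time grid
K = np.exp(-t / T2)                        # severely ill-conditioned

# true distribution: one log-normal-shaped peak; data with small noise
f_true = np.exp(-0.5 * ((np.log10(T2[0]) - 0.0) / 0.2) ** 2)
g = K @ f_true + 1e-4 * np.random.default_rng(1).standard_normal(100)

U, s, Vt = np.linalg.svd(K, full_matrices=False)
keep = s > 1e-3 * s[0]                     # discard tiny singular values
f_tsvd = Vt[keep].T @ ((U[:, keep].T @ g) / s[keep])

# naive inversion with all singular values amplifies the noise
f_naive = Vt.T @ ((U.T @ g) / s)
assert np.linalg.norm(f_tsvd - f_true) < np.linalg.norm(f_naive - f_true)
```

Dividing by the smallest singular values multiplies the noise by enormous factors; truncating them trades a small bias for a large gain in stability.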

  8. Improved control of the betatron coupling in the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Persson, T.; Tomás, R.

    2014-05-01

    The control of the betatron coupling is of importance for safe beam operation in the LHC. In this article we show recent advancements in methods and algorithms to measure and correct coupling. The benefit of using a more precise formula relating the resonance driving term f1001 to the ΔQmin is presented. The quality of the coupling measurements is improved, by about a factor of 3, by selecting beam position monitor (BPM) pairs with phase advances close to π/2 and through data cleaning using singular value decomposition with an optimal number of singular values. These improvements benefit the automatic coupling correction based on injection oscillations, also presented in this article. Furthermore, a proposed coupling feedback for the LHC is presented. The system will rely on measurements from BPMs equipped with a new type of high-resolution electronics, diode orbit and oscillation, which will be operational when the LHC restarts in 2015. The feedback will combine the coupling measurements from the available BPMs in order to calculate the best correction.

  9. Two-stage decompositions for the analysis of functional connectivity for fMRI with application to Alzheimer’s disease risk

    PubMed Central

    Caffo, Brian S.; Crainiceanu, Ciprian M.; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H.; Bassett, Susan Spear; Pekar, James J.

    2010-01-01

    Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer’s disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer’s disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles. PMID:20227508
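    A generic two-stage SVD of this flavor can be sketched as follows (our reading of the abstract, with invented dimensions and a simulated shared network; not the study's pipeline):

```python
import numpy as np

# Simulated cohort: every subject's time-by-voxel matrix contains one
# shared spatial pattern (a "population network") plus noise.
rng = np.random.default_rng(2)
n_sub, n_time, n_vox, k = 8, 60, 200, 5
pop_map = rng.standard_normal(n_vox)           # shared spatial pattern
subjects = [np.outer(rng.standard_normal(n_time), pop_map)
            + 0.5 * rng.standard_normal((n_time, n_vox))
            for _ in range(n_sub)]

# stage 1: top-k right singular vectors (spatial maps) per subject
stage1 = np.vstack([np.linalg.svd(X, full_matrices=False)[2][:k]
                    for X in subjects])        # (n_sub * k, n_vox)

# stage 2: SVD of the stacked maps gives population-level components
_, _, Vt2 = np.linalg.svd(stage1, full_matrices=False)
group = Vt2[0]

# leading group component recovers the shared pattern (up to sign)
corr = abs(np.corrcoef(group, pop_map)[0, 1])
assert corr > 0.9
```

The first stage compresses each subject independently, so the second-stage SVD only ever sees a small stacked matrix, which is what keeps the approach tractable for many subjects.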

  10. Two-stage decompositions for the analysis of functional connectivity for fMRI with application to Alzheimer's disease risk.

    PubMed

    Caffo, Brian S; Crainiceanu, Ciprian M; Verduzco, Guillermo; Joel, Suresh; Mostofsky, Stewart H; Bassett, Susan Spear; Pekar, James J

    2010-07-01

    Functional connectivity is the study of correlations in measured neurophysiological signals. Altered functional connectivity has been shown to be associated with a variety of cognitive and memory impairments and dysfunction, including Alzheimer's disease. In this manuscript we use a two-stage application of the singular value decomposition to obtain data driven population-level measures of functional connectivity in functional magnetic resonance imaging (fMRI). The method is computationally simple and amenable to high dimensional fMRI data with large numbers of subjects. Simulation studies suggest the ability of the decomposition methods to recover population brain networks and their associated loadings. We further demonstrate the utility of these decompositions in a functional logistic regression model. The method is applied to a novel fMRI study of Alzheimer's disease risk under a verbal paired associates task. We found an indication of alternative connectivity in clinically asymptomatic at-risk subjects when compared to controls, which was not significant in the light of multiple comparisons adjustment. The relevant brain network loads primarily on the temporal lobe and overlaps significantly with the olfactory areas and temporal poles.

  11. [Analyzing consumer preference by using the latest semantic model for verbal protocol].

    PubMed

    Tamari, Yuki; Takemura, Kazuhisa

    2012-02-01

    This paper examines consumers' preferences for competing brands by using a preference model of verbal protocols. Participants were 150 university students, who reported their opinions and feelings about McDonald's and Mos Burger (competing hamburger restaurant chains in Japan). Their verbal protocols were analyzed using the singular value decomposition method, and the latent decision frames were estimated. Verbal protocols having a large value in the decision frames can be interpreted as attributes that consumers emphasize. Based on the estimated decision frames, we predicted consumers' preferences using logistic regression. The results indicate that the decision frames projected from the verbal protocol data explained consumers' preferences effectively.

  12. Control of complex dynamics and chaos in distributed parameter systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakravarti, S.; Marek, M.; Ray, W.H.

    This paper discusses a methodology for controlling complex dynamics and chaos in distributed parameter systems. The reaction-diffusion system with Brusselator kinetics, where the torus-doubling or quasi-periodic (two characteristic incommensurate frequencies) route to chaos exists in a defined range of parameter values, is used as an example. Poincare maps are used for characterization of quasi-periodic and chaotic attractors. The dominant modes or topos, which are inherent properties of the system, are identified by means of the Singular Value Decomposition. Tested modal feedback control schemes based on the identified dominant spatial modes confirm the possibility of stabilizing simple quasi-periodic trajectories within the complex quasi-periodic or chaotic spatiotemporal patterns.

  13. Understanding perception of active noise control system through multichannel EEG analysis.

    PubMed

    Bagha, Sangeeta; Tripathy, R K; Nanda, Pranati; Preetam, C; Das, Debi Prasad

    2018-06-01

    In this Letter, a method is proposed to investigate the effect of noise with and without active noise control (ANC) on the multichannel electroencephalogram (EEG) signal. The multichannel EEG signal is recorded during different listening conditions: silent, music, noise, ANC with background noise, and ANC with both background noise and music. The multiscale analysis of the EEG signal of each channel is performed using the discrete wavelet transform. Multivariate multiscale matrices are formulated based on the sub-band signals of each EEG channel. The singular value decomposition is applied to the multivariate matrices of the multichannel EEG at significant scales. The singular value features at significant scales and the extreme learning machine classifier with three different activation functions are used for classification of the multichannel EEG signal. The experimental results demonstrate that, for the ANC-with-noise and ANC-with-noise-and-music classes, the proposed method has sensitivity values of 75.831% (p < 0.001) and 99.31% (p < 0.001), respectively. The method has an accuracy of 83.22% for the classification of EEG signals with music and ANC with music as stimuli. The important finding of this study is that with the introduction of ANC, music can be better perceived by the human brain.

  14. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.

  15. Singularity analysis based on wavelet transform of fractal measures for identifying geochemical anomaly in mineral exploration

    NASA Astrophysics Data System (ADS)

    Chen, Guoxiong; Cheng, Qiuming

    2016-02-01

    Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends the conventional moving average to a kernel-based operator for implementing LSA. Finally, the novel LSA is validated in a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with LSA implemented using the moving-average method, the novel WMD-based LSA better identified weak geochemical anomalies associated with mineralization in the covered area.

  16. An examination of the concept of driving point receptance

    NASA Astrophysics Data System (ADS)

    Sheng, X.; He, Y.; Zhong, T.

    2018-04-01

    In the field of vibration, driving point receptance is a well-established and widely applied concept. However, as demonstrated in this paper, when a driving point receptance is calculated using the finite element (FE) method with solid elements, it does not converge as the FE mesh becomes finer, suggesting that there is a singularity. Hence, the concept of driving point receptance deserves a rigorous examination. In this paper, it is first shown that, for a point harmonic force applied on the surface of an elastic half-space, the Boussinesq formula can be applied to calculate the displacement amplitude of the surface if the response point is sufficiently close to the load. Second, by applying the Betti reciprocal theorem, it is shown that the displacement of an elastic body near a point harmonic force can be decomposed into two parts, the first being the displacement of an elastic half-space. This decomposition is useful, since it provides a solid basis for the introduction of a contact spring between a wheel and a rail in interaction. However, according to the Boussinesq formula, this decomposition also leads to the conclusion that a driving point receptance is infinite (singular), and would therefore be undefinable. Nevertheless, driving point receptances have been calculated using different methods. Since the singularity identified in this paper was not appreciated, no account was taken of it in these calculations. Thus, the validity of these calculation methods must be examined; this constitutes the third part of the paper. As the final development of the paper, the above decomposition is utilised to define and determine the driving point receptances required for dealing with wheel/rail interactions.

  17. Coherent vorticity extraction in resistive drift-wave turbulence: Comparison of orthogonal wavelets versus proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B

    2011-01-01

    We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation; subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent, structureless random part. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher-order statistics.

  18. Use of principal velocity patterns in the analysis of structural acoustic optimization.

    PubMed

    Johnson, Wayne M; Cunefare, Kenneth A

    2007-02-01

    This work presents an application of principal velocity patterns in the analysis of the structural acoustic design optimization of an eight-ply composite cylindrical shell. The approach consists of performing structural acoustic optimizations of a composite cylindrical shell subject to external harmonic monopole excitation. The ply angles are used as the design variables in the optimization. The results of the ply-angle design variable formulation are interpreted using the singular value decomposition of the interior acoustic potential energy. The decomposition of the acoustic potential energy provides surface velocity patterns associated with lower levels of interior noise. These surface velocity patterns are shown to correspond to those from the structural acoustic optimization results. Thus, it is demonstrated that the capacity to design multi-ply composite cylinders for quiet interiors is determined by how well the cylinder can be designed to exhibit particular surface velocity patterns associated with lower noise levels.

  19. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying components, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former removes high-order frequency modulation (FM) so that the latter can infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised-demodulation method based on singular value decomposition, the filter-based parametric time-frequency analysis method, and the empirical-mode-decomposition-based method in recovering the amplitude and phase of superimposed components.

  20. Singular value description of a digital radiographic detector: Theory and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyprianou, Iacovos S.; Badano, Aldo; Gallas, Brandon D.

    The H operator represents the deterministic performance of any imaging system. For a linear, digital imaging system, this system operator can be written in terms of a matrix, H, that describes the deterministic response of the system to a set of point objects. A singular value decomposition of this matrix results in a set of orthogonal functions (singular vectors) that form the system basis. A linear combination of these vectors completely describes the transfer of objects through the linear system, where the singular value associated with each singular vector describes the magnitude with which that contribution to the object is transferred through the system. This paper is focused on the measurement, analysis, and interpretation of the H matrix for digital x-ray detectors. A key ingredient in the measurement of the H matrix is the detector response to a single x ray (or infinitesimal x-ray beam). The authors have developed a method to estimate the 2D detector shift-variant, asymmetric ray response function (RRF) from multiple measured line response functions (LRFs) using a modified edge technique. The RRF measurements cover a range of x-ray incident angles from 0 deg. (equivalent location at the detector center) to 30 deg. (equivalent location at the detector edge) for a standard radiographic or cone-beam CT geometric setup. To demonstrate the method, three beam qualities were tested using the inherent, Lu/Er, and Yb beam filtration. The authors show that measures using the LRF, derived from an edge measurement, underestimate the system's performance when compared with the H matrix derived using the RRF. Furthermore, the authors show that edge measurements must be performed in multiple directions in order to capture rotational asymmetries of the RRF. The authors interpret the results of the H matrix SVD and provide correlations with the familiar MTF methodology. Discussion is made of the benefits of the H matrix technique with regard to signal detection theory and the characterization of shift-variant imaging systems.

  1. A non-orthogonal decomposition of flows into discrete events

    NASA Astrophysics Data System (ADS)

    Boxx, Isaac; Lewalle, Jacques

    1998-11-01

    This work is based on the formula for the inverse Hermitian wavelet transform. A signal can be interpreted as a (non-unique) superposition of near-singular, partially overlapping events arising from Dirac functions and/or their derivatives combined with diffusion. (No dynamics is implied: the dimensionless diffusion is related to the definition of the analyzing wavelets.) These events correspond to local maxima of spectral energy density. We successfully fitted model events of various orders on a succession of fields, ranging from elementary signals to one-dimensional hot-wire traces. We document edge effects, event overlap, and its implications for the algorithm. The interpretation of the discrete singularities as flow events (such as coherent structures) and the fundamental non-uniqueness of the decomposition are discussed. The dynamics of these events will be examined in the companion paper.

  2. The relationship between two fast/slow analysis techniques for bursting oscillations

    PubMed Central

    Teka, Wondimu; Tabak, Joël; Bertram, Richard

    2012-01-01

    Bursting oscillations in excitable systems reflect multi-timescale dynamics. These oscillations have often been studied in mathematical models by splitting the equations into fast and slow subsystems. Typically, one treats the slow variables as parameters of the fast subsystem and studies the bifurcation structure of this subsystem. This has key features such as a z-curve (stationary branch) and a Hopf bifurcation that gives rise to a branch of periodic spiking solutions. In models of bursting in pituitary cells, we have recently used a different approach that focuses on the dynamics of the slow subsystem. Characteristic features of this approach are folded node singularities and a critical manifold. In this article, we investigate the relationships between the key structures of the two analysis techniques. We find that the z-curve and Hopf bifurcation of the two-fast/one-slow decomposition are closely related to the voltage nullcline and folded node singularity of the one-fast/two-slow decomposition, respectively. They become identical in the double singular limit in which voltage is infinitely fast and calcium is infinitely slow. PMID:23278052

  3. Decomposition of Time Scales in Linear Systems and Markovian Decision Processes.

    DTIC Science & Technology

    1980-11-01

    [The scanned abstract is garbled table-of-contents text, covering an introduction, eigenstructure analysis, ordering of state variables, and an example based on an 8th-order power system model.] In Chapter 3 we consider the time scale decomposition of singularly perturbed systems.

  4. Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence

    NASA Astrophysics Data System (ADS)

    Hatch, David R.

    This thesis presents an analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. To address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition (POD) of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient (ITG) driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, high-dimensional dataset: the 5-D (plus time) gyrokinetic distribution function.
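    The HOSVD mentioned above can be illustrated with the standard textbook construction on a small 3-way array (a generic sketch, not the thesis code):

```python
import numpy as np

def unfold(T, mode):
    """Matricize a tensor along `mode`: that axis becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Multiply tensor T along `mode` by matrix M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1),
                       0, mode)

# HOSVD: one SVD per mode unfolding gives the factor matrices; the core
# is the tensor expressed in those bases.
rng = np.random.default_rng(5)
T = rng.standard_normal((4, 5, 6))
Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]

core = T
for m, U in enumerate(Us):
    core = mode_mult(core, U.T, m)

# with untruncated factors the reconstruction is exact
R = core
for m, U in enumerate(Us):
    R = mode_mult(R, U, m)
assert np.allclose(R, T)
```

Truncating the columns of each factor matrix yields a compressed Tucker representation, which is what makes HOSVD useful for very large distribution-function datasets.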

  5. High quality high spatial resolution functional classification in low dose dynamic CT perfusion using singular value decomposition (SVD) and k-means clustering

    NASA Astrophysics Data System (ADS)

    Pisana, Francesco; Henzler, Thomas; Schönberg, Stefan; Klotz, Ernst; Schmidt, Bernhard; Kachelrieß, Marc

    2017-03-01

Dynamic CT perfusion acquisitions are intrinsically high-dose examinations, due to repeated scanning. To keep radiation dose under control, relatively noisy images are acquired. Noise is then further enhanced during the extraction of functional parameters from the post-processing of the voxels' time attenuation curves (TACs), so some smoothing filter normally needs to be employed to better visualize any perfusion abnormality, at the cost of spatial resolution. In this study we propose a new method to detect perfusion abnormalities that keeps both high spatial resolution and high CNR. To do this, we first perform the singular value decomposition (SVD) of the original noisy spatial-temporal data matrix to extract basis functions of the TACs. Then we iteratively cluster the voxels based on a smoothed version of the three most significant singular vectors. Finally, we create high spatial resolution 3D volumes in which each voxel is assigned its distance from the centroid of each cluster, showing how functionally similar each voxel is to the others. The method was tested on three noisy clinical datasets: one brain perfusion case with an occlusion in the left internal carotid artery, one healthy brain perfusion case, and one liver case with an enhancing lesion. Our method successfully detected all perfusion abnormalities with higher spatial precision than the functional maps obtained with a commercially available software package. We conclude this method might be employed to obtain a rapid qualitative indication of functional abnormalities in low dose dynamic CT perfusion datasets. The method appears to be very robust with respect to both spatial and temporal noise and does not require any special a priori assumption.
While more robust with respect to noise and offering higher spatial resolution and CNR than the functional maps, our method is not quantitative; a potential use in clinical routine could be as a second reader to assist in map evaluation, or to guide dataset smoothing before the modeling step.
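As a rough illustration of the pipeline this record describes, the sketch below (synthetic data and all parameter choices are my own, not the authors') runs an SVD on a voxels-by-time matrix, clusters voxels on their loadings onto the three most significant singular vectors with a plain k-means, and assigns each voxel its distance to the cluster centroids:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic space-time data: 200 voxels x 30 time points, built from two
# distinct time-attenuation curve (TAC) shapes plus noise.
t = np.linspace(0, 1, 30)
tac_a = np.exp(-((t - 0.3) / 0.10) ** 2)      # early-enhancing tissue
tac_b = np.exp(-((t - 0.7) / 0.15) ** 2)      # late-enhancing tissue
labels_true = np.repeat([0, 1], 100)
X = np.where(labels_true[:, None] == 0, tac_a, tac_b)
X = X + 0.05 * rng.standard_normal((200, 30))

# SVD of the spatial-temporal matrix; right singular vectors are temporal
# basis functions, and U*s gives each voxel's loading on them.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :3] * s[:3]                     # top-3 component loadings per voxel

# Plain k-means (k=2) on the 3-D scores, in place of the paper's iterative scheme.
k = 2
centroids = scores[rng.choice(len(scores), k, replace=False)]
for _ in range(50):
    d = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    centroids = np.array([scores[assign == j].mean(axis=0)
                          if np.any(assign == j) else centroids[j]
                          for j in range(k)])

# High-resolution "functional similarity" map: distance of every voxel to
# the centroid of its own cluster.
dist_to_centroid = np.linalg.norm(scores - centroids[assign], axis=1)
```

Because the clustering runs on smoothed low-dimensional scores while the distances are computed per voxel, the output map keeps the original spatial resolution.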

  6. Atomic-batched tensor decomposed two-electron repulsion integrals

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-01

We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs, on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which has gained some attention in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. At the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show good accuracy and indicate that the approach is not limited to small systems.

  7. Atomic-batched tensor decomposed two-electron repulsion integrals.

    PubMed

    Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove

    2017-04-07

We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs, on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which has gained some attention in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. At the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show good accuracy and indicate that the approach is not limited to small systems.

  8. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier.

    PubMed

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-11-10

Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost of power systems. Because the training samples and machine fault types available for HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types that lack training samples as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Diagnostic experiments are conducted on a real SF₆ HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods.
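The feature-construction step described above can be sketched as follows; the VMD stage is replaced here by a stand-in bank of noisy sinusoidal "IMFs", since only the local-singular-value feature extraction is being illustrated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the VMD output: an IMF matrix of 4 modes x 1024 samples.
# (A real pipeline would obtain these modes by variational mode
# decomposition of the measured vibration signal.)
n_modes, n_samples = 4, 1024
t = np.arange(n_samples) / n_samples
imfs = np.vstack([np.sin(2 * np.pi * 50 * (k + 1) * t) for k in range(n_modes)])
imfs = imfs + 0.1 * rng.standard_normal(imfs.shape)

def lsv_features(imf_matrix, n_blocks=8):
    """Split the IMF matrix column-wise into submatrices and return the
    largest singular value of each, mirroring the paper's local-singular-value
    feature construction."""
    blocks = np.array_split(imf_matrix, n_blocks, axis=1)
    return np.array([np.linalg.svd(b, compute_uv=False)[0] for b in blocks])

features = lsv_features(imfs)   # feature vector fed to the OCSVM/SVM stack
```

The resulting 8-element vector would then be passed to the two OCSVM layers and the SVM; those classifiers are outside the scope of this sketch.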

  9. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier

    PubMed Central

    Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang

    2016-01-01

Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost of power systems. Because the training samples and machine fault types available for HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types that lack training samples as either a normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular values of each submatrix are selected as the feature vectors for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal or fault conditions with known or unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Diagnostic experiments are conducted on a real SF6 HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods. PMID:27834902

  10. Evaluation of glioblastomas and lymphomas with whole-brain CT perfusion: Comparison between a delay-invariant singular-value decomposition algorithm and a Patlak plot.

    PubMed

    Hiwatashi, Akio; Togao, Osamu; Yamashita, Koji; Kikuchi, Kazufumi; Yoshimoto, Koji; Mizoguchi, Masahiro; Suzuki, Satoshi O; Yoshiura, Takashi; Honda, Hiroshi

    2016-07-01

Correction of contrast leakage is recommended in the perfusion analysis of enhancing lesions. The purpose of this study was to assess the diagnostic performance of computed tomography perfusion (CTP) with a delay-invariant singular-value decomposition algorithm (SVD+) and a Patlak plot in differentiating glioblastomas from lymphomas. This prospective study included 17 adult patients (12 men and 5 women) with pathologically proven glioblastomas (n=10) and lymphomas (n=7). CTP data were analyzed using SVD+ and a Patlak plot. The relative tumor blood volume and flow compared to contralateral normal-appearing gray matter (rCBV and rCBF derived from SVD+, and rBV and rFlow derived from the Patlak plot) were used to differentiate between glioblastomas and lymphomas. The Mann-Whitney U test and receiver operating characteristic (ROC) analyses were used for statistical analysis. Glioblastomas showed significantly higher rFlow (3.05±0.49, mean±standard deviation) than lymphomas (1.56±0.53; P<0.05). There were no statistically significant differences between glioblastomas and lymphomas in rBV (2.52±1.57 vs. 1.03±0.51; P>0.05), rCBF (1.38±0.41 vs. 1.29±0.47; P>0.05), or rCBV (1.78±0.47 vs. 1.87±0.66; P>0.05). ROC analysis showed the best diagnostic performance with rFlow (Az=0.871), followed by rBV (Az=0.771), rCBF (Az=0.614), and rCBV (Az=0.529). CTP analysis with a Patlak plot was helpful in differentiating between glioblastomas and lymphomas, but CTP analysis with SVD+ was not. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  11. A Partial Least-Squares Analysis of Health-Related Quality-of-Life Outcomes After Aneurysmal Subarachnoid Hemorrhage.

    PubMed

    Young, Julia M; Morgan, Benjamin R; Mišić, Bratislav; Schweizer, Tom A; Ibrahim, George M; Macdonald, R Loch

    2015-12-01

    Individuals who have aneurysmal subarachnoid hemorrhages (SAHs) experience decreased health-related qualities of life (HRQoLs) that persist after the primary insult. To identify clinical variables that concurrently associate with HRQoL outcomes by using a partial least-squares approach, which has the distinct advantage of explaining multidimensional variance where predictor variables may be highly collinear. Data collected from the CONSCIOUS-1 trial was used to extract 29 clinical variables including SAH presentation, hospital procedures, and demographic information in addition to 5 HRQoL outcome variables for 256 individuals. A partial least-squares analysis was performed by calculating a heterogeneous correlation matrix and applying singular value decomposition to determine components that best represent the correlations between the 2 sets of variables. Bootstrapping was used to estimate statistical significance. The first 2 components accounting for 81.6% and 7.8% of the total variance revealed significant associations between clinical predictors and HRQoL outcomes. The first component identified associations between disability in self-care with longer durations of critical care stay, invasive intracranial monitoring, ventricular drain time, poorer clinical grade on presentation, greater amounts of cerebral spinal fluid drainage, and a history of hypertension. The second component identified associations between disability due to pain and discomfort as well as anxiety and depression with greater body mass index, abnormal heart rate, longer durations of deep sedation and critical care, and higher World Federation of Neurosurgical Societies and Hijdra scores. By applying a data-driven, multivariate approach, we identified robust associations between SAH clinical presentations and HRQoL outcomes. 
Abbreviations: EQ-VAS, EuroQoL visual analog scale; HRQoL, health-related quality of life; ICU, intensive care unit; IVH, intraventricular hemorrhage; PLS, partial least squares; SAH, subarachnoid hemorrhage; SVD, singular value decomposition; WFNS, World Federation of Neurosurgical Societies.
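A minimal sketch of the PLS machinery the study describes, under simplifying assumptions: Pearson correlations stand in for the heterogeneous correlation matrix, the data are synthetic, and the bootstrap significance step is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-ins: 256 patients, 29 clinical predictors, 5 HRQoL outcomes.
n, p, q = 256, 29, 5
X = rng.standard_normal((n, p))
Y = 0.5 * X[:, :q] + rng.standard_normal((n, q))   # planted X-Y association

# Column-standardize, form the cross-correlation matrix, and apply SVD:
# each singular triplet is one PLS component linking predictors to outcomes.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
Yz = (Y - Y.mean(axis=0)) / Y.std(axis=0)
R = Xz.T @ Yz / (n - 1)                            # p x q correlation matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Proportion of the summed squared singular values carried by each component,
# analogous to the 81.6% / 7.8% figures quoted for the first two components.
explained = s ** 2 / np.sum(s ** 2)
```

Columns of U and rows of Vt are the predictor and outcome saliences; in the study these would be assessed for stability by bootstrapping.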

  12. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

Big dimensional data is a growing trend that is emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality impedes the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results demonstrate the superior generalization performance and efficiency of FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Complex mode indication function and its applications to spatial domain parameter estimation

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H[H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are used in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
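The CMIF computation itself is compact; the following sketch (with an invented two-mode FRF matrix, so all numbers are illustrative) computes the squared singular values of H(jω) at every spectral line and reads off a damped natural frequency from the peak of the leading curve:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic FRF matrix H(jw): 6 outputs x 2 references over 400 spectral
# lines, built from two lightly damped modes (values are illustrative).
freqs = np.linspace(1.0, 100.0, 400)
w = 2 * np.pi * freqs
modes = [(2 * np.pi * 30.0, 0.01), (2 * np.pi * 70.0, 0.02)]  # (wn, zeta)
phi = rng.standard_normal((6, len(modes)))    # mode shapes
part = rng.standard_normal((len(modes), 2))   # modal participation factors
H = np.zeros((400, 6, 2), dtype=complex)
for k, (wn, zeta) in enumerate(modes):
    gk = 1.0 / (wn ** 2 - w ** 2 + 2j * zeta * wn * w)   # modal FRF
    H += gk[:, None, None] * (phi[:, [k]] @ part[[k], :])[None, :, :]

# CMIF: squared singular values of H(jw) at every spectral line.  Peaks of
# the leading curve indicate damped natural frequencies; a second curve
# peaking at the same line would reveal a repeated root.
cmif = np.array([np.linalg.svd(Hf, compute_uv=False) ** 2 for Hf in H])

peak_line = cmif[:, 0].argmax()
peak_freq = freqs[peak_line]   # falls near one of the modal frequencies
```

Plotting both columns of `cmif` against `freqs` on a log scale reproduces the familiar CMIF curves used for mode counting.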

  14. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

Applying binaural simulation techniques to structural acoustic data can be computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
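The SVD part of reduction (i) can be sketched as a low-rank factorization of the HRTF filter bank; the HRTF set below is synthetic, and the retained rank r is a hypothetical choice, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in HRTF set: 128-tap impulse responses for 200 source directions.
# Real HRTFs vary smoothly across direction, so we synthesize correlated
# rows from a few underlying filter shapes.
n_dirs, n_taps = 200, 128
basis = rng.standard_normal((4, n_taps))
weights = rng.standard_normal((n_dirs, 4))
hrtfs = weights @ basis + 0.01 * rng.standard_normal((n_dirs, n_taps))

# SVD reduction: keep r dominant components.  Instead of running n_dirs
# real-time filters, one runs r filters (rows of Vt) and mixes their
# outputs with per-direction scalar gains (U * s) -- the source of the
# computational savings.
U, s, Vt = np.linalg.svd(hrtfs, full_matrices=False)
r = 4
hrtfs_lowrank = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = np.linalg.norm(hrtfs - hrtfs_lowrank) / np.linalg.norm(hrtfs)
```

The relative reconstruction error indicates how faithfully the reduced filter bank reproduces the full HRTF set.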

  15. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content.
All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) CSS/PCC can be more awkward to use because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.

  16. Waveform inversion for orthorhombic anisotropy with P waves: feasibility and resolution

    NASA Astrophysics Data System (ADS)

    Kazei, Vladimir; Alkhalifah, Tariq

    2018-05-01

    Various parametrizations have been suggested to simplify inversions of first arrivals, or P waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P waves. These parameters are different from the six parameters needed to describe the kinematics of P waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios and data bandwidths allows us to quantify the resolution of different parametrizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic and orthorhombic) in hierarchical parametrization is the best choice. Hierarchical parametrization reduces the trade-off between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parametrization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parametrizations can be used to ascertain the set of parameters that can be resolved.

  17. A Higher-Order Generalized Singular Value Decomposition for Comparison of Global mRNA Expression from Multiple Organisms

    PubMed Central

    Ponnapalli, Sri Priya; Saunders, Michael A.; Van Loan, Charles F.; Alter, Orly

    2011-01-01

The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define a higher-order GSVD (HO GSVD) for N≥2 matrices Di, each with full column rank. Each matrix is exactly factored as Di = UiΣiV^T, where V, identical in all factorizations, is obtained from the eigensystem SV = VΛ of the arithmetic mean S of all pairwise quotients AiAj^-1 of the matrices Ai = Di^TDi, i≠j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The matrix S is nondefective with V and Λ real. Its eigenvalues satisfy λk≥1. Equality holds if and only if the corresponding eigenvector vk is a right basis vector of equal significance in all matrices Di and Dj, that is σi,k/σj,k = 1 for all i and j, and the corresponding left basis vector ui,k is orthogonal to all other vectors in Ui for all i. The eigenvalues λk = 1, therefore, define the “common HO GSVD subspace.” We illustrate the HO GSVD with a comparison of genome-scale cell-cycle mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly classified. PMID:22216090
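A small numerical sketch of the construction described in the abstract; random matrices stand in for the expression datasets, and the normalization of S follows my reading of "arithmetic mean of all pairwise quotients":

```python
import numpy as np

rng = np.random.default_rng(5)

# Three data matrices with different row dimensions but the same 4 columns,
# standing in for the organisms' expression matrices (full column rank).
D = [rng.standard_normal((m, 4)) for m in (30, 50, 70)]
N = len(D)

# HO GSVD construction per the abstract: S is the arithmetic mean of all
# pairwise quotients (Ai Aj^-1 + Aj Ai^-1)/2 with Ai = Di^T Di, and the
# shared factor V solves the eigensystem S V = V Lambda.
A = [Di.T @ Di for Di in D]
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
S = sum((A[i] @ np.linalg.inv(A[j]) + A[j] @ np.linalg.inv(A[i])) / 2
        for i, j in pairs) / len(pairs)
lam, V = np.linalg.eig(S)
lam, V = lam.real, V.real            # S is nondefective with real spectrum

# Per-matrix factors: Di = Ui Sigma_i V^T, so Bi = Di V^-T = Ui Sigma_i,
# and the higher-order generalized singular values are the column norms.
Vinv_T = np.linalg.inv(V).T
sigma = [np.linalg.norm(Di @ Vinv_T, axis=0) for Di in D]
```

Eigenvalues near 1 would flag directions of the "common HO GSVD subspace"; for these unrelated random matrices, all eigenvalues simply satisfy the λk ≥ 1 bound stated in the abstract.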

  18. Time-Frequency Analysis And Pattern Recognition Using Singular Value Decomposition Of The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Lovell, Brian; White, Langford

    1988-01-01

    Time-Frequency analysis based on the Wigner-Ville Distribution (WVD) is shown to be optimal for a class of signals where the variation of instantaneous frequency is the dominant characteristic. Spectral resolution and instantaneous frequency tracking is substantially improved by using a Modified WVD (MWVD) based on an Autoregressive spectral estimator. Enhanced signal-to-noise ratio may be achieved by using 2D windowing in the Time-Frequency domain. The WVD provides a tool for deriving descriptors of signals which highlight their FM characteristics. These descriptors may be used for pattern recognition and data clustering using the methods presented in this paper.

  19. Protein sequence comparison based on K-string dictionary.

    PubMed

    Yu, Chenglong; He, Rong L; Yau, Stephen S-T

    2013-10-25

    The current K-string-based protein sequence comparisons require large amounts of computer memory because the dimension of the protein vector representation grows exponentially with K. In this paper, we propose a novel concept, the "K-string dictionary", to solve this high-dimensional problem. It allows us to use a much lower dimensional K-string-based frequency or probability vector to represent a protein, and thus significantly reduce the computer memory requirements for their implementation. Furthermore, based on this new concept, we use Singular Value Decomposition to analyze real protein datasets, and the improved protein vector representation allows us to obtain accurate gene trees. © 2013.

  20. Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.

    PubMed

    Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu

    2018-03-10

    This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.

  1. Noise Equalization for Ultrafast Plane Wave Microvessel Imaging.

    PubMed

    Song, Pengfei; Manduca, Armando; Trzasko, Joshua D; Chen, Shigao

    2017-11-01

    Ultrafast plane wave microvessel imaging significantly improves ultrasound Doppler sensitivity by increasing the number of Doppler ensembles that can be collected within a short period of time. The rich spatiotemporal plane wave data also enable more robust clutter filtering based on singular value decomposition. However, due to the lack of transmit focusing, plane wave microvessel imaging is very susceptible to noise. This paper was designed to: 1) study the relationship between ultrasound system noise (primarily time gain compensation induced) and microvessel blood flow signal and 2) propose an adaptive and computationally cost-effective noise equalization method that is independent of hardware or software imaging settings to improve microvessel image quality.

  2. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states.

    PubMed

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
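A randomized low-rank factorization of the Halko-Martinsson-Tropp type, which the record's "randomized matrix factorization" presumably resembles, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(7)

def randomized_svd(A, rank, n_oversample=8, n_iter=2):
    """Randomized low-rank factorization: sketch the range of A with a
    Gaussian test matrix, orthonormalize it, and run a small exact SVD on
    the projected matrix."""
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))
    Y = A @ Omega
    for _ in range(n_iter):          # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                      # small (rank + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Low-rank test matrix, loosely standing in for a TEBD/DMRG bond tensor.
A = rng.standard_normal((400, 20)) @ rng.standard_normal((20, 300))
U, s, Vt = randomized_svd(A, rank=20)
rel_err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

Because the exact SVD runs only on the small projected matrix B, the cost scales with the target rank rather than the full matrix dimensions, which is where the reported bond-dimension speedups come from.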

  3. Method for discovering relationships in data by dynamic quantum clustering

    DOEpatents

    Weinstein, Marvin; Horn, David

    2017-05-09

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  4. Method for discovering relationships in data by dynamic quantum clustering

    DOEpatents

    Weinstein, Marvin; Horn, David

    2014-10-28

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  5. Detection and identification of concealed weapons using matrix pencil

    NASA Astrophysics Data System (ADS)

    Adve, Raviraj S.; Thayaparan, Thayananthan

    2011-06-01

The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing the use of an effective approach to obtain the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, hence providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
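A minimal Matrix Pencil sketch on synthetic data (the resonances below are invented, not the paper's weapons signatures) shows how resonant frequencies fall out as eigenvalues of a pencil built from two shifted Hankel matrices, with the SVD supplying the noise-robust rank truncation the record alludes to:

```python
import numpy as np

# Synthetic measurement: two damped complex exponentials.
fs = 1000.0
n = np.arange(200)
z_true = (np.exp((-5.0 + 2j * np.pi * 120.0) / fs),
          np.exp((-12.0 + 2j * np.pi * 260.0) / fs))
y = 1.0 * z_true[0] ** n + 0.6 * z_true[1] ** n

# Matrix Pencil: build a Hankel data matrix and split it into the pencil
# (Y1, Y2); the nonzero eigenvalues of pinv(Y1) @ Y2 are the signal poles.
L = 80                                         # pencil parameter
Y = np.array([y[i:i + L + 1] for i in range(len(y) - L)])
Y1, Y2 = Y[:, :-1], Y[:, 1:]

# SVD-based rank truncation: the singular values reveal the model order
# and make the pseudoinverse robust to noise.
U, s, Vt = np.linalg.svd(Y1, full_matrices=False)
M = int(np.sum(s > 1e-8 * s[0]))               # model order from singular values
Y1_pinv = Vt[:M].conj().T @ np.diag(1.0 / s[:M]) @ U[:, :M].conj().T
poles = np.linalg.eigvals(Y1_pinv @ Y2)
poles = poles[np.argsort(-np.abs(poles))][:M]  # keep the M signal poles

freqs_hz = np.sort(np.abs(np.angle(poles)) * fs / (2 * np.pi))
```

The pole magnitudes carry the damping, and the residues (recoverable by a least-squares fit of y to the poles) give the amplitude information mentioned in the abstract.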

  6. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

The deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The PWI application to stroke and brain tumor studies has become a standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
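The SVD-based deconvolution that oSVD builds on can be sketched in a few lines; the arterial input function, residue function, noise level, and truncation threshold below are toy stand-ins, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
t = np.arange(n, dtype=float)
aif = t * np.exp(-t / 4.0)                  # toy arterial input function
r_true = np.exp(-t / 10.0)                  # toy tissue residue function

# Lower-triangular convolution matrix: c = A @ r
A = np.zeros((n, n))
for i in range(n):
    A[i, :i + 1] = aif[i::-1]
c = A @ r_true + 0.01 * rng.normal(size=n)  # noisy tissue concentration curve

# Truncated-SVD deconvolution: zero out small singular values before inverting
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.1 * s[0], 1.0 / s, 0.0)
r_est = Vt.T @ (s_inv * (U.T @ c))
```

The truncation threshold plays the regularizing role that the oscillation criterion plays in oSVD.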

  7. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    NASA Astrophysics Data System (ADS)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
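A randomized low-rank factorization of the kind the abstract refers to can be sketched via the standard range-finder construction (toy matrix, illustrative rank and oversampling; not the authors' TEBD/DMRG code):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 200, 150, 10
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # exactly rank-k test matrix

# Randomized range finder: sample the range of A through a random projection
p = 5                                    # oversampling
Omega = rng.normal(size=(n, k + p))
Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis capturing range(A)

# Exact SVD of the much smaller projected matrix, lifted back up
B = Q.T @ A
Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ Ub

A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]     # rank-k approximation of A
```

The cost is dominated by the two small matrix products rather than a full SVD of A, which is the source of the reported speedups.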

  8. An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Pappa, R. S.

    1985-01-01

A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum-order realization, which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
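The Hankel-SVD core of ERA can be illustrated on a toy single-output damped oscillator (all values and the model order are illustrative; the full algorithm's accuracy indicators and input/output matrices are omitted):

```python
import numpy as np

dt = 0.1
wd, zeta_w = 2.0, 0.05                      # damped frequency and decay rate (toy)
k = np.arange(60)
y = np.exp(-zeta_w * dt * k) * np.sin(wd * dt * k)   # impulse-response samples

# Shifted block-Hankel matrices of the Markov parameters
r = 20
H0 = np.array([[y[i + j + 1] for j in range(r)] for i in range(r)])
H1 = np.array([[y[i + j + 2] for j in range(r)] for i in range(r)])

# SVD of H0 yields the minimum-order realization (order 2 for one mode)
U, s, Vt = np.linalg.svd(H0)
ns = 2
S_half = np.sqrt(s[:ns])
A_hat = (U[:, :ns] / S_half).T @ H1 @ (Vt[:ns, :].T / S_half)

# Pole magnitudes recovered from the identified discrete state matrix
freqs = np.abs(np.log(np.linalg.eigvals(A_hat)) / dt)
```

For noise-free data the Hankel matrix has rank exactly two, so the singular value spectrum itself reveals the model order.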

  9. S-matrix decomposition, natural reaction channels, and the quantum transition state approach to reactive scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de; Ellerbrock, Roman, E-mail: roman.ellerbrock@uni-bielefeld.de

    2016-05-28

A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated by studying a prototypical example, the H + CH₄ → H₂ + CH₃ reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.

  10. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  11. Vacuum Magnetic Field Mapping of the Compact Toroidal Hybrid (CTH)

    NASA Astrophysics Data System (ADS)

    Peterson, J. T.; Hanson, J.; Hartwell, G. J.; Knowlton, S. F.; Montgomery, C.; Munoz, J.

    2007-11-01

Vacuum magnetic field mapping experiments are performed on the CTH torsatron with a movable electron gun and phosphor-coated screen or movable wand at two different toroidal locations. These experiments compare the experimentally measured magnetic configuration produced by the as-built coil set to the magnetic configuration simulated with the IFT Biot-Savart code using the measured coil set parameters. Efforts to minimize differences between the experimentally measured location of the magnetic axis and its predicted value utilizing a Singular Value Decomposition (SVD) process result in small modifications of the helical coil winding law used to model the vacuum magnetic field geometry of CTH. Because these studies are performed at relatively low fields (B = 0.01-0.05 T), a uniform ambient magnetic field is included in the minimization procedure.

  12. Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform

    PubMed Central

    Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart

    2014-01-01

Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploited the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved comparable reconstruction accuracy with the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331

  13. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).

    PubMed

    Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-05-16

A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from these left singular vectors, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: the useful signal is retained and noise filtered out by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain.

  14. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)

    PubMed Central

    Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-01-01

A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from these left singular vectors, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: the useful signal is retained and noise filtered out by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain. PMID:29772731

  15. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  16. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  17. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  18. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict the right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.

  19. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
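The SVD-based screens described above can be sketched as follows: the singular values of a candidate model matrix reveal both outright singularity and near-linear dependencies between terms (the regressors and the threshold below are illustrative, not the Ames algorithm's settings):

```python
import numpy as np

rng = np.random.default_rng(4)
x1 = rng.normal(size=50)
x2 = x1 + 1e-9 * rng.normal(size=50)     # near-linear dependency on x1

# Candidate model matrices: one well-posed, one nearly singular
X_ok = np.column_stack([np.ones(50), x1])
X_bad = np.column_stack([np.ones(50), x1, x2])

def condition_number(X):
    # ratio of largest to smallest singular value of the model matrix
    s = np.linalg.svd(X, compute_uv=False)
    return s[0] / s[-1]

# Reject candidate math models whose model matrix is singular or has
# near-linear dependencies between terms
THRESHOLD = 1e8
ok_accepted = condition_number(X_ok) < THRESHOLD
bad_accepted = condition_number(X_bad) < THRESHOLD
```

A huge singular value ratio flags the regression problem as numerically singular before any least-squares fit is attempted.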

  20. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    PubMed

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  1. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    PubMed Central

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D.; Joel, Suresh; Pekar, James J.; Mostofsky, Stewart H.; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD. PMID:22969709
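Among the feature-extraction tools listed in the two records above, the CUR decomposition expresses a matrix through a subset of its actual rows and columns, which keeps the factors interpretable. A minimal sketch (random low-rank toy data and uniform row/column sampling, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(8)
# Exactly rank-4 toy matrix, so a CUR built from enough rows/columns is exact
A = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 40))

cols = rng.choice(40, size=8, replace=False)   # sampled column indices
rows = rng.choice(60, size=8, replace=False)   # sampled row indices
C, R = A[:, cols], A[rows, :]
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # core matrix linking C and R
A_cur = C @ U @ R                              # CUR approximation of A
```

Because the factors C and R are genuine columns and rows of the data, they can be traced back to specific connectivity features, unlike the abstract directions an SVD produces.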

  2. The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons

    NASA Astrophysics Data System (ADS)

    Kweon, Jae Ryong

    2017-03-01

In this paper we study the compressible Stokes equations with no-slip boundary condition on non-convex polygons and show a best regularity result that the solution can have without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition u = w + ∇φ_R with div w = 0 and a potential φ_R. Here w is the solution of the incompressible Stokes problem and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at non-convex vertices.

  3. Seismic noise attenuation using an online subspace tracking algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
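The truncated SVD (TSVD) benchmark the method is compared against can be sketched in a few lines; the low-rank "section", noise level, and rank below are toy choices, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)
nt, nx, rank = 120, 80, 3
clean = rng.normal(size=(nt, rank)) @ rng.normal(size=(rank, nx))  # low-rank signal
noisy = clean + 0.5 * rng.normal(size=(nt, nx))                    # add random noise

# Truncated SVD: keep only the leading singular triplets
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

snr_before = np.linalg.norm(clean) / np.linalg.norm(noisy - clean)
snr_after = np.linalg.norm(clean) / np.linalg.norm(denoised - clean)
```

The online subspace tracker replaces the full SVD here with incremental updates, which is where the claimed factor-of-two cost saving comes from.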

  4. Sharp bounds for singular values of fractional integral operators

    NASA Astrophysics Data System (ADS)

    Burman, Prabir

    2007-03-01

From the results of Dostanic [M.R. Dostanic, Asymptotic behavior of the singular values of fractional integral operators, J. Math. Anal. Appl. 175 (1993) 380-391] and Vu and Gorenflo [Kim Tuan Vu, R. Gorenflo, Singular values of fractional and Volterra integral operators, in: Inverse Problems and Applications to Geophysics, Industry, Medicine and Technology, Ho Chi Minh City, 1995, Ho Chi Minh City Math. Soc., Ho Chi Minh City, 1995, pp. 174-185] it is known that the jth singular value of the fractional integral operator of order α > 0 is approximately (πj)^(−α) for all large j. In this note we refine this result by obtaining sharp bounds for the singular values and use these bounds to show that the jth singular value is (πj)^(−α)[1 + O(j^(−1))].
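These asymptotics can be checked numerically. For α = 1 the fractional integral operator is the Volterra operator (Vf)(x) = ∫₀ˣ f(t) dt on [0,1], whose jth singular value is 2/((2j−1)π) ≈ (πj)⁻¹; a crude discretization (n = 400 grid points and a rectangle rule, both illustrative choices) reproduces the largest singular value 2/π:

```python
import numpy as np

n = 400
h = 1.0 / n
V = np.tril(np.ones((n, n))) * h           # rectangle-rule discretization of V
s = np.linalg.svd(V, compute_uv=False)     # discrete singular values

largest_exact = 2.0 / np.pi                # known largest singular value of V
```

Finer grids shrink the O(h) discretization error, and the tail of `s` decays like 1/(πj), consistent with the (πj)^(−α) law quoted above.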

  5. Predicting responses from Rasch measures.

    PubMed

    Linacre, John M

    2010-01-01

There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.

  6. Exploring the Common Dynamics of Homologous Proteins. Application to the Globin Family

    PubMed Central

    Maguid, Sandra; Fernandez-Alberti, Sebastian; Ferrelli, Leticia; Echave, Julian

    2005-01-01

    We present a procedure to explore the global dynamics shared between members of the same protein family. The method allows the comparison of patterns of vibrational motion obtained by Gaussian network model analysis. After the identification of collective coordinates that were conserved during evolution, we quantify the common dynamics within a family. Representative vectors that describe these dynamics are defined using a singular value decomposition approach. As a test case, the globin heme-binding family is considered. The two lowest normal modes are shown to be conserved within this family. Our results encourage the development of models for protein evolution that take into account the conservation of dynamical features. PMID:15749782

  7. Letters: Noise Equalization for Ultrafast Plane Wave Microvessel Imaging

    PubMed Central

    Song, Pengfei; Manduca, Armando; Trzasko, Joshua D.

    2017-01-01

    Ultrafast plane wave microvessel imaging significantly improves ultrasound Doppler sensitivity by increasing the number of Doppler ensembles that can be collected within a short period of time. The rich spatiotemporal plane wave data also enables more robust clutter filtering based on singular value decomposition (SVD). However, due to the lack of transmit focusing, plane wave microvessel imaging is very susceptible to noise. This study was designed to: 1) study the relationship between ultrasound system noise (primarily time gain compensation-induced) and microvessel blood flow signal; 2) propose an adaptive and computationally cost-effective noise equalization method that is independent of hardware or software imaging settings to improve microvessel image quality. PMID:28880169

  8. Continuous analogues of matrix factorizations

    PubMed Central

    Townsend, Alex; Trefethen, Lloyd N.

    2015-01-01

    Analogues of singular value decomposition (SVD), QR, LU and Cholesky factorizations are presented for problems in which the usual discrete matrix is replaced by a ‘quasimatrix’, continuous in one dimension, or a ‘cmatrix’, continuous in both dimensions. Two challenges arise: the generalization of the notions of triangular structure and row and column pivoting to continuous variables (required in all cases except the SVD, and far from obvious), and the convergence of the infinite series that define the cmatrix factorizations. Our generalizations of triangularity and pivoting are based on a new notion of a ‘triangular quasimatrix’. Concerning convergence of the series, we prove theorems asserting convergence provided the functions involved are sufficiently smooth. PMID:25568618

  9. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of the nonlinear weights with singular value decomposition (SVD) computation of the linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture, where the linear to nonlinear parameter ratio is large.
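The linear half of such a hybrid scheme reduces to an SVD-based least-squares solve for the output-layer weights. A minimal RBF sketch (fixed toy centres and width standing in for the gradient-trained nonlinear parameters; `np.linalg.lstsq` uses an SVD internally):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 200)[:, None]
centres = np.linspace(-3.0, 3.0, 10)       # fixed (toy) nonlinear parameters
width = 0.8
Phi = np.exp(-((x - centres) ** 2) / (2 * width ** 2))   # hidden-layer outputs
y = np.sin(x[:, 0])                                      # target function

# SVD-based least squares for the linear output-layer weights
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

In the full hybrid routine this linear solve is interleaved with gradient updates of the centres and widths, so each gradient step always sees optimal linear weights.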

  10. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
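The iteration described above can be sketched as follows; the toy exponential model y = a·exp(−b·x) stands in for the camera calibration equations, the Jacobian is derived analytically as in the paper, and `np.linalg.pinv` computes the pseudo-inverse via the singular value decomposition:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = 2.0 * np.exp(-0.5 * x)              # data generated with a = 2, b = 0.5

def residual(p):
    # residuals of the overdetermined nonlinear system
    return p[0] * np.exp(-p[1] * x) - y_obs

def jacobian(p):
    # analytically derived Jacobian of the residuals
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])

p = np.array([2.0, 0.4])                    # initial guess
for _ in range(20):
    # Newton-Raphson step in the least-squares sense:
    # pinv forms the SVD-based pseudo-inverse of the Jacobian
    p = p - np.linalg.pinv(jacobian(p)) @ residual(p)
```

Because the system is consistent, the iteration converges to the generating parameters (2, 0.5).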

  11. A miniaturized near infrared spectrometer for non-invasive sensing of bio-markers as a wearable healthcare solution

    NASA Astrophysics Data System (ADS)

    Bae, Jungmok; Druzhin, Vladislav V.; Anikanov, Alexey G.; Afanasyev, Sergey V.; Shchekin, Alexey; Medvedev, Anton S.; Morozov, Alexander V.; Kim, Dongho; Kim, Sang Kyu; Moon, Hyunseok; Jang, Hyeongseok; Shim, Jaewook; Park, Jongae

    2017-02-01

    A novel miniaturized near-infrared spectrometer, readily mountable on wearable devices for continuous monitoring of an individual's key bio-markers, is proposed. The spectrum is measured by sequential illumination with LEDs having independent spectral profiles and continuous detection of the light returned from the skin tissue with a single-cell photodiode. Based on Tikhonov regularization with singular value decomposition, a spectral resolution of better than 10 nm was reconstructed from experimentally measured LED profiles. A prototype covering the first overtone band (1500-1800 nm), where bio-markers have pronounced absorption peaks, was fabricated and its performance verified. The reconstructed spectra show that the novel concept of a miniaturized spectrometer is valid.
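
Tikhonov regularization via SVD filter factors, as used for the spectrum reconstruction above, can be sketched as follows; the LED profile matrix here is a set of hypothetical Gaussian bands, not the measured profiles:

```python
import numpy as np

wav = np.linspace(1500.0, 1800.0, 200)               # wavelength grid, nm
centres = np.linspace(1520.0, 1780.0, 12)
A = np.exp(-((wav[None, :] - centres[:, None]) ** 2) / (2 * 40.0 ** 2))  # one LED per row

x_true = np.exp(-((wav - 1650.0) ** 2) / (2 * 15.0 ** 2))  # narrow absorption feature
y = A @ x_true + 1e-4 * np.random.default_rng(1).standard_normal(len(centres))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
lam = 1e-2                                           # regularization parameter
f = s / (s ** 2 + lam ** 2)                          # Tikhonov filter factors
x_rec = Vt.T @ (f * (U.T @ y))                       # regularized minimum-norm estimate

# The regularized residual can never exceed that of the trivial zero solution.
print(np.linalg.norm(A @ x_rec - y) < np.linalg.norm(y))
```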

  12. Electronics and Algorithms for HOM Based Beam Diagnostics

    NASA Astrophysics Data System (ADS)

    Frisch, Josef; Baboi, Nicoleta; Eddy, Nathan; Nagaitsev, Sergei; Hensler, Olaf; McCormick, Douglas; May, Justin; Molloy, Stephen; Napoly, Olivier; Paparella, Rita; Petrosyan, Lyudvig; Ross, Marc; Simon, Claire; Smith, Tonee

    2006-11-01

    The signals from the Higher Order Mode (HOM) ports on superconducting cavities can be used as beam position monitors and to survey structure alignment. A HOM-based diagnostic system has been installed to instrument both couplers on each of the 40 cryogenic accelerating structures in the DESY TTF2 linac. The electronics uses a single-stage down-conversion from the 1.7 GHz HOM spectral line to a 20 MHz IF, which is then digitized. The electronics is based on low-cost surface-mount components suitable for large-scale production. The analysis of the HOM data is based on Singular Value Decomposition. The response of the HOM modes is calibrated against conventional BPMs.

  13. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  14. Selecting appropriate singular values of transmission matrix to improve precision of incident wavefront retrieval

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Du, Jinglei

    2018-06-01

    A method of selecting appropriate singular values of the transmission matrix to improve the precision of incident wavefront retrieval in focusing light through scattering media is proposed. The optimal singular values selected by this method effectively reduce the degree of ill-conditioning of the transmission matrix, which indicates that the incident wavefront retrieved from the optimal set of singular values is more accurate than that retrieved from other sets of singular values. The validity of this method is verified by numerical simulation and actual measurements of the incident wavefront of coherent light through ground glass.
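
The underlying idea of retaining only well-conditioned singular values can be illustrated with a synthetic transmission matrix; the decay profile, noise level, and truncation rank below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
scales = 10.0 ** (-np.arange(32) / 4.0)            # rapidly decaying column scales
T = rng.standard_normal((64, 32)) * scales         # severely ill-conditioned matrix
x_true = rng.standard_normal(32)
t_meas = T @ x_true + 1e-2 * rng.standard_normal(64)   # noisy measurement

U, s, Vt = np.linalg.svd(T, full_matrices=False)

def retrieve(k):
    """Pseudo-inverse built from the k largest singular values only."""
    return Vt[:k].T @ ((U[:, :k].T @ t_meas) / s[:k])

err_full = np.linalg.norm(retrieve(32) - x_true)   # weak values amplify the noise
err_trunc = np.linalg.norm(retrieve(8) - x_true)   # truncation suppresses it
print(err_trunc < err_full)
```

Truncation trades a small bias (the discarded components of the wavefront) for a large reduction in noise amplification whenever the weakest singular values fall below the noise level.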

  15. Modal analysis of 2-D sedimentary basin from frequency domain decomposition of ambient vibration array recordings

    NASA Astrophysics Data System (ADS)

    Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat

    2015-01-01

    Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. The method is advantageous in that it retrieves not only the resonance frequencies of the investigated structure but also the corresponding modal shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonance frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
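
The core FDD computation can be sketched on synthetic array data: estimate the cross-power spectral density matrix by averaged periodograms, then take the leading singular vector at a spectral peak as the mode shape. The sensor count, mode shapes, and frequencies below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n, nch = 256, 4096, 4
t = np.arange(n) / fs
shape1 = np.array([1.0, 0.8, 0.5, 0.2])    # hypothetical fundamental mode shape
shape2 = np.array([1.0, 0.2, -0.5, -0.9])  # hypothetical second mode shape
x = (np.outer(np.sin(2 * np.pi * 5.0 * t), shape1)
     + 0.5 * np.outer(np.sin(2 * np.pi * 12.0 * t), shape2)
     + 0.05 * rng.standard_normal((n, nch)))

# Welch-style averaged estimate of the cross-power spectral density matrix G(f)
seg = 512
window = np.hanning(seg)
freqs = np.fft.rfftfreq(seg, 1 / fs)
G = np.zeros((len(freqs), nch, nch), dtype=complex)
for start in range(0, n - seg + 1, seg // 2):
    X = np.fft.rfft(x[start:start + seg] * window[:, None], axis=0)
    G += X[:, :, None] * X[:, None, :].conj()

k = np.argmin(np.abs(freqs - 5.0))          # bin at the first resonance
U, s, Vt = np.linalg.svd(G[k])
mode = U[:, 0]                               # leading singular vector = mode shape
mode = np.real(mode * np.exp(-1j * np.angle(mode[0])))  # fix the arbitrary phase
mode = mode / mode[0]                        # scale so the first sensor reads 1
print(np.round(mode, 2))                     # close to shape1
```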

  16. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based upon resolving joint velocities using reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify reciprocal screw quantities used in the resolution of a particular joint rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations of the manipulator, and identifying singular configurations and their characteristics. Two functions are therefore served: a new general method for the solution of the inverse velocity problem is presented; and complete analytical expressions are derived for the resolution of the joint rates of a seven degree of freedom manipulator useful for telerobotic and industrial robotic application.

  17. SU-E-T-610: Phosphor-Based Fiber Optic Probes for Proton Beam Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darafsheh, A; Soldner, A; Liu, H

    2015-06-15

    Purpose: To investigate the feasibility of using fiber optic probes with rare-earth-based phosphor tips for proton beam radiation dosimetry. We designed and fabricated a fiber probe with submillimeter resolution (<0.5 mm3) based on TbF3 phosphors and evaluated its performance for measurement of proton beam profiles and range. Methods: The fiber optic probe with a TbF3 phosphor tip, embedded in tissue-mimicking phantoms, was irradiated with a double-scattering proton beam with an energy of 180 MeV. Luminescence spectroscopy was performed with a CCD-coupled spectrograph to analyze the emission spectra of the fiber tip. In order to measure the spatial beam profile and percentage depth dose, we used a singular value decomposition method to spectrally separate the phosphor's ionoluminescence signal from the background Cerenkov radiation signal. Results: The spectra of the TbF3 fiber probe showed characteristic ionoluminescence emission peaks at 489, 542, 586, and 620 nm. Using singular value decomposition, we found the contribution of the ionoluminescence signal, measured the percentage depth dose in phantoms, and compared it with measurements performed with an ion chamber. We observed a quenching effect in the spread-out Bragg peak region, manifested as under-response of the signal due to the high LET of the beam. However, the beam profiles were not dramatically affected by the quenching effect. Conclusion: We have evaluated the performance of a fiber optic probe with submillimeter resolution for proton beam dosimetry. We demonstrated the feasibility of spectrally separating the Cerenkov radiation from the collected signal. Such fiber probes can be used for measurements of proton beam profile and range. The experimental apparatus and spectroscopy method developed in this work provide a robust platform for characterization of proton-irradiated nanophosphor particles for ultralow-fluence photodynamic therapy or molecular imaging applications.
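
The spectral-separation step can be sketched as SVD-based least squares on a two-component basis; the phosphor and Cerenkov basis shapes below are invented approximations (Gaussian peaks at the quoted Tb emission lines and a 1/λ²-like continuum), not measured spectra:

```python
import numpy as np

wav = np.linspace(400.0, 700.0, 301)
peaks = [489.0, 542.0, 586.0, 620.0]                 # Tb emission lines from the abstract
phosphor = sum(np.exp(-((wav - p) ** 2) / (2 * 5.0 ** 2)) for p in peaks)
cerenkov = (400.0 / wav) ** 2                        # hypothetical continuum shape

true_w = np.array([3.0, 1.5])
measured = true_w[0] * phosphor + true_w[1] * cerenkov

B = np.column_stack([phosphor, cerenkov])            # basis matrix, one component per column
w, *_ = np.linalg.lstsq(B, measured, rcond=None)     # SVD-based least-squares unmixing
print(np.round(w, 3))                                # recovers the mixing weights [3. 1.5]
```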

  18. Multi-tissue analysis of co-expression networks by higher-order generalized singular value decomposition identifies functionally coherent transcriptional modules.

    PubMed

    Xiao, Xiaolin; Moreno-Moral, Aida; Rotival, Maxime; Bottolo, Leonardo; Petretto, Enrico

    2014-01-01

    Recent high-throughput efforts such as ENCODE have generated a large body of genome-scale transcriptional data in multiple conditions (e.g., cell-types and disease states). Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms and pathological pathways. Yet, genome-scale network analysis across conditions is significantly hampered by the paucity of robust and computationally-efficient methods. Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network-modules simultaneously across multiple conditions. Our method can accommodate weighted (and unweighted) networks of any size and can similarly use co-expression or raw gene expression input data, without hinging upon the definition and stability of the correlation used to assess gene co-expression. In simulation studies, we demonstrated distinctive advantages of our method over existing approaches: it accurately recovered both common and condition-specific network-modules without entailing the ad-hoc input parameters required by other methods. We applied our method to genome-scale and multi-tissue transcriptomic datasets from rats (microarray-based) and humans (mRNA-sequencing-based) and identified several common and tissue-specific subnetworks with functional significance, which were not detected by other methods. In humans we recapitulated the crosstalk between cell-cycle progression and cell-extracellular matrix interactions processes in ventricular zones during neocortex expansion and further, we uncovered pathways related to development of later cognitive functions in the cortical plate of the developing brain which were previously unappreciated.
Analyses of seven rat tissues identified a multi-tissue subnetwork of co-expressed heat shock protein (Hsp) and cardiomyopathy genes (Bag3, Cryab, Kras, Emd, Plec), which was significantly replicated using separate failing heart and liver gene expression datasets in humans, thus revealing a conserved functional role for Hsp genes in cardiovascular disease.

  19. Hybrid method based on singular value decomposition and embedded zero tree wavelet technique for ECG signal compression.

    PubMed

    Kumar, Ranjeet; Kumar, A; Singh, G K

    2016-06-01

    In the biomedical field, it is often necessary to reduce data volume because of the limited storage of real-time ambulatory and telemedicine systems, which has long motivated the search for efficient and simple compression techniques. This paper presents an algorithm based on singular value decomposition (SVD) and embedded zero-tree wavelet (EZW) techniques for ECG signal compression, which deals with the large data volumes of ambulatory systems. The proposed method uses a low-rank matrix for initial compression of a two-dimensional (2-D) ECG data array using SVD, and then applies EZW for final compression. Constructing the 2-D array is the key pre-processing issue; three different beat segmentation approaches are exploited for 2-D array construction using segmented beat alignment that exploits beat correlation. The proposed algorithm was tested on the MIT-BIH arrhythmia records and found to be very efficient in compressing different types of ECG signal with low distortion under several fidelity assessments. The evaluation results show that the proposed algorithm achieves a compression ratio of 24.25:1 with excellent reconstruction quality, a percentage root-mean-square difference (PRD) of 1.89% for ECG record 100, and consumes only 162 bps instead of 3960 bps for the uncompressed data. The proposed method is efficient and flexible across different types of ECG signal, and controls the quality of reconstruction. The simulation results clearly illustrate that the proposed method can play a large role in saving storage in health data centres as well as bandwidth in telemedicine-based healthcare systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
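
The initial SVD stage can be sketched as a low-rank approximation of a 2-D array of aligned beats, with distortion measured as PRD; the synthetic beats, noise level, and rank below are invented stand-ins for MIT-BIH records:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 360)
template = (np.exp(-((t - 0.3) ** 2) / 0.001)
            - 0.3 * np.exp(-((t - 0.45) ** 2) / 0.01))   # synthetic "beat"
beats = np.array([template * (1 + 0.05 * rng.standard_normal()) for _ in range(64)])
beats += 0.005 * rng.standard_normal(beats.shape)        # measurement noise

U, s, Vt = np.linalg.svd(beats, full_matrices=False)
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]              # rank-2 reconstruction

prd = 100 * np.linalg.norm(beats - approx) / np.linalg.norm(beats)
ratio = beats.size / (k * (beats.shape[0] + beats.shape[1] + 1))
print(round(prd, 2), round(ratio, 1))                    # low PRD at a high ratio
```

Because aligned beats are highly correlated, a very small rank captures most of the energy; the EZW stage in the paper then compresses the retained factors further.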

  20. Causality analysis of leading singular value decomposition modes identifies rotor as the dominant driving normal mode in fibrillation

    NASA Astrophysics Data System (ADS)

    Biton, Yaacov; Rabinovitch, Avinoam; Braunstein, Doron; Aviram, Ira; Campbell, Katherine; Mironov, Sergey; Herron, Todd; Jalife, José; Berenfeld, Omer

    2018-01-01

    Cardiac fibrillation is a major clinical and societal burden. Rotors may drive fibrillation in many cases, but their role and patterns are often masked by complex propagation. We used Singular Value Decomposition (SVD), which ranks patterns of activation hierarchically, together with Wiener-Granger causality analysis (WGCA), which analyses direction of information among observations, to investigate the role of rotors in cardiac fibrillation. We hypothesized that combining SVD analysis with WGCA should reveal whether rotor activity is the dominant driving force of fibrillation even in cases of high complexity. Optical mapping experiments were conducted in neonatal rat cardiomyocyte monolayers (diameter, 35 mm), which were genetically modified to overexpress the delayed rectifier K+ channel IKr only in one half of the monolayer. Such monolayers have been shown previously to sustain fast rotors confined to the IKr overexpressing half and driving fibrillatory-like activity in the other half. SVD analysis of the optical mapping movies revealed a hierarchical pattern in which the primary modes corresponded to rotor activity in the IKr overexpressing region and the secondary modes corresponded to fibrillatory activity elsewhere. We then applied WGCA to evaluate the directionality of influence between modes in the entire monolayer using clear and noisy movies of activity. We demonstrated that the rotor modes influence the secondary fibrillatory modes, but influence was detected also in the opposite direction. To more specifically delineate the role of the rotor in fibrillation, we decomposed separately the respective SVD modes of the rotor and fibrillatory domains. In this case, WGCA yielded more information from the rotor to the fibrillatory domains than in the opposite direction. In conclusion, SVD analysis reveals that rotors can be the dominant modes of an experimental model of fibrillation. 
Wiener-Granger causality on modes of the rotor domains confirms their preferential driving influence on fibrillatory modes.

  1. Predicting domain-domain interaction based on domain profiles with feature selection and support vector machines

    PubMed Central

    2010-01-01

    Background Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. 
Datasets and source code are freely available on the web at http://liao.cis.udel.edu/pub/svdsvm. Implemented in Matlab and supported on Linux and MS Windows. PMID:21034480

  2. Application of matrix singular value properties for evaluating gain and phase margins of multiloop systems. [stability margins for wing flutter suppression and drone lateral attitude control

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.

    1982-01-01

    A stability margin evaluation method in terms of simultaneous gain and phase changes in all loops of a multiloop system is presented. A universal gain-phase margin evaluation diagram is constructed by generalizing an existing method using matrix singular value properties. Using this diagram and computing the minimum singular value of the system return difference matrix over the operating frequency range, regions of guaranteed stability margins can be obtained. Singular values are computed for a wing flutter suppression and a drone lateral attitude control problem. The numerical results indicate that this method predicts quite conservative stability margins. In the second example, if the eigenvalue magnitude is used instead of the singular value as a measure of nearness to singularity, more realistic stability margins are obtained. However, this relaxed measure generally cannot guarantee global stability.
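
The singular-value test described above can be sketched as a frequency sweep over the return difference matrix; the 2x2 loop transfer matrix below is hypothetical, not the flutter or drone model:

```python
import numpy as np

def loop_tf(jw):
    """Hypothetical 2x2 loop transfer matrix (stand-in for the real plant)."""
    return np.array([[10.0 / (jw * (jw + 1.0)), 1.0 / (jw + 2.0)],
                     [0.0, 5.0 / (jw * (jw + 2.0))]])

omegas = np.logspace(-2, 2, 400)
sigma_min = min(np.linalg.svd(np.eye(2) + loop_tf(1j * w), compute_uv=False).min()
                for w in omegas)
# sigma_min bounds the simultaneous gain perturbations the loops are guaranteed
# to tolerate: gains in [1/(1 + sigma_min), 1/(1 - sigma_min)] preserve stability.
print(0.0 < sigma_min < 1.0)
```

As the abstract notes, margins derived this way are guaranteed but conservative, since the bound treats the worst-case direction of perturbation at every frequency.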

  3. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. A chemometric method to identify enzymatic reactions leading to the transition from glycolytic oscillations to waves

    NASA Astrophysics Data System (ADS)

    Zimányi, László; Khoroshyy, Petro; Mair, Thomas

    2010-06-01

    In the present work we demonstrate that FTIR-spectroscopy is a powerful tool for the time resolved and noninvasive measurement of multi-substrate/product interactions in complex metabolic networks as exemplified by the oscillating glycolysis in a yeast extract. Based on a spectral library constructed from the pure glycolytic intermediates, chemometric analysis of the complex spectra allowed us the identification of many of these intermediates. Singular value decomposition and multiple level wavelet decomposition were used to separate drifting substances from oscillating ones. This enabled us to identify slow and fast variables of glycolytic oscillations. Most importantly, we can attribute a qualitative change in the positive feedback regulation of the autocatalytic reaction to the transition from homogeneous oscillations to travelling waves. During the oscillatory phase the enzyme phosphofructokinase is mainly activated by its own product ADP, whereas the transition to waves is accompanied with a shift of the positive feedback from ADP to AMP. This indicates that the overall energetic state of the yeast extract determines the transition between spatially homogeneous oscillations and travelling waves.

  5. The effect of inlet swirl on the dynamics of long annular seals in centrifugal pumps

    NASA Technical Reports Server (NTRS)

    Ismail, M.; Brown, R. D.; France, D.

    1994-01-01

    This paper describes additional results from a continuing research program which aims to identify the dynamics of long annular seals in centrifugal pumps. A seal test rig designed at Heriot-Watt University and commissioned at Weir Pumps Research Laboratory in Alloa permits the identification of mass, stiffness, and damping coefficients using a least-squares technique based on the singular value decomposition method. The analysis is carried out in the time domain using a multi-frequency forcing function. The experimental method relies on the forced excitation of a flexibly supported stator by two hydraulic shakers. Running through the stator embodying two symmetrical balance drum seals is a rigid rotor supported in rolling element bearings. The only physical connection between shaft and stator is the pair of annular gaps filled with pressurized water discharged axially. The experimental coefficients obtained from the tests are compared with theoretical values.

  6. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.

  7. Antiferromagnetic order in the Hubbard model on the Penrose lattice

    NASA Astrophysics Data System (ADS)

    Koga, Akihisa; Tsunetsugu, Hirokazu

    2017-12-01

    We study an antiferromagnetic order in the ground state of the half-filled Hubbard model on the Penrose lattice and investigate the effects of quasiperiodic lattice structure. In the limit of infinitesimal Coulomb repulsion U →+0 , the staggered magnetizations persist to be finite, and their values are determined by confined states, which are strictly localized with thermodynamic degeneracy. The magnetizations exhibit an exotic spatial pattern, and have the same sign in each of the cluster regions, the size of which ranges from 31 sites to infinity. With increasing U , they continuously evolve to those of the corresponding spin model in the U =∞ limit. In both limits of U , local magnetizations exhibit a fairly intricate spatial pattern that reflects the quasiperiodic structure, but the pattern differs between the two limits. We have analyzed this pattern change by a mode analysis using the singular value decomposition method for the fractal-like magnetization pattern projected into the perpendicular space.

  8. Robust image watermarking using DWT and SVD for copyright protection

    NASA Astrophysics Data System (ADS)

    Harjito, Bambang; Suryani, Esti

    2017-02-01

    The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image serves as the cover medium, and the watermark image is converted to gray scale. Both are transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2, and HL2. The watermark image is embedded into the cover medium in sub-band LL2. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image alone for copyright protection. The experimental results show that the proposed method is robust against several image-processing attacks such as Gaussian, Poisson, and salt-and-pepper noise, with average Normalized Correlation (NC) values of 0.574863, 0.889784, and 0.889782, respectively. The watermark image can still be detected and extracted.
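
The robustness figures quoted above are normalized correlation (NC) values between original and extracted watermarks. A minimal sketch of that metric, with a generic noise attack standing in for the full DWT-SVD pipeline:

```python
import numpy as np

def normalized_correlation(w, w_extracted):
    """NC between an original and an extracted watermark."""
    return float(np.sum(w * w_extracted)
                 / (np.linalg.norm(w) * np.linalg.norm(w_extracted)))

rng = np.random.default_rng(5)
watermark = rng.integers(0, 2, size=(32, 32)).astype(float)       # binary watermark
attacked = watermark + 0.3 * rng.standard_normal((32, 32))        # generic noise attack

nc = normalized_correlation(watermark, attacked)
print(0.8 < nc <= 1.0)   # NC stays high when the watermark survives the attack
```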

  9. Flight-determined stability analysis of multiple-input-multiple-output control systems

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    1992-01-01

    Singular value analysis can give conservative stability margin results. Applying structure to the uncertainty can reduce this conservatism. This paper presents flight-determined stability margins for the X-29A lateral-directional, multiloop control system. These margins are compared with the predicted unscaled singular values and scaled structured singular values. The algorithm was further evaluated with flight data by changing the roll-rate-to-aileron command-feedback gain by +/- 20 percent. Minimum eigenvalues of the return difference matrix which bound the singular values are also presented. Extracting multiloop singular values from flight data and analyzing the feedback gain variations validates this technique as a measure of robustness. This analysis can be used for near-real-time flight monitoring and safety testing.

  11. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    PubMed

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color-image watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is better suited to the human visual system. Three-level discrete wavelet transformation is then applied to the luminance component Y, generating four frequency sub-bands, on which singular value decomposition is performed. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses differential evolution with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm performs well in terms of both invisibility and robustness.

  12. Predictor-based multivariable closed-loop system identification of the EXTRAP T2R reversed field pinch external plasma response

    NASA Astrophysics Data System (ADS)

    Olofsson, K. Erik J.; Brunsell, Per R.; Rojas, Cristian R.; Drake, James R.; Hjalmarsson, Håkan

    2011-08-01

    The usage of computationally feasible overparametrized and nonregularized system identification signal processing methods is assessed for automated determination of the full reversed-field pinch external plasma response spectrum for the experiment EXTRAP T2R. No assumptions on the geometry of eigenmodes are imposed. The attempted approach consists of high-order autoregressive exogenous estimation followed by Markov block coefficient construction and Hankel matrix singular value decomposition. It is seen that the obtained 'black-box' state-space models indeed can be compared with the commonplace ideal magnetohydrodynamics (MHD) resistive thin-shell model in cylindrical geometry. It is possible to directly map the most unstable autodetected empirical system pole to the corresponding theoretical resistive shell MHD eigenmode.

  13. Physics and control of wall turbulence for drag reduction.

    PubMed

    Kim, John

    2011-04-13

    Turbulence physics responsible for high skin-friction drag in turbulent boundary layers is first reviewed. A self-sustaining process of near-wall turbulence structures is then discussed from the perspective of controlling this process for the purpose of skin-friction drag reduction. After recognizing that key parts of this self-sustaining process are linear, a linear systems approach to boundary-layer control is discussed. It is shown that singular-value decomposition analysis of the linear system allows us to examine different approaches to boundary-layer control without carrying out the expensive nonlinear simulations. Results from the linear analysis are consistent with those observed in full nonlinear simulations, thus demonstrating the validity of the linear analysis. Finally, fundamental performance limit expected of optimal control input is discussed.

  14. SAR measurements of surface displacements at Augustine Volcano, Alaska from 1992 to 2005

    USGS Publications Warehouse

    Lee, C.-W.; Lu, Z.; Kwoun, Oh-Ig

    2007-01-01

    Augustine Volcano is an active stratovolcano located southwest of Anchorage, Alaska. It has experienced seven significant explosive eruptions, in 1812, 1883, 1908, 1935, 1963, 1976, and 1986, and a minor eruption in January 2006. We measured the surface displacements of the volcano by radar interferometry and GPS before and after the 2006 eruption. ERS-1/2, RADARSAT-1, and ENVISAT SAR data were used for the study. Multiple interferograms were stacked to reduce artifacts caused by differing atmospheric conditions, and a least squares (LS) method was used to further suppress atmospheric artifacts. The singular value decomposition (SVD) method was applied to retrieve the time-sequential deformation. Satellite radar interferometry thus helps in understanding the surface displacement behavior of Augustine Volcano. © 2007 IEEE.
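    The SVD step in this kind of time-series retrieval can be sketched as follows. This is a hedged toy example of the SBAS-style inversion (each interferogram measures the sum of mean velocities over the date intervals it spans), not the Augustine processing chain; the design matrix and phases below are invented for illustration.

```python
import numpy as np

# Minimal sketch: SVD gives the minimum-norm least-squares solution of
# A v = dphi, where each row of A marks which date intervals an
# interferogram spans and v holds the mean velocity per interval.
A = np.array([[1.0, 0.0, 0.0],   # interferogram spanning interval 1
              [1.0, 1.0, 0.0],   # spanning intervals 1-2
              [0.0, 1.0, 1.0]])  # spanning intervals 2-3
dphi = np.array([2.0, 5.0, 7.0]) # unwrapped phase differences (toy units)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_inv = np.where(s > 1e-10 * s[0], 1.0 / s, 0.0)  # truncate tiny values
v = Vt.T @ (s_inv * (U.T @ dphi))                 # velocity per interval
```

    The truncation of small singular values is what keeps the solution stable when disconnected interferogram subsets make the design matrix rank deficient.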

  16. The construction and assessment of a statistical model for the prediction of protein assay data.

    PubMed

    Pittman, J; Sacks, J; Young, S Stanley

    2002-01-01

    The focus of this work is the development of a statistical model for a bioinformatics database whose distinctive structure makes model assessment an interesting and challenging problem. The key components of the statistical methodology, including a fast approximation to the singular value decomposition and the use of adaptive spline modeling and tree-based methods, are described, and preliminary results are presented. These results are shown to compare favorably to selected results achieved using comparative methods. An attempt to determine the predictive ability of the model through the use of cross-validation experiments is discussed. In conclusion, a synopsis of the results of these experiments and their implications for the analysis of bioinformatics databases in general is presented.

  17. The minimal residual QR-factorization algorithm for reliably solving subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.

  18. Regularization strategies for hyperplane classifiers: application to cancer classification with gene expression data.

    PubMed

    Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl

    2007-02-01

    Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
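    The filter factor representation referred to above can be sketched concretely. This is a generic Tikhonov filter-factor example on a synthetic ill-conditioned system, assuming numpy; it illustrates the idea, not the authors' classifier or data.

```python
import numpy as np

# Minimal sketch: with the SVD A = U S V^T, the Tikhonov-regularized
# solution is x = V diag(f_i / s_i) U^T b with filter factors
# f_i = s_i^2 / (s_i^2 + lam^2), which damp the noise-dominated
# small-singular-value directions of an ill-posed system.
rng = np.random.default_rng(5)
U0, _ = np.linalg.qr(rng.standard_normal((10, 10)))
V0, _ = np.linalg.qr(rng.standard_normal((10, 10)))
sv = 10.0 ** -np.arange(10, dtype=float)      # singular values 1 .. 1e-9
A = (U0 * sv) @ V0.T                          # ill-conditioned matrix
x_true = np.ones(10)
b = A @ x_true + 1e-6 * rng.standard_normal(10)
U, s, Vt = np.linalg.svd(A)
lam = 1e-4
f = s**2 / (s**2 + lam**2)                    # Tikhonov filter factors
x_reg = Vt.T @ (f / s * (U.T @ b))            # filtered solution
x_naive = Vt.T @ ((U.T @ b) / s)              # unfiltered: noise blown up
```

    Plotting the filter factors against the singular values is exactly the kind of diagnostic the abstract describes for choosing classifier parameters.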

  19. Statistical analysis of RHIC beam position monitors performance

    NASA Astrophysics Data System (ADS)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
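    The SVD side of such a malfunction screen can be sketched as follows. This is a hedged synthetic example (invented mode, amplitudes, and threshold), not the RHIC procedure: correlated betatron motion concentrates in the leading singular vectors, so a BPM that barely participates in them is suspect.

```python
import numpy as np

# Minimal sketch: SVD of a (turns x BPMs) orbit matrix; the leading right
# singular vector weights each BPM's share of the common betatron mode.
rng = np.random.default_rng(0)
turns = np.arange(200)
mode = np.sin(2 * np.pi * 0.31 * turns)           # common betatron motion
B = np.outer(mode, rng.uniform(0.5, 1.5, 8))      # 8 healthy BPM channels
B[:, 3] = rng.normal(0.0, 1.0, 200)               # BPM 3: pure noise (faulty)
U, s, Vt = np.linalg.svd(B - B.mean(axis=0), full_matrices=False)
score = np.abs(Vt[0])            # each BPM's weight in the leading mode
bad = int(np.argmin(score))      # weakest participant -> suspect BPM
```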

  20. Simplex volume analysis for finding endmembers in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.

    2015-05-01

    Using maximal simplex volume as an optimality criterion for finding endmembers is a common approach that has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is actually calculated. It turns out that this issue is considerably more complicated and involved than one might think. This paper investigates it from two different aspects, geometric structure and eigen-analysis. The geometric approach derives the volume from the simplex structure itself, multiplying base by height. The eigen-analysis approach, on the other hand, takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is required is rank deficient. To deal with this problem, two remedies are generally considered. One is to perform dimensionality reduction on the data so that the matrix becomes full rank; the drawback is that the volume is shrunk, and the volume found for a dimensionality-reduced simplex is not the volume of the original simplex. The other is to use singular value decomposition (SVD) to find the singular values for calculating the simplex volume; the dilemma of this method is its numerical instability. This paper explores all three methods of simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volumes.
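    The SVD route to simplex volume can be sketched in a few lines. This is a generic textbook construction on toy vertices (a unit right triangle embedded in R^3), not the paper's hyperspectral implementation: the k-volume equals the product of the singular values of the edge matrix divided by k!.

```python
import math
import numpy as np

# Minimal sketch: volume of a k-simplex embedded in R^n from the singular
# values of its edge matrix, needing neither dimensionality reduction nor
# the Cayley-Menger determinant.
def simplex_volume(vertices):
    """vertices: (k+1, n) array of simplex vertices; returns the k-volume."""
    edges = vertices[1:] - vertices[0]            # k x n edge matrix
    s = np.linalg.svd(edges, compute_uv=False)    # singular values
    k = edges.shape[0]
    return float(np.prod(s[:k]) / math.factorial(k))

# A unit right triangle embedded in R^3 has area 1/2.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
area = simplex_volume(tri)
```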

  1. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.

    PubMed

    Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2018-01-01

    The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
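    The LR compression of the signal evolution can be sketched as follows. This is a hedged toy example, with synthetic exponential decays standing in for Bloch-simulated fingerprints and an arbitrary rank of 5; it shows the idea, not the paper's reconstruction.

```python
import numpy as np

# Minimal sketch: the dictionary of signal evolutions is well approximated
# in the span of a few leading left singular vectors, so one image per
# retained singular value replaces hundreds of time-frame images.
t = np.linspace(0.01, 3.0, 300)                  # 300 time points
T = np.linspace(0.3, 2.0, 50)                    # 50 relaxation constants
D = np.exp(-t[:, None] / T[None, :])             # 300 x 50 toy dictionary
U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 5                                            # retained singular values
Ur = U[:, :r]                                    # rank-r temporal basis
D_r = Ur @ (Ur.T @ D)                            # LR-compressed dictionary
rel_err = np.linalg.norm(D - D_r) / np.linalg.norm(D)
```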

  2. Low rank magnetic resonance fingerprinting.

    PubMed

    Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C

    2016-08-01

    Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of the physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to a conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
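    One iterate of such a scheme can be sketched as follows. This is a hedged toy: a quadratic data-fidelity term 0.5*||X - X_obs||^2 stands in for the actual k-space fidelity, so the unit gradient step lands on the observation directly and the loop converges immediately; it illustrates the gradient-step-plus-SVD-projection alternation, not the paper's operator.

```python
import numpy as np

# Minimal sketch: gradient step on a data-fidelity term, then projection
# onto the set of rank-r matrices via truncated SVD (hard thresholding).
rng = np.random.default_rng(1)
X_obs = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))  # rank 2
X, r = np.zeros((20, 15)), 2
for _ in range(10):
    X = X - 1.0 * (X - X_obs)                # gradient step, step size 1
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]          # projection onto rank-r set
err = np.linalg.norm(X - X_obs) / np.linalg.norm(X_obs)
```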

  3. The Production of FRW Universe and Decay to Particles in Multiverse

    NASA Astrophysics Data System (ADS)

    Ghaffary, Tooraj

    2017-09-01

    In this study, it is first shown that, as the Hubble parameter H increases, the production cross section for closed and flat Universes rises rapidly at small values of H and becomes constant at larger values of H. In the case of an open Universe, however, the production cross section encounters a singularity. Before this singularity, the cross section increases with H for small H (H < 2.5), exhibits a turn-over at moderate values (2.5 < H < 3.5), and decreases for larger H, until at a special value of H it reaches the singularity. Although the cross section cannot be defined at the singularity itself, immediately before and after this point it is equal to zero. Beyond the singularity, the cross section again increases rapidly with H. It is shown that if the Universe is produced before this singularity, it cannot reach values of the Hubble parameter beyond the singularity; conversely, if it is produced after the singularity, it cannot access values of the Hubble parameter below it. The thermal distributions of the particles inside the FRW Universes are then obtained. It is found that a large number of particles are produced near the apparent horizon, owing to the variety of their energies and probabilities. Finally, comparing the particle production cross sections for flat, closed, and open Universes, it is concluded that the cross section decreases as the value of k increases.

  4. Fully pseudospectral solution of the conformally invariant wave equation near the cylinder at spacelike infinity. III: nonspherical Schwarzschild waves and singularities at null infinity

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Hennig, Jörg

    2018-03-01

    We extend earlier numerical and analytical considerations of the conformally invariant wave equation on a Schwarzschild background from the case of spherically symmetric solutions, discussed in Frauendiener and Hennig (2017 Class. Quantum Grav. 34 045005), to the case of general, nonsymmetric solutions. A key element of our approach is the modern standard representation of spacelike infinity as a cylinder. With a decomposition into spherical harmonics, we reduce the four-dimensional wave equation to a family of two-dimensional equations. These equations can be used to study the behaviour at the cylinder, where the solutions turn out to have, in general, logarithmic singularities at infinitely many orders. We derive regularity conditions that may be imposed on the initial data, in order to avoid the first singular terms. We then demonstrate that the fully pseudospectral time evolution scheme can be applied to this problem leading to a highly accurate numerical reconstruction of the nonsymmetric solutions. We are particularly interested in the behaviour of the solutions at future null infinity, and we numerically show that the singularities spread to null infinity from the critical set, where the cylinder approaches null infinity. The observed numerical behaviour is consistent with similar logarithmic singularities found analytically on the critical set. Finally, we demonstrate that even solutions with singularities at low orders can be obtained with high accuracy by virtue of a coordinate transformation that converts solutions with logarithmic singularities into smooth solutions.

  5. An asymptotic induced numerical method for the convection-diffusion-reaction equation

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.; Sorensen, Danny C.

    1988-01-01

    A parallel algorithm for the efficient solution of a time dependent reaction convection diffusion equation with small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run time results demonstrate the viability of the method.

  6. svdPPCS: an effective singular value decomposition-based method for conserved and divergent co-expression gene module identification.

    PubMed

    Zhang, Wensheng; Edwards, Andrea; Fan, Wei; Zhu, Dongxiao; Zhang, Kun

    2010-06-22

    Comparative analysis of gene expression profiling of multiple biological categories, such as different species of organisms or different kinds of tissue, promises to enhance the fundamental understanding of the universality as well as the specialization of mechanisms and related biological themes. Grouping genes with a similar expression pattern or exhibiting co-expression together is a starting point in understanding and analyzing gene expression data. In recent literature, gene module level analysis is advocated in order to understand biological network design and system behaviors in disease and life processes; however, practical difficulties often lie in the implementation of existing methods. Using the singular value decomposition (SVD) technique, we developed a new computational tool, named svdPPCS (SVD-based Pattern Pairing and Chart Splitting), to identify conserved and divergent co-expression modules of two sets of microarray experiments. In the proposed methods, gene modules are identified by splitting the two-way chart coordinated with a pair of left singular vectors factorized from the gene expression matrices of the two biological categories. Importantly, the cutoffs are determined by a data-driven algorithm using the well-defined statistic, SVD-p. The implementation was illustrated on two time series microarray data sets generated from the samples of accessory gland (ACG) and malpighian tubule (MT) tissues of the line W118 of M. drosophila. Two conserved modules and six divergent modules, each of which has a unique characteristic profile across tissue kinds and aging processes, were identified. The number of genes contained in these modules ranged from five to a few hundred. Three to over a hundred GO terms were over-represented in individual modules with FDR < 0.1. One divergent module suggested the tissue-specific relationship between the expressions of mitochondrion-related genes and the aging process. 
This finding, together with others, may be of biological significance. The validity of the proposed SVD-based method was further verified by a simulation study, as well as the comparisons with regression analysis and cubic spline regression analysis plus PAM based clustering. svdPPCS is a novel computational tool for the comparative analysis of transcriptional profiling. It especially fits the comparison of time series data of related organisms or different tissues of the same organism under equivalent or similar experimental conditions. The general scheme can be directly extended to the comparisons of multiple data sets. It also can be applied to the integration of data sets from different platforms and of different sources.

  7. Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.

    PubMed

    Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan

    2018-01-12

    Missing values are widespread in mass spectrometry (MS) based metabolomics data. Various methods have been applied to handle them, but the choice of method can significantly affect subsequent data analyses. Typically, there are three types of missing values: missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for the different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis was used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed best for MCAR/MAR data and that QRILC was favored for left-censored MNAR data. Finally, we proposed a comprehensive strategy and developed a publicly accessible web tool for missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).
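    The SVD imputation idea can be sketched on a toy matrix. This is a generic iterative low-rank imputation (mean-initialize, reconstruct at low rank, restore observed entries, repeat), assumed here for illustration; it is not the specific SVD variant benchmarked in the paper.

```python
import numpy as np

# Minimal sketch of SVD-based imputation on a rank-1 toy matrix with one
# missing entry (true value 30.0 at position [2, 1]).
X = np.outer(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.0, 10.0, 100.0]))
X_miss = X.copy()
X_miss[2, 1] = np.nan                      # one missing intensity
obs = np.isfinite(X_miss)
Z = np.where(obs, X_miss, np.nanmean(X_miss, axis=0))  # mean-initialize
for _ in range(100):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    low = (U[:, :1] * s[:1]) @ Vt[:1]      # rank-1 reconstruction
    Z = np.where(obs, X_miss, low)         # keep observed entries fixed
imputed = float(Z[2, 1])
```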

  8. Data-adaptive harmonic spectra and multilayer Stuart-Landau models

    NASA Astrophysics Data System (ADS)

    Chekroun, Mickaël D.; Kondrashov, Dmitri

    2017-09-01

    Harmonic decompositions of multivariate time series are considered for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes, for their part, exhibit a data-adaptive character manifested in their phase, which in turn allows us to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, only coupled at different frequencies by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which can be efficiently modeled—provided the decay of temporal correlations is sufficiently well-resolved—within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by a space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.

  9. Optimization methodology for the global 10 Hz orbit feedback in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chuyu; Hulsart, R.; Mernick, K.

    To combat beam oscillations induced by triplet vibrations at the Relativistic Heavy Ion Collider (RHIC), a global orbit feedback system was developed and applied at injection and top energy in 2011, and during beam acceleration in 2012. Singular Value Decomposition (SVD) was employed to determine the strengths and currents of the applied corrections. The feedback algorithm was optimized for different magnetic configurations (lattices) at fixed beam energies and during beam acceleration. While the orbit feedback performed well since its inception, corrector current transients and feedback-induced beam oscillations were observed during the polarized proton program in 2015. In this paper, we present the feedback algorithm, the optimization of the algorithm for various lattices and the solution adopted to mitigate the observed current transients during beam acceleration.
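    The SVD correction step common to such orbit feedback systems can be sketched as follows. This is a hedged toy with a random response matrix, not the RHIC lattice: corrector strengths are the minimum-norm least-squares solution, with small singular values excluded for stability.

```python
import numpy as np

# Minimal sketch: SVD pseudoinverse of an orbit response matrix R
# (BPM readings per unit corrector kick) gives corrector settings that
# cancel a measured orbit distortion x.
rng = np.random.default_rng(7)
R = rng.standard_normal((12, 4))            # 12 BPMs x 4 correctors (toy)
theta_true = np.array([0.5, -0.2, 0.1, 0.3])
x = R @ theta_true                          # measured orbit distortion
U, s, Vt = np.linalg.svd(R, full_matrices=False)
keep = s > 1e-6 * s[0]                      # cut small singular values
dtheta = -Vt.T @ np.where(keep, (U.T @ x) / s, 0.0)
residual = np.linalg.norm(x + R @ dtheta)   # corrected orbit -> ~0
```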

  10. MCTDH on-the-fly: Efficient grid-based quantum dynamics without pre-computed potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Richings, Gareth W.; Habershon, Scott

    2018-04-01

    We present significant algorithmic improvements to a recently proposed direct quantum dynamics method, based upon combining well established grid-based quantum dynamics approaches and expansions of the potential energy operator in terms of a weighted sum of Gaussian functions. Specifically, using a sum of low-dimensional Gaussian functions to represent the potential energy surface (PES), combined with a secondary fitting of the PES using singular value decomposition, we show how standard grid-based quantum dynamics methods can be dramatically accelerated without loss of accuracy. This is demonstrated by on-the-fly simulations (using both standard grid-based methods and multi-configuration time-dependent Hartree) of both proton transfer on the electronic ground state of salicylaldimine and the non-adiabatic dynamics of pyrazine.

  11. Note: Sound recovery from video using SVD-based information extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz is used to film the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and then assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and the image projections onto a specific OIB can be recovered as intelligible acoustical signals. Standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online, within 1 min, from a piece of paper that is stimulated by sound waves.
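    The extraction step can be sketched with synthetic data. This is a hedged toy in which every "pixel" carries the same 256 Hz oscillation plus noise, standing in for the flattened sub-images; it shows the frames-to-matrix SVD projection, not the authors' pipeline.

```python
import numpy as np

# Minimal sketch: flatten frames into the columns of a matrix; projecting
# them onto the leading orthonormal image basis (left singular vector)
# yields a 1-D time signal whose spectrum reveals the driving tone.
fps, f0, n = 2000, 256.0, 1000
t = np.arange(n) / fps
frames = np.outer(np.ones(64), np.sin(2 * np.pi * f0 * t))  # 64 px x n frames
frames += 0.01 * np.random.default_rng(2).standard_normal(frames.shape)
U, s, Vt = np.linalg.svd(frames - frames.mean(axis=1, keepdims=True),
                         full_matrices=False)
signal = s[0] * Vt[0]                        # projection onto leading OIB
spec = np.abs(np.fft.rfft(signal))
f_peak = np.fft.rfftfreq(n, 1.0 / fps)[int(np.argmax(spec))]  # ~256 Hz
```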

  12. Video quality assessment using M-SVD

    NASA Astrophysics Data System (ADS)

    Tao, Peining; Eskicioglu, Ahmet M.

    2007-01-01

    Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error across all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
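    A singular-value-based block distortion measure in this spirit can be sketched as follows. This is a hedged generic construction (block size, distance, and data are illustrative assumptions), not the exact M-SVD definition: per block, compare the singular-value spectra of reference and distorted versions.

```python
import numpy as np

# Minimal sketch: per-block Euclidean distance between the singular-value
# spectra of a reference and a distorted image; the resulting map is a
# graphical distortion measure.
def block_svd_distance(ref, dist, b=8):
    rows, cols = ref.shape
    D = np.zeros((rows // b, cols // b))
    for i in range(rows // b):
        for j in range(cols // b):
            sr = np.linalg.svd(ref[i*b:(i+1)*b, j*b:(j+1)*b], compute_uv=False)
            sd = np.linalg.svd(dist[i*b:(i+1)*b, j*b:(j+1)*b], compute_uv=False)
            D[i, j] = np.sqrt(np.sum((sr - sd) ** 2))
    return D

rng = np.random.default_rng(3)
ref = rng.uniform(0, 255, (16, 16))
D_same = block_svd_distance(ref, ref)          # identical -> all zeros
D_dist = block_svd_distance(ref, ref + 5.0)    # distorted -> positive
```

    A scalar quality score can then be derived from the statistics of this map across frames.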

  13. Investigations on the hierarchy of reference frames in geodesy and geodynamics

    NASA Technical Reports Server (NTRS)

    Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.

    1979-01-01

    Problems related to reference directions were investigated. Space- and time-variant angular parameters are illustrated in hierarchic structures or towers. Using least squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed, along with the notion of length and scale degrees of freedom. According to the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy is constructed from the set of three base vectors (gravity, earth rotation, and the ecliptic normal vector). Space and time variations are given with respect to a polar and singular value decomposition, or in terms of changes in translation, rotation, and deformation (shear, dilatation, or angular and scale distortions).
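    The polar decomposition mentioned here follows directly from the SVD. This is a standard textbook construction on a toy 2-D deformation (rotation times pure strain), given only to make the rotation/deformation split concrete.

```python
import numpy as np

# Minimal sketch: polar decomposition F = R P of a deformation gradient,
# with rotation R = U V^T and symmetric stretch P = V S V^T obtained from
# the SVD F = U S V^T.
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
stretch = np.diag([1.2, 0.8])                    # pure strain
F = rot @ stretch                                # toy deformation gradient
U, s, Vt = np.linalg.svd(F)
R = U @ Vt                                       # rotation part
P = Vt.T @ np.diag(s) @ Vt                       # symmetric stretch part
```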

  15. Frequency-selective quantitation of short-echo time 1H magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Poullet, Jean-Baptiste; Sima, Diana M.; Van Huffel, Sabine; Van Hecke, Paul

    2007-06-01

    Accurate and efficient filtering techniques are required to suppress large nuisance components present in short-echo time magnetic resonance (MR) spectra. This paper discusses two powerful filtering techniques used in long-echo time MR spectral quantitation, the maximum-phase FIR filter (MP-FIR) and the Hankel-Lanczos Singular Value Decomposition with Partial ReOrthogonalization (HLSVD-PRO), and shows that they can be applied to their more complex short-echo time spectral counterparts. Both filters are validated and compared through extensive simulations. Their properties are discussed. In particular, the capability of MP-FIR for dealing with macromolecular components is emphasized. Although this property does not make a large difference for long-echo time MR spectra, it can be important when quantifying short-echo time spectra.
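    The Hankel-SVD machinery behind HLSVD-style methods can be sketched on a toy FID. This is a hedged one-component, noiseless example of the signal-subspace shift-invariance trick; the actual HLSVD-PRO algorithm uses a Lanczos scheme with partial reorthogonalization that is not reproduced here.

```python
import numpy as np

# Minimal sketch: the signal subspace of a Hankel data matrix has rank
# equal to the number of damped sinusoids; the shift-invariance of its
# left singular vectors yields their complex poles (frequency + damping).
n = np.arange(128)
fid = np.exp((-0.01 + 2j * np.pi * 0.1) * n)      # one damped sinusoid
H = np.array([[fid[i + j] for j in range(64)] for i in range(64)])
U, s, Vt = np.linalg.svd(H)
r = 1                                              # model order
Ur = U[:, :r]                                      # signal subspace
Z, *_ = np.linalg.lstsq(Ur[:-1], Ur[1:], rcond=None)
pole = np.linalg.eigvals(Z)[0]                     # ~ exp(-0.01 + 2j*pi*0.1)
freq = float(np.angle(pole) / (2 * np.pi))         # normalized frequency ~0.1
```

    Filtering then amounts to keeping or discarding the components whose estimated frequencies fall inside a chosen band.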

  16. A preprocessing strategy for helioseismic inversions

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.; Thompson, M. J.

    1993-05-01

    Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
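    The preprocessing idea can be sketched with synthetic kernels. This is a hedged toy (smooth Gaussian kernels, an arbitrary 1e-6 rank cutoff), not the helioseismic kernels: the kernel matrix of a large mode set has low numerical rank, so an SVD compresses it before the expensive OLA inversion.

```python
import numpy as np

# Minimal sketch: SVD-transform a 500-kernel set onto its few leading
# singular directions with negligible loss of information.
x = np.linspace(0.0, 1.0, 100)                    # radial grid
centers = np.linspace(0.0, 1.0, 500)              # one kernel per "mode"
K = np.exp(-((x[None, :] - centers[:, None]) ** 2) / (2 * 0.05 ** 2))
U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = int(np.sum(s > 1e-6 * s[0]))                  # effective rank << 500
K_red = s[:r, None] * Vt[:r]                      # r transformed kernels
rel_err = np.linalg.norm(K - U[:, :r] @ K_red) / np.linalg.norm(K)
```

    The observed mode data are projected onto the same r directions, so the subsequent inversion works with r rows instead of 500.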

  17. Detecting chaos, determining the dimensions of tori and predicting slow diffusion in Fermi-Pasta-Ulam lattices by the Generalized Alignment Index method

    NASA Astrophysics Data System (ADS)

    Skokos, C.; Bountis, T.; Antonopoulos, C.

    2008-12-01

    The recently introduced GALI method is used for rapidly detecting chaos, determining the dimensionality of regular motion and predicting slow diffusion in multi-dimensional Hamiltonian systems. We propose an efficient computation of the GALI_k indices, which represent volume elements of k randomly chosen deviation vectors from a given orbit, based on the Singular Value Decomposition (SVD) algorithm. We obtain theoretically and verify numerically asymptotic estimates of the GALIs' long-time behavior in the case of regular orbits lying on low-dimensional tori. The GALI_k indices are applied to rapidly detect chaotic oscillations, identify low-dimensional tori of Fermi-Pasta-Ulam (FPU) lattices at low energies and predict weak diffusion away from quasiperiodic motion, long before it is actually observed in the oscillations.
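    The SVD computation of GALI_k can be sketched in a few lines. This is a hedged static example with hand-picked vectors; in practice the deviation vectors evolve under the variational equations of the orbit.

```python
import numpy as np

# Minimal sketch: GALI_k is the volume of the parallelepiped spanned by k
# normalized deviation vectors, computed stably as the product of the
# singular values of the matrix whose rows are those vectors.
def gali(deviation_vectors):
    V = np.array([v / np.linalg.norm(v) for v in deviation_vectors])
    return float(np.prod(np.linalg.svd(V, compute_uv=False)))

e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
g_orth = gali([e1, e2])            # orthogonal vectors -> GALI_2 = 1
g_aligned = gali([e1, 1e-6 * e1])  # aligned vectors -> GALI_2 = 0 (chaos)
```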

  18. Separability of spatiotemporal spectra of image sequences. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Eckert, Michael P.; Buchsbaum, Gershon; Watson, Andrew B.

    1992-01-01

    The spatiotemporal power spectrum of 14 image sequences was calculated in order to determine the degree to which the spectra are separable in space and time, and to assess the validity of the commonly used exponential correlation model found in the literature. Each spectrum was expanded by a singular value decomposition into a sum of separable terms, and an index of spatiotemporal separability was defined as the fraction of the signal energy that can be represented by the first (largest) separable term. All spectra were found to be highly separable, with an index of separability above 0.98. The power spectra of the sequences were well fit by a separable model corresponding to a product of exponential autocorrelation functions separable in space and time.
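    The separability index defined above can be sketched directly. This is a toy example on an exactly separable power spectrum (an outer product of 1/f^2 profiles, chosen for illustration), for which the index is 1 by construction.

```python
import numpy as np

# Minimal sketch: expand a 2-D power spectrum into separable terms via SVD;
# the separability index is the energy fraction of the first term,
# s_1^2 / sum(s_i^2).
fs = np.linspace(0.1, 1.0, 32)               # spatial frequency axis
ft = np.linspace(0.1, 1.0, 16)               # temporal frequency axis
P_sep = np.outer(1.0 / fs**2, 1.0 / ft**2)   # separable power spectrum
s = np.linalg.svd(P_sep, compute_uv=False)
index = float(s[0] ** 2 / np.sum(s ** 2))    # equals 1 for separable P
```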

  19. Multidisciplinary Research Program in Atmospheric Science. [remote sensing

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1982-01-01

    A theoretical analysis of the vertical resolving power of the High resolution Infrared Radiation Sounder (HIRS) and the Advanced Meteorological Temperature Sounder (AMTS) is carried out. The infrared transmittance weighting functions and associated radiative transfer kernels are analyzed through singular value decomposition. The AMTS was found to contain several more pieces of independent information than HIRS when the transmittances were considered, but the two instruments appeared much more similar when the temperature-sensitive radiative transfer kernels were analyzed. The HIRS and AMTS instruments were also subjected to a thorough analysis. It was found that the two instruments should have very similar vertical resolving power below 500 mb, but that AMTS should have superior resolving power above 200 mb. In the layer from 200 to 500 mb, the AMTS showed a badly degraded spread function.

  20. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
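
    For a simple (non-repeated) singular value, the analytical gradient used above has the closed form d(sigma_i)/dp = u_i^T (dA/dp) v_i, where u_i and v_i are the corresponding left and right singular vectors. A minimal numerical cross-check on a hypothetical 2x2 parameter-dependent matrix (not the drone flight control system of the paper):

```python
import numpy as np

# Hypothetical matrix depending on one design variable p.
def A(p):
    return np.array([[1.0 + p, 2.0],
                     [0.5, 3.0 - 2.0 * p]])

p0, eps = 0.1, 1e-6
U, s, Vt = np.linalg.svd(A(p0))
dA = (A(p0 + eps) - A(p0)) / eps   # dA/dp, here essentially [[1,0],[0,-2]]

# Analytical gradients: d(sigma_i)/dp = u_i^T (dA/dp) v_i.
analytic = np.array([U[:, i] @ dA @ Vt[i] for i in range(2)])
# Finite-difference gradients of the singular values themselves.
numeric = (np.linalg.svd(A(p0 + eps), compute_uv=False) - s) / eps
print(np.allclose(analytic, numeric, atol=1e-4))  # True
```

    In the design loop described above, the same expression evaluated at the minimum singular value of the return difference matrix supplies the gradient that the optimizer follows to increase robustness.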

  1. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  2. Understanding Singular Vectors

    ERIC Educational Resources Information Center

    James, David; Botteron, Cynthia

    2013-01-01

    matrix yields a surprisingly simple, heuristical approximation to its singular vectors. There are correspondingly good approximations to the singular values. Such rules of thumb provide an intuitive interpretation of the singular vectors that helps explain why the SVD is so…

  3. Correlation of spacecraft thermal mathematical models to reference data

    NASA Astrophysics Data System (ADS)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and test data to be reached, in order to reduce the uncertainty of the mathematical model. Quite often this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with respect to the reference data. In this paper, a simple method is presented that is suitable for solving the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. Besides, in simple cases this method also allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered: one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
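
    The core iteration described above, a Jacobian plus Moore-Penrose pseudo-inverse update of the model parameters toward the reference temperatures, can be sketched with a toy model. The `model` function below is a hypothetical stand-in, not the PHI electronic-box TMM:

```python
import numpy as np

# Hypothetical "thermal model": node temperatures as a function of the
# solve-for parameters p (a real TMM would be nonlinear).
def model(p):
    return np.array([p[0] + 2.0 * p[1], 3.0 * p[0] - p[1], p[0]])

def correlate(p, T_ref, eps=1e-6, iters=10):
    # Gauss-Newton-style correlation: residual vs. reference data,
    # finite-difference Jacobian, pseudo-inverse parameter update.
    p = np.asarray(p, float)
    for _ in range(iters):
        r = T_ref - model(p)
        J = np.column_stack([(model(p + eps * e) - model(p)) / eps
                             for e in np.eye(len(p))])
        p = p + np.linalg.pinv(J) @ r
    return p

p_true = np.array([1.5, -0.5])
p_fit = correlate([0.0, 0.0], model(p_true))
print(p_fit)  # close to [1.5, -0.5]
```

    The pseudo-inverse handles the over-determined case (more test temperatures than parameters) in the least-squares sense; when the Jacobian is singular, its SVD reveals which parameter combinations the test data cannot constrain, which is the model-reduction use mentioned in the abstract.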

  4. Geometry and the onset of rigidity in a disordered network

    NASA Astrophysics Data System (ADS)

    Vermeulen, Mathijs F. J.; Bose, Anwesha; Storm, Cornelis; Ellenbroek, Wouter G.

    2017-11-01

    Disordered spring networks that are undercoordinated may abruptly rigidify when sufficient strain is applied. Since the deformation in response to applied strain does not change the generic quantifiers of network architecture, the number of nodes and the number of bonds between them, this rigidity transition must have a geometric origin. Naive, degree-of-freedom-based mechanical analyses such as the Maxwell-Calladine count or the pebble game algorithm overlook such geometric rigidity transitions and offer no means of predicting or characterizing them. We apply tools that were developed for the topological analysis of zero modes and states of self-stress on regular lattices to two-dimensional random spring networks and demonstrate that the onset of rigidity, at a finite simple shear strain γ★, coincides with the appearance of a single state of self-stress, accompanied by a single floppy mode. The process conserves the topologically invariant difference between the number of zero modes and the number of states of self-stress but imparts a finite shear modulus to the spring network. Beyond the critical shear, the network acquires a highly anisotropic elastic modulus, resisting further deformation most strongly in the direction of the rigidifying shear. We confirm previously reported critical scaling of the corresponding differential shear modulus. In the subcritical regime, a singular value decomposition of the network's compatibility matrix foreshadows the onset of rigidity by way of a continuously vanishing singular value corresponding to the nascent state of self-stress.
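
    The mode counting above can be sketched numerically: zero modes span the null space of the compatibility matrix C (bond extensions e = C u for node displacements u), states of self-stress span the null space of C^T, and both dimensions are read off the singular values. A minimal sketch for a single 2-D bond, not the disordered networks of the paper:

```python
import numpy as np

def count_modes(C, tol=1e-10):
    # m bonds, n degrees of freedom; rank from the singular values.
    # A continuously vanishing smallest singular value signals a
    # nascent state of self-stress, as in the subcritical regime.
    m, n = C.shape
    s = np.linalg.svd(C, compute_uv=False)
    rank = int(np.sum(s > tol))
    return n - rank, m - rank  # (zero modes, states of self-stress)

# One bond along x between two nodes in 2-D (dofs: x1, y1, x2, y2).
C = np.array([[1.0, 0.0, -1.0, 0.0]])
print(count_modes(C))  # (3, 0)
```

    Note that the difference (zero modes) - (states of self-stress) = n - m is fixed by the matrix shape alone, which is the topological invariant conserved through the rigidity transition.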

  5. Multivalued classical mechanics arising from singularity loops in complex time

    NASA Astrophysics Data System (ADS)

    Koch, Werner; Tannor, David J.

    2018-02-01

    Complex-valued classical trajectories in complex time encounter singular times at which the momentum diverges. A closed time contour around such a singular time may result in final values for q and p that differ from their initial values. In this work, we develop a calculus for determining the exponent and prefactor of the asymptotic time dependence of p from the singularities of the potential as the singularity time is approached. We identify this exponent with the number of singularity loops giving distinct solutions to Hamilton's equations of motion. The theory is illustrated for the Eckart, Coulomb, Morse, and quartic potentials. Collectively, these potentials illustrate a wide variety of situations: poles and essential singularities at finite and infinite coordinate values. We demonstrate quantitative agreement between analytical and numerical exponents and prefactors, as well as the connection between the exponent and the time circuit count. This work provides the theoretical underpinnings for the choice of time contours described in the studies of Doll et al. [J. Chem. Phys. 58(4), 1343-1351 (1973)] and Petersen and Kay [J. Chem. Phys. 141(5), 054114 (2014)]. It also has implications for wavepacket reconstruction from complex classical trajectories when multiple branches of trajectories are involved.

  6. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of the amount of water that will enter reservoirs in the following month is of vital importance, especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which has then been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
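
    The SSA step mentioned above (trend/oscillation/noise separation by SVD) follows a standard recipe: embed the series in a Hankel trajectory matrix, SVD it, and reconstruct one component per singular triplet by anti-diagonal averaging. A basic sketch on a synthetic series, not the authors' implementation or the Kızılırmak data (window length L is illustrative):

```python
import numpy as np

def ssa_components(x, L):
    # Trajectory (Hankel) matrix: column j is x[j : j+L].
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])   # rank-1 term of the SVD
        # Anti-diagonal averaging maps the matrix back to a series.
        comps.append(np.array([np.mean(Xk[::-1].diagonal(i - L + 1))
                               for i in range(N)]))
    return np.array(comps)

t = np.arange(200)
x = 0.05 * t + np.sin(2 * np.pi * t / 20)   # trend + oscillation
comps = ssa_components(x, L=40)
print(np.allclose(comps.sum(axis=0), x))    # True: exact decomposition
```

    Grouping the leading components recovers the trend and oscillatory parts; the trailing components carry the noise. The resulting component series can then be stacked into the SVR input matrix.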

  7. The effect of receiver coil orientations on the imaging performance of magnetic induction tomography

    NASA Astrophysics Data System (ADS)

    Gürsoy, D.; Scharfetter, H.

    2009-10-01

    Magnetic induction tomography is an imaging modality which aims to reconstruct the conductivity distribution of the human body. It uses magnetic induction to excite the body and an array of sensor coils to detect the perturbations in the magnetic field. Up to now, much effort has been expended with the aim of finding an efficient coil configuration to extend the dynamic range of the measured signal. However, the merits of different sensor orientations on the imaging performance have not been studied in great detail so far. Therefore, the aim of the study is to fill the void of a systematic investigation of coil orientations on the reconstruction quality of the designs. To this end, a number of alternative receiver array designs with different coil orientations were suggested and the evaluations of the designs were performed based on the singular value decomposition. A generalized class of quality measures, the subclasses of which are linked to both the spatial resolution and uncertainty measures, was used to assess the performance on the radial and axial axes of a cylindrical phantom. The detectability of local conductivity perturbations in the phantom was explored using the reconstructed images. It is possible to draw the conclusion that the proper choice of the coil orientations significantly influences the number of usable singular vectors and accordingly the stability of image reconstruction, although the effect of increased stability on the quality of the reconstructed images was not of paramount importance due to the reduced independent information content of the associated singular vectors.
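
    The conclusion above, that coil orientation changes the number of usable singular vectors, can be sketched with a simple count: singular values of the sensitivity matrix that stay above the relative noise level bound how many independent image components the design can resolve. The sensitivity matrices below are hypothetical random stand-ins, not the cylindrical-phantom model of the paper:

```python
import numpy as np

def usable_singular_values(S, noise_level):
    # Count singular values above the noise floor, relative to the
    # largest one; only these contribute stably to reconstruction.
    s = np.linalg.svd(S, compute_uv=False)
    return int(np.sum(s / s[0] > noise_level))

rng = np.random.default_rng(0)
# Hypothetical designs: well-conditioned vs. rapidly decaying sensitivity.
good = rng.normal(size=(64, 32))
bad = good * np.exp(-np.arange(32) / 3.0)
print(usable_singular_values(good, 1e-3),
      usable_singular_values(bad, 1e-3))
```

    A design whose sensitivity decays quickly loses singular values below the noise floor, so fewer singular vectors survive regularization and image reconstruction becomes less stable, matching the comparison made in the abstract.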

  8. Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization

    PubMed Central

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

    When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation, namely using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA to each single modality before performing SDA, while in KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Experimental results show that our approaches outperform several representative multimodal recognition methods, and that KSDA-GSVD achieves the best recognition performance. PMID:22778600

  9. Palmprint and face multi-modal biometric recognition based on SDA-GSVD and its kernelization.

    PubMed

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

    When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation, namely using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA to each single modality before performing SDA, while in KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Experimental results show that our approaches outperform several representative multimodal recognition methods, and that KSDA-GSVD achieves the best recognition performance.

  10. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor based methods require large memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.
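
    The Higher-Order SVD named above generalizes the matrix SVD to tensors: for each mode, unfold the tensor into a matrix and take the left singular vectors, then project the tensor onto all factor matrices to obtain the core. A minimal numpy sketch on a random tensor (not the Extended Structural Tensor pipeline of the paper):

```python
import numpy as np

def hosvd(T):
    # Mode-n factor matrices from the SVD of each mode-n unfolding.
    factors = []
    for n in range(T.ndim):
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(Tn, full_matrices=False)
        factors.append(U)
    # Core tensor: project T onto every factor, S = T x_n Un^T.
    core = T
    for n, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors

T = np.random.default_rng(0).normal(size=(4, 5, 6))
core, factors = hosvd(T)
# Reconstruction: T = core x_1 U1 x_2 U2 x_3 U3.
R = core
for n, U in enumerate(factors):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, n, 0), axes=1), 0, n)
print(np.allclose(R, T))  # True
```

    Truncating each factor matrix to its leading columns yields the low-rank tensor subspaces onto which test patterns are projected for recognition.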

  11. Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2013-01-01

    The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
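
    The reduced-order-model step above, extracting POD modes from a matrix of snapshot pressure coefficients, can be sketched with plain SVD. Here numpy's deterministic SVD stands in for the stochastic SVD algorithm of the paper, and the snapshot data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snaps, r = 500, 60, 3
# Synthetic snapshot matrix of exact rank 3: columns are instantaneous
# "pressure coefficient" fields, here spatial modes times time histories.
snapshots = rng.normal(size=(n_points, r)) @ rng.normal(size=(r, n_snaps))

# POD: left singular vectors are the spatial modes, the singular values
# rank them by captured energy, and S @ Vt gives the time coefficients.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = int(np.sum(s > s[0] * 1e-10))   # numerical rank = retained modes
modes, coeffs = U[:, :k], np.diag(s[:k]) @ Vt[:k]
print(k)                                        # 3
print(np.allclose(modes @ coeffs, snapshots))   # True
```

    In the gust model, the retained modes stay fixed and only the time coefficients are propagated, which is what makes the convolution with a new gust profile cheap compared with full CFD.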

  12. The comparison between SVD-DCT and SVD-DWT digital image watermarking

    NASA Astrophysics Data System (ADS)

    Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas

    2018-03-01

    With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a form easily accessed by everyone. However, a problem appears when someone else claims that a creation is their property or modifies some part of it. This makes copyright protection necessary; one example is watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables total invisibility of the watermark when it is inserted in a carrier image. The carrier image will not undergo any decrease in quality, and the inserted image will not be affected by attack. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off occurs between the invisibility and the robustness of the image watermarking. In the embedding process, the image watermarking has good quality for scaling factors < 0.1. The quality of the image watermarking at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur attack, rescaling, and JPEG compression, while embedding in high frequencies is robust to Gaussian noise.
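
    The SVD embedding common to both schemes perturbs the carrier's singular values with the watermark, scaled by the factor alpha that sets the invisibility/robustness trade-off. The sketch below works directly in the pixel domain for brevity; the SVD-DWT and SVD-DCT variants of the paper apply the same step inside a wavelet or DCT subband instead:

```python
import numpy as np

def embed(carrier, watermark, alpha=0.05):
    # Perturb the carrier's singular values with the watermark, then
    # re-synthesize with the carrier's own singular vectors.
    U, s, Vt = np.linalg.svd(carrier.astype(float))
    Uw, sw, Vtw = np.linalg.svd(np.diag(s) + alpha * watermark)
    marked = U @ np.diag(sw) @ Vt
    return marked, (Uw, Vtw, s)   # keys needed for extraction

def extract(marked, keys, alpha=0.05):
    Uw, Vtw, s = keys
    sw = np.linalg.svd(marked, compute_uv=False)
    D = Uw @ np.diag(sw) @ Vtw    # ~ diag(s) + alpha * watermark
    return (D - np.diag(s)) / alpha

rng = np.random.default_rng(0)
carrier = rng.uniform(0, 255, size=(8, 8))
wm = rng.uniform(0, 1, size=(8, 8))
marked, keys = embed(carrier, wm)
print(np.allclose(extract(marked, keys), wm, atol=1e-8))  # True
```

    Because attacks such as blurring or compression mainly disturb the small singular values, the watermark carried by the large ones tends to survive, which is the robustness property the abstract reports.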

  13. Developing Chemistry and Kinetic Modeling Tools for Low-Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Beckwith, Kris; Davidson, Bradley; Kruger, Scott; Pankin, Alexei; Roark, Christine; Stoltz, Peter

    2015-09-01

    We discuss the use of proper orthogonal decomposition (POD) methods in VSim, an FDTD plasma simulation code capable of both PIC/MCC and fluid modeling. POD methods efficiently generate smooth representations of noisy self-consistent or test-particle PIC data, and are thus advantageous in computing macroscopic fluid quantities from large PIC datasets (e.g. for particle-based closure computations) and in constructing optimal visual representations of the underlying physics. They may also confer performance advantages for massively parallel simulations, due to the significant reduction in dataset sizes conferred by truncated singular-value decompositions of the PIC data. We also demonstrate how complex LTP chemistry scenarios can be modeled in VSim via an interface with MUNCHKIN, a developing standalone python/C++/SQL code that identifies reaction paths for given input species, solves 1D rate equations for the time-dependent chemical evolution of the system, and generates corresponding VSim input blocks with appropriate cross-sections/reaction rates. MUNCHKIN also computes reaction rates from user-specified distribution functions, and conducts principal path analyses to reduce the number of simulated chemical reactions. Supported by U.S. Department of Energy SBIR program, Award DE-SC0009501.

  14. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering

    PubMed Central

    2012-01-01

    Background Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Results Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or no human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean clustering, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of the resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from the premotor cortex of macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is competitive with respect to other robust spike sorting algorithms. Conclusions This new software provides neuroscience laboratories with a new tool for fast and robust online classification of single neuron activity. This feature could become crucial in situations when online spike detection from multiple electrodes is paramount, such as in human clinical recordings or in brain-computer interfaces. PMID:22871125
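
    The two core ingredients named above, SVD pre-processing of spike shapes followed by fuzzy C-means clustering, can be sketched on synthetic waveforms. This is an illustrative toy, not the FSPS implementation; the waveform shapes, noise level and cluster count are assumptions:

```python
import numpy as np

def svd_features(waveforms, k=2):
    # Project spike waveforms (rows) onto their top-k right singular
    # vectors: the SVD pre-processing / dimensionality-reduction step.
    X = waveforms - waveforms.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    # Minimal fuzzy C-means: soft memberships U, weighted centers.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U**m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :])**(2 / (m - 1)),
                         axis=2)
    return centers, U

# Two synthetic spike classes: different amplitudes of the same shape.
rng = np.random.default_rng(1)
shape = np.exp(-0.5 * ((np.arange(32) - 10) / 2.0)**2)
spikes = np.vstack([a * shape + 0.05 * rng.normal(size=32)
                    for a in ([1.0] * 50 + [2.0] * 50)])
F = svd_features(spikes)
centers, U = fuzzy_cmeans(F, c=2)
labels = U.argmax(axis=1)
print((labels[:50] == labels[0]).all() and (labels[50:] == labels[50]).all())
```

    The soft memberships, unlike hard k-means labels, also provide the per-spike confidence that online classification needs to flag ambiguous or overlapping spikes.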

  15. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering.

    PubMed

    Oliynyk, Andriy; Bonifazzi, Claudio; Montani, Fernando; Fadiga, Luciano

    2012-08-08

    Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or no human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean clustering, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of the resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from the premotor cortex of macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is competitive with respect to other robust spike sorting algorithms. This new software provides neuroscience laboratories with a new tool for fast and robust online classification of single neuron activity. This feature could become crucial in situations when online spike detection from multiple electrodes is paramount, such as in human clinical recordings or in brain-computer interfaces.

  16. Integrative omics analysis. A study based on Plasmodium falciparum mRNA and protein data.

    PubMed

    Tomescu, Oana A; Mattanovich, Diethard; Thallinger, Gerhard G

    2014-01-01

    Technological improvements have shifted the focus from data generation to data analysis. The availability of large amounts of data from transcriptomics, proteomics and metabolomics experiments raises new questions concerning suitable integrative analysis methods. We compare three integrative analysis techniques (co-inertia analysis, generalized singular value decomposition and integrative biclustering) by applying them to gene and protein abundance data from the six life cycle stages of Plasmodium falciparum. Co-inertia analysis is an analysis method used to visualize and explore gene and protein data. The generalized singular value decomposition has shown its potential in the analysis of two transcriptome data sets. Integrative biclustering applies biclustering to gene and protein data. Using CIA, we visualize the six life cycle stages of Plasmodium falciparum, as well as GO terms, in a 2D plane and interpret the spatial configuration. With GSVD, we decompose the transcriptomic and proteomic data sets into matrices with biologically meaningful interpretations and explore the processes captured by the data sets. IBC identifies groups of genes, proteins, GO terms and life cycle stages of Plasmodium falciparum. We show method-specific results as well as a network view of the life cycle stages based on the results common to all three methods. Additionally, by combining the results of the three methods, we create a three-fold validated network of life cycle stage specific GO terms: sporozoites are associated with transcription and transport; merozoites with entry into the host cell as well as biosynthetic and metabolic processes; rings with oxidation-reduction processes; trophozoites with glycolysis and energy production; schizonts with antigenic variation and immune response; gametocytes with DNA packaging and mitochondrial transport. Furthermore, the network connectivity underlines the separation of the intraerythrocytic cycle from the gametocyte and sporozoite stages. Using integrative analysis techniques, we can integrate knowledge from different levels and obtain a wider view of the system under study. The overlap between method-specific and common results is considerable, even if the basic mathematical assumptions are very different. The three-fold validated network of life cycle stage characteristics of Plasmodium falciparum could identify a large number of the known associations from the literature in only one study.

  17. Genomic signal processing: from matrix algebra to genetic networks.

    PubMed

    Alter, Orly

    2007-01-01

    DNA microarrays make it possible, for the first time, to record the complete genomic signals that guide the progression of cellular processes. Future discovery in biology and medicine will come from the mathematical modeling of these data, which hold the key to fundamental understanding of life on the molecular level, as well as answers to questions regarding diagnosis, treatment, and drug development. This chapter reviews the first data-driven models that were created from these genome-scale data, through adaptations and generalizations of mathematical frameworks from matrix algebra that have proven successful in describing the physical world, in such diverse areas as mechanics and perception: the singular value decomposition model, the generalized singular value decomposition comparative model, and the pseudoinverse projection integrative model. These models provide mathematical descriptions of the genetic networks that generate and sense the measured data, where the mathematical variables and operations represent biological reality. The variables, patterns uncovered in the data, correlate with activities of cellular elements such as regulators or transcription factors that drive the measured signals and with cellular states where these elements are active. The operations, such as data reconstruction, rotation, and classification in subspaces of selected patterns, simulate experimental observation of only the cellular programs that these patterns represent. These models are illustrated in analyses of RNA expression data from yeast and human during their cell cycle programs and of DNA-binding data from yeast cell cycle transcription factors and replication initiation proteins. Two alternative pictures of RNA expression oscillations during the cell cycle that emerge from these analyses, which parallel well-known designs of physical oscillators, convey the capacity of the models to elucidate the design principles of cellular systems, as well as to guide the design of synthetic ones. In these analyses, the power of the models to predict previously unknown biological principles is demonstrated with a prediction of a novel mechanism of regulation that correlates DNA replication initiation with cell cycle-regulated RNA transcription in yeast. These models may become the foundation of a future in which biological systems are modeled as physical systems are today.
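
    The SVD model reviewed above decomposes a genes x arrays expression matrix into "eigengenes" (rows of V^T) and "eigenarrays" (columns of U), and operations such as filtering out a pattern are rank-reduced reconstructions. A minimal sketch on synthetic data, not the yeast or human datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 12))        # synthetic genes x arrays matrix

U, s, Vt = np.linalg.svd(E, full_matrices=False)
# "Fraction of expression" captured by each eigengene.
frac = s**2 / np.sum(s**2)
# Filter out the first eigengene, simulating removal of e.g. a
# steady-state or experimental-artifact pattern from the data.
E_filtered = E - s[0] * np.outer(U[:, 0], Vt[0])
# The filtered data are now orthogonal to the removed eigengene.
print(np.allclose(E_filtered @ Vt[0], 0.0))  # True
```

    Correlating the retained eigengenes with known cell-cycle phases, and the eigenarrays with known regulators, is the step that gives the mathematical variables their biological interpretation.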

  18. Integrative omics analysis. A study based on Plasmodium falciparum mRNA and protein data

    PubMed Central

    2014-01-01

    Background: Technological improvements have shifted the focus from data generation to data analysis. The availability of large amounts of data from transcriptomics, proteomics and metabolomics experiments raises new questions concerning suitable integrative analysis methods. We compare three integrative analysis techniques (co-inertia analysis (CIA), generalized singular value decomposition (GSVD) and integrative biclustering (IBC)) by applying them to gene and protein abundance data from the six life cycle stages of Plasmodium falciparum. Co-inertia analysis is an analysis method used to visualize and explore gene and protein data. The generalized singular value decomposition has shown its potential in the analysis of two transcriptome data sets. Integrative biclustering applies biclustering to gene and protein data. Results: Using CIA, we visualize the six life cycle stages of Plasmodium falciparum, as well as GO terms, in a 2D plane and interpret the spatial configuration. With GSVD, we decompose the transcriptomic and proteomic data sets into matrices with biologically meaningful interpretations and explore the processes captured by the data sets. IBC identifies groups of genes, proteins, GO terms and life cycle stages of Plasmodium falciparum. We show method-specific results as well as a network view of the life cycle stages based on the results common to all three methods. Additionally, by combining the results of the three methods, we create a three-fold validated network of life cycle stage specific GO terms: sporozoites are associated with transcription and transport; merozoites with entry into the host cell as well as biosynthetic and metabolic processes; rings with oxidation-reduction processes; trophozoites with glycolysis and energy production; schizonts with antigenic variation and immune response; gametocytes with DNA packaging and mitochondrial transport.
Furthermore, the network connectivity underlines the separation of the intraerythrocytic cycle from the gametocyte and sporozoite stages. Conclusion: Using integrative analysis techniques, we can integrate knowledge from different levels and obtain a wider view of the system under study. The overlap between method-specific and common results is considerable, even though the underlying mathematical assumptions are very different. The three-fold validated network of life cycle stage characteristics of Plasmodium falciparum identified a large number of the known associations from the literature in only one study. PMID:25033389

  19. Determination of Rayleigh wave ellipticity using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli Joseph

    We present a single-station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V), using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 sec the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 sec, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (~2%) and significantly higher (>20%), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e., Love waves, body waves, and tilt noise) in the single station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves.
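    The covariance-plus-SVD step at the heart of the procedure can be sketched numerically. The example below uses idealized synthetic spectral coefficients (not real seismic data): fundamental-mode Rayleigh motion is simulated with vertical and radial components in quadrature, and the SVD of the window-averaged 3-by-3 spectral covariance matrix recovers both the dominance of the organized signal and the H/V ratio. The H/V value and noise level are arbitrary choices for the sketch.

```python
import numpy as np

# Complex spectral coefficients at one frequency bin for (Z, N, E) over
# many windows; propagation is assumed along N, so N is the radial
# component and E carries only weak incoherent noise.
rng = np.random.default_rng(1)
true_hv = 0.8
n_win = 200
src = np.exp(1j * rng.uniform(0, 2 * np.pi, n_win))   # random source phase
Z = src                                                # vertical
N = src * true_hv * np.exp(1j * np.pi / 2)             # radial, 90 deg shift
E = 0.01 * (rng.normal(size=n_win) + 1j * rng.normal(size=n_win))

# 3-by-3 spectral covariance matrix averaged over the windows.
X = np.vstack([Z, N, E])
C = (X @ X.conj().T) / n_win

# A dominant first singular value indicates an organized, polarized field.
U, s, _ = np.linalg.svd(C)
dominance = s[0] / s.sum()

v = U[:, 0]                        # principal polarization vector
hv = abs(v[1]) / abs(v[0])         # horizontal-to-vertical amplitude ratio
dphi = np.angle(v[1] / v[0])       # phase difference radial vs. vertical
```

    When `dominance` is high and `dphi` is near 90°, the window would be accepted as Rayleigh-dominated and `hv` recorded for that frequency bin.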

  20. Determination of Rayleigh wave ellipticity across the Earthscope Transportable Array using single-station and array-based processing of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Workman, Eli; Lin, Fan-Chi; Koper, Keith D.

    2017-01-01

    We present a single station method for the determination of Rayleigh wave ellipticity, or Rayleigh wave horizontal to vertical amplitude ratio (H/V) using Frequency Dependent Polarization Analysis (FDPA). This procedure uses singular value decomposition of 3-by-3 spectral covariance matrices over 1-hr time windows to determine properties of the ambient seismic noise field such as particle motion and dominant wave-type. In FDPA, if the noise is mostly dominated by a primary singular value and the phase difference is roughly 90° between the major horizontal axis and the vertical axis of the corresponding singular vector, we infer that Rayleigh waves are dominant and measure an H/V ratio for that hour and frequency bin. We perform this analysis for all available data from the Earthscope Transportable Array between 2004 and 2014. We compare the observed Rayleigh wave H/V ratios with those previously measured by multicomponent, multistation noise cross-correlation (NCC), as well as classical noise spectrum H/V ratio analysis (NSHV). At 8 s the results from all three methods agree, suggesting that the ambient seismic noise field is Rayleigh wave dominated. Between 10 and 30 s, while the general pattern agrees well, the results from FDPA and NSHV are persistently slightly higher (˜2 per cent) and significantly higher (>20 per cent), respectively, than results from the array-based NCC. This is likely caused by contamination from other wave types (i.e. Love waves, body waves, and tilt noise) in the single station methods, but it could also reflect a small, persistent error in NCC. Additionally, we find that the single station method has difficulty retrieving robust Rayleigh wave H/V ratios within major sedimentary basins, such as the Williston Basin and Mississippi Embayment, where the noise field is likely dominated by reverberating Love waves and tilt noise.

  1. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. 
The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  2. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. 
The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  3. A parsimonious characterization of change in global age-specific and total fertility rates

    PubMed Central

    2018-01-01

    This study aims to understand trends in global fertility from 1950 to 2010 through the analysis of age-specific fertility rates. This approach incorporates both the overall level, as when the total fertility rate is modeled, and different patterns of age-specific fertility to examine the relationship between changes in age-specific fertility and fertility decline. Singular value decomposition is used to capture the variation in age-specific fertility curves while reducing the number of dimensions, allowing curves to be described nearly fully with three parameters. Regional patterns and trends over time are evident in parameter values, suggesting this method provides a useful tool for considering fertility decline globally. The second and third parameters were analyzed using model-based clustering to examine patterns of age-specific fertility over time and place; four clusters were obtained. A country’s demographic transition can be traced through time by membership in the different clusters, and regional patterns in the trajectories through time and with fertility decline are identified. PMID:29377899
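    A small synthetic sketch of the dimension-reduction step (illustrative bell-shaped curves, not the study's data): each row is a country's age-specific fertility curve, and three singular components describe the curves nearly fully, matching the three-parameter summary reported above.

```python
import numpy as np

# Synthetic age-specific fertility curves over ages 15-49 for 40 countries,
# each a scaled bell shape with its own level, modal age, and spread.
rng = np.random.default_rng(2)
ages = np.arange(15, 50)
curves = []
for _ in range(40):
    level = rng.uniform(1.0, 6.0)     # overall fertility level
    peak = rng.uniform(26.0, 30.0)    # modal age of childbearing
    width = rng.uniform(4.5, 6.5)     # spread of the curve
    curves.append(level * np.exp(-0.5 * ((ages - peak) / width) ** 2))
F = np.array(curves)                  # 40 countries x 35 ages

U, s, Vt = np.linalg.svd(F, full_matrices=False)

# Keep three components: each country is then summarized by three numbers.
k = 3
F3 = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(F - F3) / np.linalg.norm(F)
```

    The per-country coordinates `U[:, :3] * s[:3]` play the role of the three parameters that are clustered and traced over time.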

  4. Extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and SVD.

    PubMed

    Bullinaria, John A; Levy, Joseph P

    2012-09-01

    In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors, namely the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD), that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.
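    The pipeline described above can be sketched on a toy corpus (a stand-in for the large corpora the article uses): raw co-occurrence counts are converted to positive pointwise mutual information and then reduced with a truncated SVD. The window size, corpus, and dimensionality here are arbitrary illustrative choices.

```python
import numpy as np

# Toy corpus and a symmetric co-occurrence count matrix with a +/-1 window.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "a cat and a dog ran on the grass").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            C[idx[w], idx[corpus[j]]] += 1

# Positive pointwise mutual information (negative and log(0) cells -> 0).
total = C.sum()
pw = C.sum(axis=1, keepdims=True) / total
pc = C.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    pmi = np.log((C / total) / (pw * pc))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Truncated SVD yields dense low-dimensional semantic vectors.
U, s, Vt = np.linalg.svd(ppmi)
k = 4
vectors = U[:, :k] * s[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_cat_dog = cosine(vectors[idx["cat"]], vectors[idx["dog"]])
```

    On a real corpus, stop-listing and stemming would be applied before counting, and similarities such as `sim_cat_dog` would feed TOEFL-style evaluation tasks.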

  5. Imaging of voids by means of a physical-optics-based shape-reconstruction algorithm.

    PubMed

    Liseno, Angelo; Pierri, Rocco

    2004-06-01

    We analyze the performance of a shape-reconstruction algorithm for the retrieval of voids starting from the electromagnetic scattered field. Such an algorithm exploits the physical optics (PO) approximation to obtain a linear unknown-data relationship and performs inversions by means of the singular-value-decomposition approach. In the case of voids, in addition to a geometrical optics reflection, the presence of the lateral wave phenomenon must be considered. We analyze the effect of the presence of lateral waves on the reconstructions. For the sake of shape reconstruction, the PO algorithm can be regarded as assuming the electric and magnetic fields on the illuminated side to be constant in amplitude and linear in phase, as far as the dependence on frequency is concerned. Therefore we analyze how much the lateral wave phenomenon impairs this assumption, and we show inversions for both a single circular void and two circular voids, for different values of the background permittivity.

  6. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    PubMed

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
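    The storage argument can be illustrated on a generic smooth two-variable function (not the actual two-electron wave function): sampling f(x, y) on a grid and keeping only the leading singular triplets reproduces the function to machine precision at a small fraction of the full storage.

```python
import numpy as np

# A smooth, exactly rank-2 function sampled on an n x n grid:
# f(x, y) = exp(-x^2/2) exp(-y^2/2) (1 + 0.3 x y) is a sum of two products.
n = 200
x = np.linspace(-5.0, 5.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
F = np.exp(-0.5 * X ** 2) * np.exp(-0.5 * Y ** 2) * (1.0 + 0.3 * X * Y)

U, s, Vt = np.linalg.svd(F)

# Keeping two singular triplets reconstructs F essentially exactly.
rank2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
rel_err = np.linalg.norm(F - rank2) / np.linalg.norm(F)

# Storage: two left vectors, two right vectors, and two singular values,
# instead of the full n x n grid.
compression = (2 * n + 2 * n + 2) / (n * n)
```

    In the paper's setting the same idea is applied adaptively in each element, and the low-rank form is kept throughout so the full-rank representation never needs to be materialized.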

  7. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar

    DOE PAGES

    Sen, Satyabrata

    2015-08-04

    We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires a number of secondary measurements as large as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
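    The singular-value shrinkage step mentioned above can be sketched as a soft-thresholding operator on the singular values; the threshold `tau`, matrix sizes, and noise level below are hypothetical choices for illustration.

```python
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M by tau (matrix shrinkage)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-2 "clutter" covariance plus a small diagonal (noise) component,
# mimicking the low-rank-plus-diagonal structure exploited in the paper.
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 2))
L = A @ A.T                       # low-rank positive semidefinite part
M = L + 0.05 * np.eye(30)         # observed covariance

# Shrinkage suppresses the small (noise) singular values and returns a
# genuinely low-rank estimate of the clutter component.
L_hat = svt(M, tau=1.0)
rank_hat = np.linalg.matrix_rank(L_hat, tol=1e-8)
rel_err = np.linalg.norm(L_hat - L) / np.linalg.norm(L)
```

    In the full RMP solver, this shrinkage is applied iteratively together with the trace-minimization constraints rather than in a single pass.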

  8. From Molecules to Cells to Organisms: Understanding Health and Disease with Multidimensional Single-Cell Methods

    NASA Astrophysics Data System (ADS)

    Candia, Julián

    2013-03-01

    The multidimensional nature of many single-cell measurements (e.g. multiple markers measured simultaneously using Fluorescence-Activated Cell Sorting (FACS) technologies) offers unprecedented opportunities to unravel emergent phenomena that are governed by the cooperative action of multiple elements across different scales, from molecules and proteins to cells and organisms. We will discuss an integrated analysis framework to investigate multicolor FACS data from different perspectives: Singular Value Decomposition to achieve an effective dimensional reduction in the data representation, machine learning techniques to separate different patient classes and improve diagnosis, as well as a novel cell-similarity network analysis method to identify cell subpopulations in an unbiased manner. Besides FACS data, this framework is versatile: in this vein, we will demonstrate an application to the multidimensional single-cell shape analysis of healthy and prematurely aged cells.

  9. Rapid determination of particle velocity from space-time images using the Radon transform

    PubMed Central

    Drew, Patrick J.; Blinder, Pablo; Cauwenberghs, Gert; Shih, Andy Y.; Kleinfeld, David

    2016-01-01

    Laser-scanning methods are a means to observe streaming particles, such as the flow of red blood cells in a blood vessel. Typically, particle velocity is extracted from images formed from cyclically repeated line-scan data that is obtained along the center-line of the vessel; motion leads to streaks whose angle is a function of the velocity. Past methods made use of shearing or rotation of the images and a Singular Value Decomposition (SVD) to automatically estimate the average velocity in a temporal window of data. Here we present an alternative method that makes use of the Radon transform to calculate the velocity of streaming particles. We show that this method is over an order of magnitude faster than the SVD-based algorithm and is more robust to noise. PMID:19459038
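    The Radon-based angle estimate can be sketched in pure NumPy (a simple stand-in for the paper's implementation): project the image onto many candidate directions and pick the direction whose projection profile has maximal variance. In this synthetic example the streaks run at 30°, so the variance peaks when the projection axis is perpendicular to them, at 120°.

```python
import numpy as np

# Synthetic streaky space-time image: intensity is constant along the
# 30-degree streak direction, as produced by particles at a fixed speed.
n = 64
yy, xx = np.mgrid[0:n, 0:n].astype(float)
alpha = np.deg2rad(30.0)
image = np.cos(0.5 * (xx * np.sin(alpha) - yy * np.cos(alpha)))

def projection_variance(img, theta):
    # Bin pixels by their coordinate along the projection axis and average;
    # the profile variance is large only when the bins align with streaks.
    r = xx * np.cos(theta) + yy * np.sin(theta)
    bins = np.round(r - r.min()).astype(int)
    sums = np.bincount(bins.ravel(), weights=img.ravel())
    counts = np.maximum(np.bincount(bins.ravel()), 1)
    return (sums / counts).var()

angles = np.deg2rad(np.arange(0.0, 180.0, 1.0))
variances = np.array([projection_variance(image, t) for t in angles])
best_deg = float(np.rad2deg(angles[int(np.argmax(variances))]))
```

    In the application, the recovered streak angle is converted to a particle velocity using the line-scan period and the pixel spacing.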

  10. An FMM-FFT Accelerated SIE Simulator for Analyzing EM Wave Propagation in Mine Environments Loaded With Conductors

    PubMed Central

    Sheng, Weitian; Zhou, Chenming; Liu, Yang; Bagci, Hakan; Michielssen, Eric

    2018-01-01

    A fast and memory efficient three-dimensional full-wave simulator for analyzing electromagnetic (EM) wave propagation in electrically large and realistic mine tunnels/galleries loaded with conductors is proposed. The simulator relies on Muller and combined field surface integral equations (SIEs) to account for scattering from mine walls and conductors, respectively. During the iterative solution of the system of SIEs, the simulator uses a fast multipole method-fast Fourier transform (FMM-FFT) scheme to reduce CPU and memory requirements. The memory requirement is further reduced by compressing large data structures via singular value and Tucker decompositions. The efficiency, accuracy, and real-world applicability of the simulator are demonstrated through characterization of EM wave propagation in electrically large mine tunnels/galleries loaded with conducting cables and mine carts. PMID:29726545
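    The Tucker-style compression mentioned above can be sketched with a truncated higher-order SVD (HOSVD), computed mode by mode with ordinary matrix SVDs; the tensor sizes and ranks below are arbitrary illustrative choices, not the simulator's data structures.

```python
import numpy as np

# A synthetic third-order tensor with exact multilinear rank (3, 4, 2).
rng = np.random.default_rng(8)
ranks = (3, 4, 2)
core_true = rng.normal(size=ranks)
factors_true = [rng.normal(size=(20, r)) for r in ranks]
T = np.einsum('abc,ia,jb,kc->ijk', core_true, *factors_true)

def hosvd(T, ranks):
    # One matrix SVD per mode of the unfolded tensor gives the factors;
    # projecting T onto them gives the (small) core tensor.
    Us = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        Us.append(U[:, :r])
    core = np.einsum('ijk,ia,jb,kc->abc', T, *Us)
    return core, Us

core, Us = hosvd(T, ranks)
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *Us)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)

# Stored numbers: small core plus three thin factors vs. the full tensor.
stored = core.size + sum(U.size for U in Us)
compression = stored / T.size
```

    The same principle lets the simulator hold its largest data structures in compressed form and expand them only on the fly.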

  11. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel used for linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
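    The factorization idea can be sketched on a toy feature matrix whose columns are nonnegative mixtures of a few representative features. As a stand-in for the paper's SVD-guided method, the sketch below estimates the number of representative features from the singular values and then runs plain multiplicative-update NMF.

```python
import numpy as np

# Toy M x N feature matrix: N "pixels", each a convex mixture of r
# representative M-dimensional features (all values nonnegative).
rng = np.random.default_rng(4)
M, N, r = 20, 300, 3
W_true = rng.uniform(0.5, 1.5, size=(M, r))
H_true = rng.dirichlet(np.ones(r), size=N).T      # mixture weights per pixel
F = W_true @ H_true

# Step 1: the number of representative features is the numerical rank,
# read off from the singular values.
s = np.linalg.svd(F, compute_uv=False)
r_hat = int((s > 1e-6 * s[0]).sum())

# Step 2: nonnegative factorization F ~ W H by multiplicative updates.
W = rng.uniform(0.1, 1.0, size=(M, r_hat))
H = rng.uniform(0.1, 1.0, size=(r_hat, N))
for _ in range(500):
    H *= (W.T @ F) / (W.T @ W @ H + 1e-12)
    W *= (F @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(F - W @ H) / np.linalg.norm(F)
```

    In the segmentation setting, the columns of H (one weight vector per pixel) are what get turned into region labels.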

  12. Low Dimensional Analysis of Wing Surface Morphology in Hummingbird Free Flight

    NASA Astrophysics Data System (ADS)

    Shallcross, Gregory; Ren, Yan; Liu, Geng; Dong, Haibo; Tobalske, Bret

    2015-11-01

    Surface morphing in flapping wings is a hallmark of bird flight. In the current work, the role of dynamic wing morphing of a free flying hummingbird is studied in detail. A 3D image-based surface reconstruction method is used to obtain the kinematics and deformation of hummingbird wings from high-quality high-speed videos. The observed wing surface morphing is highly complex, and a number of modeling methods including singular value decomposition (SVD) are used to obtain the fundamental kinematic modes with distinct motion features. Their aerodynamic roles are investigated by conducting immersed-boundary-method based flow simulations. The results show that the chord-wise deformation modes play key roles in the attachment of the leading-edge vortex, thus improving the performance of the flapping wings. This work is supported by NSF CBET-1313217 and AFOSR FA9550-12-1-0071.

  13. Motions, efforts and actuations in constrained dynamic systems: a multi-link open-chain example

    NASA Astrophysics Data System (ADS)

    Duke Perreira, N.

    1999-08-01

    The effort-motion method, which describes the dynamics of open- and closed-chain topologies of rigid bodies interconnected with revolute and prismatic pairs, is interpreted geometrically. Systems are identified for which the simultaneous control of forces and velocities is desirable, and a representative open-chain system is selected for use in the ensuing analysis. Gauge invariant transformations are used to recast the commonly used kinetic and kinematic equations into a dimensional gauge invariant form. Constraint elimination techniques based on singular value decompositions then recast the invariant equations into orthogonal and reciprocal sets of motion and effort equations written in state variable form. The ideal actuation is found that simultaneously achieves the obtainable portions of the desired constraining efforts and motions. The performance of using the actuation closest to the ideal actuation is then evaluated.
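    The constraint-elimination step can be sketched in its simplest form (a generic linear constraint, not the paper's gauge-invariant formulation): the right singular vectors of the constraint matrix associated with zero singular values span the space of admissible motions.

```python
import numpy as np

# Two independent linear constraints C @ qdot = 0 on six coordinates.
rng = np.random.default_rng(9)
C = rng.normal(size=(2, 6))

U, s, Vt = np.linalg.svd(C)
rank = int((s > 1e-10 * s[0]).sum())
N = Vt[rank:].T                 # 6 x 4 basis for the admissible motions

# Any velocity of the form N @ u satisfies the constraints exactly, so
# the dynamics can be rewritten in the reduced coordinates u.
u = rng.normal(size=N.shape[1])
qdot = N @ u
violation = np.linalg.norm(C @ qdot)
```

    The complementary rows `Vt[:rank]` span the constrained (effort) directions, which is how the orthogonal and reciprocal equation sets arise.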

  14. Toward a More Robust Pruning Procedure for MLP Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance in parameter estimation and network prediction. The widespread utilization of neural networks in modeling highlights an issue in human factors: the procedure of building neural models should find an appropriate level of model complexity in a more or less automatic fashion, to make it less prone to human subjectivity. In this paper we present a Singular Value Decomposition-based node elimination technique and an enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.
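    The SVD-based node elimination idea can be illustrated generically (this is not the authors' exact procedure): collect the hidden-unit activations over the training set and count near-zero singular values; each one signals a redundant unit that a pruning pass can remove.

```python
import numpy as np

# Hidden activations of an 8-unit layer on 500 samples of 3-dim input.
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 3))
W1 = rng.normal(size=(3, 8))
H = np.tanh(X @ W1)

# Introduce redundancy: two units exactly duplicate two others.
H[:, 6] = H[:, 0]
H[:, 7] = H[:, 1]

# The numerical rank of the activation matrix bounds the number of units
# actually needed; the deficit is the number of prunable units.
s = np.linalg.svd(H, compute_uv=False)
eff_rank = int((s > 1e-8 * s[0]).sum())
n_prunable = H.shape[1] - eff_rank
```

    A full pruning engine, like the one in the paper, would also repair the downstream weights (for example with an Optimal Brain Surgeon step) after deleting the redundant units.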

  15. Using linear algebra for protein structural comparison and classification

    PubMed Central

    2009-01-01

    In this article, we describe a novel methodology to extract semantic characteristics from protein structures using linear algebra in order to compose structural signature vectors which may be used efficiently to compare and classify protein structures into fold families. These signatures are built from the pattern of hydrophobic intrachain interactions using Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI) techniques. Considering proteins as documents and contacts as terms, we have built a retrieval system which is able to find conserved contacts in samples of myoglobin fold family and to retrieve these proteins among proteins of varied folds with precision of up to 80%. The classifier is a web tool available at our laboratory website. Users can search for similar chains from a specific PDB, view and compare their contact maps and browse their structures using a JMol plug-in. PMID:21637532

  16. Using linear algebra for protein structural comparison and classification.

    PubMed

    Gomide, Janaína; Melo-Minardi, Raquel; Dos Santos, Marcos Augusto; Neshich, Goran; Meira, Wagner; Lopes, Júlio César; Santoro, Marcelo

    2009-07-01

    In this article, we describe a novel methodology to extract semantic characteristics from protein structures using linear algebra in order to compose structural signature vectors which may be used efficiently to compare and classify protein structures into fold families. These signatures are built from the pattern of hydrophobic intrachain interactions using Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI) techniques. Considering proteins as documents and contacts as terms, we have built a retrieval system which is able to find conserved contacts in samples of myoglobin fold family and to retrieve these proteins among proteins of varied folds with precision of up to 80%. The classifier is a web tool available at our laboratory website. Users can search for similar chains from a specific PDB, view and compare their contact maps and browse their structures using a JMol plug-in.

  17. Observation of Electronic Excitation Transfer Through Light Harvesting Complex II Using Two-Dimensional Electronic-Vibrational Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Nicholas H. C.; Gruenke, Natalie L.; Oliver, Thomas A. A.

    Light-harvesting complex II (LHCII) serves a central role in light harvesting for oxygenic photosynthesis and is arguably the most important photosynthetic antenna complex. In this article, we present two-dimensional electronic–vibrational (2DEV) spectra of LHCII isolated from spinach, demonstrating the possibility of using this technique to track the transfer of electronic excitation energy between specific pigments within the complex. We assign the spectral bands via comparison with the 2DEV spectra of the isolated chromophores, chlorophyll a and b, and present evidence that excitation energy transfer between the pigments of the complex is observed in these spectra. Lastly, we analyze the essential components of the 2DEV spectra using singular value decomposition, which makes it possible to reveal the relaxation pathways within this complex.

  18. Joint inversion of fundamental and higher mode Rayleigh waves

    USGS Publications Warehouse

    Luo, Y.-H.; Xia, J.-H.; Liu, J.-P.; Liu, Q.-S.

    2008-01-01

    In this paper, we analyze the characteristics of the phase velocity of fundamental and higher mode Rayleigh waves in a six-layer earth model. The results show that the fundamental mode is more sensitive to the shear velocities of shallow layers (< 7 m) and concentrated in a very narrow band (around 18 Hz), while higher modes are more sensitive to the parameters of relatively deeper layers and distributed over a wider frequency band. These properties provide a foundation for using multi-mode joint inversion to define S-wave velocity. Inversion results of both synthetic data and a real-world example demonstrate that joint inversion of fundamental and higher mode Rayleigh waves with the damped least squares method and the SVD (Singular Value Decomposition) technique can effectively reduce the ambiguity and improve the accuracy of inverted S-wave velocities.
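    The damped least squares solve used in the joint inversion can be written directly in terms of the SVD; the example below is a generic sketch with a synthetic, nearly ill-posed linear system rather than a surface-wave kernel. The damping factor filters the small singular values that make the inversion ambiguous.

```python
import numpy as np

def damped_lstsq(G, d, lam):
    # x = V diag(s_i / (s_i^2 + lam^2)) U^T d: for lam = 0 this is the
    # pseudoinverse solution; lam > 0 tempers the small singular values.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    filt = s / (s ** 2 + lam ** 2)
    return Vt.T @ (filt * (U.T @ d))

# Synthetic system with two nearly dependent columns (an ambiguous model
# direction), plus a little data noise.
rng = np.random.default_rng(6)
G = rng.normal(size=(60, 10))
G[:, 9] = G[:, 8] + 1e-6 * rng.normal(size=60)
x_true = rng.normal(size=10)
d = G @ x_true + 0.01 * rng.normal(size=60)

x_undamped = damped_lstsq(G, d, lam=0.0)
x_damped = damped_lstsq(G, d, lam=0.1)
```

    Damping shrinks every component of the solution, since the filter s/(s² + λ²) never exceeds 1/s, which suppresses the wild oscillations that near-dependent columns would otherwise inject.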

  19. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

    Large reflector antennas are widely used in radars, satellite communication, radio astronomy, and so on. The rapid developments in these fields have created demand for better performance and higher surface accuracy. However, low accuracy and low efficiency are common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far-field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field value is derived. Then a linear system is constructed between the panel adjustment vector and the far-field pattern. Using the method of Singular Value Decomposition (SVD), the adjustment values for all panel adjusters are obtained by solving the linear equations. An experiment is conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test are similar, which shows that the presented method is feasible. Moreover, the discussion of validation shows that the method can be used for many reflector shapes. The proposed research provides guidance for adjusting surface panels efficiently and accurately.

  20. Decomposition of algebraic sets and applications to weak centers of cubic systems

    NASA Astrophysics Data System (ADS)

    Chen, Xingwu; Zhang, Weinian

    2009-10-01

    There are many methods, such as Gröbner basis, characteristic set and resultant, for computing an algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulting from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.

  1. A batch sliding window method for local singularity mapping and its application for geochemical anomaly identification

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang

    2016-05-01

    In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared with the traditional sliding window (SW) technique, which suffers from an empirically predetermined fixed maximum window size and the outlier sensitivity of the least-squares (LS) linear regression method, the BSW based singularity mapping approach can automatically determine the optimal size of the largest window for each estimated position and utilizes robust linear regression (RLR), which is insensitive to outlier values. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW based singularity mapping approach. The results show that the BSW approach can improve the accuracy of the calculated singularity exponent values owing to the determination of the optimal maximum window size. The use of the RLR method in the BSW approach smooths the distribution of singularity index values, with few of the highly fluctuating, noise-like values that usually make a singularity map rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomalies and known tin polymetallic deposits. The target areas within high tin geochemical anomalies probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly areas that show strong tin geochemical anomalies but in which no tin polymetallic deposits have yet been found.

  2. Scope insensitivity in helping decisions: Is it a matter of culture and values?

    PubMed

    Kogut, Tehila; Slovic, Paul; Västfjäll, Daniel

    2015-12-01

    The singularity effect of identifiable victims refers to people's greater willingness to help a single concrete victim compared with a group of victims experiencing the same need. We present 3 studies exploring values and cultural sources of this effect. In the first study, the singularity effect was found only among Western Israelis and not among Bedouin participants (a more collectivist group). In Study 2, individuals with higher collectivist values were more likely to contribute to a group of victims. Finally, the third study demonstrates a causal relationship between collectivist values and the singularity effect by showing that enhancing people's collectivist values using a priming manipulation produces similar donations to single victims and groups. Moreover, participants' collectivist preferences mediated the interaction between the priming conditions and singularity of the recipient. Implications for several areas of psychology and ways to enhance caring for groups in need are discussed.

  3. Predicting breast cancer using an expression values weighted clinical classifier.

    PubMed

    Thomas, Minta; De Brabanter, Kris; Suykens, Johan A K; De Moor, Bart

    2014-12-31

    Clinical data, such as patient history, laboratory analysis, ultrasound parameters-which are the basis of day-to-day clinical decision support-are often used to guide the clinical management of cancer in the presence of microarray data. Several data fusion techniques are available to integrate genomics or proteomics data, but only a few studies have created a single prediction model using both gene expression and clinical data. These studies often remain inconclusive regarding an obtained improvement in prediction performance. To improve clinical management, these data should be fully exploited. This requires efficient algorithms to integrate these data sets and design a final classifier. LS-SVM classifiers and generalized eigenvalue/singular value decompositions are successfully used in many bioinformatics applications for prediction tasks. While bringing up the benefits of these two techniques, we propose a machine learning approach, a weighted LS-SVM classifier to integrate two data sources: microarray and clinical parameters. We compared and evaluated the proposed methods on five breast cancer case studies. Compared to LS-SVM classifier on individual data sets, generalized eigenvalue decomposition (GEVD) and kernel GEVD, the proposed weighted LS-SVM classifier offers good prediction performance, in terms of test area under ROC Curve (AUC), on all breast cancer case studies. Thus a clinical classifier weighted with microarray data set results in significantly improved diagnosis, prognosis and prediction responses to therapy. The proposed model has been shown as a promising mathematical framework in both data fusion and non-linear classification problems.

  4. Simultaneous and independent optical impairments monitoring using singular spectrum analysis of asynchronously sampled signal amplitudes

    NASA Astrophysics Data System (ADS)

    Guesmi, Latifa; Menif, Mourad

    2015-09-01

    Optical performance monitoring (OPM) has become an important topic in high-speed optical communication networks. In this paper, a novel OPM technique based on a newly elaborated computational approach of singular spectrum analysis (SSA) for time series prediction is presented. Indeed, various optical impairments, including chromatic dispersion (CD), polarization mode dispersion (PMD) and amplified spontaneous emission (ASE) noise, are major factors limiting the quality of transmitted data in systems with data rates larger than 40 Gbit/s. The proposed technique performs independent and simultaneous multi-impairment monitoring using SSA for time series analysis and forecasting. SSA, which is based on the singular value decomposition (SVD), has proven useful in the temporal analysis of short and noisy time series in several fields. Also, advanced optical modulation formats (100 Gbit/s non-return-to-zero dual-polarization quadrature phase shift keying (NRZ-DP-QPSK) and 160 Gbit/s DP-16 quadrature amplitude modulation (DP-16QAM)) offering high spectral efficiencies have been successfully employed by analyzing their asynchronously sampled amplitudes. The simulated results show that our method is efficient for CD, first-order PMD, Q-factor and OSNR monitoring, enabling large monitoring ranges: CD in the range of 170-1700 ps/nm·km and 170-1110 ps/nm·km for 100 Gbit/s NRZ-DP-QPSK and 160 Gbit/s DP-16QAM respectively, and DGD up to 20 ps. We can accurately monitor the OSNR in the range of 10-40 dB, with a monitoring error of less than 1 dB in the presence of large accumulated CD.

  5. A numerical solution of a singular boundary value problem arising in boundary layer theory.

    PubMed

    Hu, Jiancheng

    2016-01-01

    In this paper, a second-order nonlinear singular boundary value problem is presented, which is equivalent to the well-known Falkner-Skan equation. The one-dimensional third-order boundary value problem on the interval [Formula: see text] is equivalently transformed into a second-order boundary value problem on the finite interval [Formula: see text]. The finite difference method is utilized to solve the singular boundary value problem, with significantly less computational effort than other numerical methods. The numerical solutions obtained by the finite difference method are in agreement with those obtained by previous authors.

  6. A numerical scheme based on radial basis function finite difference (RBF-FD) technique for solving the high-dimensional nonlinear Schrödinger equations using an explicit time discretization: Runge-Kutta method

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-08-01

    In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is the approximation of the required derivatives based on the finite difference technique at each local-support domain Ωi. At each Ωi, we solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (the interpolation matrix). This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied. This algorithm computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on the fourth-order Runge-Kutta formula is applied for approximating the time variable; it also decreases the computational cost at each time step, since no nonlinear system must be solved. On the other hand, to compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is considered for the studied model. Our results demonstrate the ability of the present approach to solve the applicable model investigated in the current research work.
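    The SVD-based condition-number check attributed to Sarra (2012) above amounts to taking the ratio of the extreme singular values; a minimal sketch with an arbitrary small matrix chosen only for illustration:

    ```python
    import numpy as np

    # Minimal sketch of the SVD condition-number check described above;
    # the matrix A is arbitrary, chosen only for illustration.
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    kappa = sigma[0] / sigma[-1]                # largest / smallest
    ```

    The result agrees with `np.linalg.cond(A)` (which uses the same 2-norm ratio by default); in the shape-parameter search, `kappa` would be compared against a target conditioning range.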

  7. Singularities of Three-Layered Complex-Valued Neural Networks With Split Activation Function.

    PubMed

    Kobayashi, Masaki

    2018-05-01

    There are three important concepts related to learning processes in neural networks: reducibility, nonminimality, and singularity. Although the definitions of these three concepts differ, they are equivalent in real-valued neural networks. This is also true of complex-valued neural networks (CVNNs) with hidden neurons not employing biases. The situation of CVNNs with hidden neurons employing biases, however, is very complicated. Exceptional reducibility was found, and it was shown that reducibility and nonminimality are not the same. Irreducibility consists of minimality and exceptional reducibility. The relationship between minimality and singularity has not yet been established. In this paper, we describe our surprising finding that minimality and singularity are independent. We also provide several examples based on exceptional reducibility.

  8. Integrable mappings and the notion of anticonfinement

    NASA Astrophysics Data System (ADS)

    Mase, T.; Willox, R.; Ramani, A.; Grammaticos, B.

    2018-06-01

    We examine the notion of anticonfinement and the role it has to play in the singularity analysis of discrete systems. A singularity is said to be anticonfined if singular values continue to arise indefinitely for the forward and backward iterations of a mapping, with only a finite number of iterates taking regular values in between. We show through several concrete examples that the behaviour of some anticonfined singularities is strongly related to the integrability properties of the discrete mappings in which they arise, and we explain how to use this information to decide on the integrability or non-integrability of the mapping.

  9. Automated Identification of MHD Mode Bifurcation and Locking in Tokamaks

    NASA Astrophysics Data System (ADS)

    Riquezes, J. D.; Sabbagh, S. A.; Park, Y. S.; Bell, R. E.; Morton, L. A.

    2017-10-01

    Disruption avoidance is critical in reactor-scale tokamaks such as ITER to maintain steady plasma operation and avoid damage to device components. A key physical event chain that leads to disruptions is the appearance of rotating MHD modes, their slowing by resonant field drag mechanisms, and their locking. An algorithm has been developed that automatically detects bifurcation of the mode toroidal rotation frequency due to loss of torque balance under resonant braking, and mode locking for a set of shots using spectral decomposition. The present research examines data from NSTX, NSTX-U and KSTAR plasmas which differ significantly in aspect ratio (ranging from A = 1.3 - 3.5). The research aims to examine and compare the effectiveness of different algorithms for toroidal mode number discrimination, such as phase matching and singular value decomposition approaches, and to examine potential differences related to machine aspect ratio (e.g. mode eigenfunction shape variation). Simple theoretical models will be compared to the dynamics found. Main goals are to detect or potentially forecast the event chain early during a discharge. This would serve as a cue to engage active mode control or a controlled plasma shutdown. Supported by US DOE Contracts DE-SC0016614 and DE-AC02-09CH11466.

  10. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete-time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. The directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction, which suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs best in overall preference. In addition, there is a trade-off between reproduction performance and external radiation.
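    Truncated-SVD regularization of an ill-conditioned inversion, of the kind used in the deconvolution step above, can be sketched as follows; the matrix and the tolerance are illustrative, not taken from the paper:

    ```python
    import numpy as np

    # Hedged sketch of truncated-SVD (TSVD) deconvolution: discard singular
    # values below a relative tolerance when inverting an ill-conditioned
    # system H x = y. The matrix and tolerance are illustrative only.
    def tsvd_solve(H, y, rel_tol=1e-3):
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        keep = s > rel_tol * s[0]                   # truncation criterion
        return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

    H = np.array([[1.0, 1.0],
                  [1.0, 1.0 + 1e-8]])               # nearly rank-deficient
    y = np.array([2.0, 2.0])
    x = tsvd_solve(H, y)                            # stable minimum-norm solution
    ```

    A naive inverse of `H` would amplify noise by roughly `1/1e-8`; truncation trades a small bias for that stability, which is the usual motivation for TSVD in pressure matching.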

  11. Stabilizing bidirectional associative memory with Principles in Independent Component Analysis and Null Space (PICANS)

    NASA Astrophysics Data System (ADS)

    LaRue, James P.; Luzanov, Yuriy

    2013-05-01

    A new extension to the way in which Bidirectional Associative Memory (BAM) algorithms are implemented is presented here. We show that by utilizing the singular value decomposition (SVD) and integrating principles of independent component analysis (ICA) into the nullspace (NS), we have created a novel approach to mitigating spurious attractors. We demonstrate this with two applications. The first application utilizes a one-layer association, while the second is modeled after the several hierarchical associations of ventral pathways. The first application details the way in which we manage the associations in terms of matrices. The second application takes what we have learned from the first example and applies it to a cascade of a convolutional neural network (CNN) and a perceptron, this being our signal-processing model of the ventral pathways, i.e., visual systems.

  12. Complex numbers in chemometrics: examples from multivariate impedance measurements on lipid monolayers.

    PubMed

    Geladi, Paul; Nelson, Andrew; Lindholm-Sethson, Britta

    2007-07-09

    Electrical impedance measurements yield multivariate complex-number data. Two examples of multivariate electrical impedance data measured on lipid monolayers in different solutions give rise to matrices (16x50 and 38x50) of complex numbers. Multivariate data analysis by principal component analysis (PCA) or singular value decomposition (SVD) can be applied to complex data, and the necessary equations are given. The scores and loadings obtained are vectors of complex numbers. It is shown that complex-number PCA and SVD are better at concentrating information in a few components than the naïve juxtaposition method, and that Argand diagrams can replace score and loading plots. Different concentrations of Magainin and Gramicidin A give different responses, and the role of the electrolyte medium can also be studied. An interaction of Gramicidin A in the solution with the monolayer over time can be observed.
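    NumPy's SVD accepts complex matrices directly, so complex-number PCA of the kind described above needs no special machinery. A sketch on synthetic data matching only the 16x50 shape mentioned in the abstract:

    ```python
    import numpy as np

    # Sketch of complex-valued PCA via SVD; NumPy's SVD accepts complex
    # matrices directly. The data matrix is synthetic, matching only the
    # 16x50 shape mentioned above.
    rng = np.random.default_rng(1)
    Z = rng.normal(size=(16, 50)) + 1j * rng.normal(size=(16, 50))
    Zc = Z - Z.mean(axis=0)                 # column-center before PCA
    U, s, Vh = np.linalg.svd(Zc, full_matrices=False)
    scores = U * s                          # complex scores (Argand-diagram ready)
    loadings = Vh.conj().T                  # complex loadings
    ```

    The centered data factor exactly as `scores @ Vh`, and each column of `scores` can be plotted in the complex plane in place of the usual real-valued score plot.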

  13. iQIST v0.7: An open source continuous-time quantum Monte Carlo impurity solver toolkit

    NASA Astrophysics Data System (ADS)

    Huang, Li

    2017-12-01

    In this paper, we present a new version of the iQIST software package, which is capable of solving various quantum impurity models by using the hybridization expansion (or strong coupling expansion) continuous-time quantum Monte Carlo algorithm. In the revised version, the software architecture is completely redesigned. A new basis (intermediate representation or singular value decomposition representation) for the single-particle and two-particle Green's functions is introduced. Many useful physical observables are added, such as the charge susceptibility, fidelity susceptibility, Binder cumulant, and autocorrelation time. In particular, we optimize the measurement of the two-particle Green's functions. Both the particle-hole and particle-particle channels are supported. In addition, the block structure of the two-particle Green's functions is exploited to accelerate the calculation. Finally, we fix some known bugs and limitations. The computational efficiency of the code is greatly enhanced.

  14. RF tomography of metallic objects in free space: preliminary results

    NASA Astrophysics Data System (ADS)

    Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher

    2015-05-01

    RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Lab. To develop an RF tomographic imaging system for the facility, preliminary experiments have been performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomographic reconstruction. The traditional matched filter algorithm and the truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and the scatter points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of the object shape, which is not achievable with the matched filter and truncated SVD algorithms.

  15. In-flight alignment using H ∞ filter for strapdown INS on aircraft.

    PubMed

    Pei, Fu-Jun; Liu, Xuan; Zhu, Li

    2014-01-01

    In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with these disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is discussed using piece-wise constant system (PWCS) theory, and the observable degree is computed using singular value decomposition (SVD) theory. It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. Then an H ∞ filter is designed to resolve the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm can reach better accuracy under dynamic disturbance conditions.

  16. Eigenspace-based fuzzy c-means for sensing trending topics in Twitter

    NASA Astrophysics Data System (ADS)

    Muliawati, T.; Murfi, H.

    2017-07-01

    As information and communication technology has developed, information can increasingly be obtained through social media such as Twitter. The enormous number of internet users has triggered a fast and large data flow, making manual analysis difficult or even impossible. Automated methods for data analysis are needed, one of which is topic detection and tracking. An alternative to latent Dirichlet allocation (LDA) is a soft clustering approach using Fuzzy C-Means (FCM). FCM meets the assumption that a document may consist of several topics. However, FCM works well on low-dimensional data but fails on high-dimensional data. Therefore, we propose an approach in which FCM works on low-dimensional data obtained by reducing the original data using singular value decomposition (SVD). Our simulations show that this approach gives better accuracy in terms of topic recall than LDA for sensing trending topics in Twitter about an event.
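    The SVD reduction step that precedes the clustering can be sketched as follows; the matrix sizes and the target dimension `k` are illustrative placeholders, and the FCM step itself is omitted:

    ```python
    import numpy as np

    # Sketch of the SVD dimensionality-reduction step before fuzzy clustering;
    # matrix sizes and k are illustrative placeholders, and the FCM step
    # itself is omitted.
    rng = np.random.default_rng(2)
    X = rng.random((100, 500))              # hypothetical document-term matrix
    k = 10                                  # target latent dimension
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_reduced = U[:, :k] * s[:k]            # documents in the k-dim latent space
    ```

    Clustering then operates on the 100 x k matrix `X_reduced` instead of the original 100 x 500 data, which is the low-dimensional setting in which FCM behaves well.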

  17. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced. The whole process is as follows: First, rectangular feature templates are constructed, centered on the extracted Harris corners in the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the specific affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
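    A common way to recover a transform from matched feature points via SVD is the Kabsch/Procrustes solution. The paper estimates a full affine transform, so this is a simplified, hypothetical rigid (rotation + translation) sketch with synthetic points, not the authors' algorithm:

    ```python
    import numpy as np

    # Hypothetical sketch of SVD-based point-set registration (Kabsch /
    # Procrustes). The paper estimates a full affine transform; this
    # simplified rigid case uses synthetic points.
    def kabsch(P, Q):
        """Return R, t such that Q ≈ P @ R.T + t."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = Q.mean(axis=0) - P.mean(axis=0) @ R.T
        return R, t

    theta = np.deg2rad(10.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    P = np.random.default_rng(3).random((20, 2))    # synthetic feature points
    Q = P @ R_true.T + np.array([5.0, -2.0])        # rotated + translated copy
    R, t = kabsch(P, Q)
    ```

    The SVD of the cross-covariance `Pc.T @ Qc` yields the rotation that best aligns the two point sets in the least-squares sense, with the determinant check excluding improper (reflecting) solutions.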

  18. Steganography in arrhythmic electrocardiogram signal.

    PubMed

    Edward Jero, S; Ramu, Palaniappan; Ramakrishnan, S

    2015-08-01

    Security and privacy of patient data are vital requirements during the exchange/storage of medical information over a communication network. Steganography hides patient data in a cover signal to prevent unauthenticated access during data transfer. This study evaluates the performance of ECG steganography to ensure secure transmission of patient data, where an abnormal ECG signal is used as the cover signal. The novelty of this work is to hide patient data in the two-dimensional matrix of an abnormal ECG signal using a Discrete Wavelet Transform and Singular Value Decomposition based steganography method. A 2D ECG is constructed according to the Tompkins QRS detection algorithm. The missed R peaks are computed using the RR interval during 2D conversion. The abnormal ECG signals are obtained from the MIT-BIH arrhythmia database. Metrics such as Peak Signal to Noise Ratio, Percentage Residual Difference, Kullback-Leibler distance and Bit Error Rate are used to evaluate the performance of the proposed approach.

  19. Validation of PEP-II Resonantly Excited Turn-by-Turn BPM Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Yiton T.; Cai, Yunhai; Colocho, William.

    2007-06-28

    For optics measurement and modeling of the PEP-II electron (HER) and positron (LER) storage rings, we have been doing well with MIA [1], which requires analyzing turn-by-turn Beam Position Monitor (BPM) data that are resonantly excited at the horizontal, vertical, and longitudinal tunes. However, in anticipation that certain BPM buttons and even pins in the PEP-II IR region would be missing for the run starting in January 2007, we had been developing a data validation process to reduce the effect of the reduced BPM data accuracy on PEP-II optics measurement and modeling. Besides the routine process for ranking BPM noise level through data correlation among BPMs with a singular-value decomposition (SVD), we could also check BPM data symplecticity by comparing the invariant ratios. Results from PEP-II measurements will be presented.

  20. a Unified Matrix Polynomial Approach to Modal Identification

    NASA Astrophysics Data System (ADS)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  1. Time evolution of two holes in t - J chains with anisotropic couplings

    NASA Astrophysics Data System (ADS)

    Manmana, Salvatore R.; Thyen, Holger; Köhler, Thomas; Kramer, Stephan C.

    Using time-dependent Matrix Product State (MPS) methods, we study the real-time evolution of hole excitations in t-J chains close to filling n = 1. The dynamics in 'standard' t-J chains with SU(2) invariant spin couplings is compared to that obtained when introducing anisotropic, XXZ-type spin interactions as realizable, e.g., by ultracold polar molecules on optical lattices. The simulations are performed with MPS implementations based on the usual singular value decomposition (SVD) as well as ones using the adaptive cross approximation (ACA) instead. The ACA can be seen as an iterative approach to SVD which is often used, e.g., in the context of finite-element methods, leading to a substantial speedup. A comparison of the performance of both algorithms in the MPS context is discussed. Financial support via DFG through CRC 1073 (''Atomic scale control of energy conversion''), project B03, is gratefully acknowledged.

  2. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2018-02-23

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm.

  3. SVD-aided pseudo principal-component analysis: A new method to speed up and improve determination of the optimum kinetic model from time-resolved data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oang, Key Young; Yang, Cheolhee; Muniyappan, Srinivasan

    Determination of the optimum kinetic model is an essential prerequisite for characterizing the dynamics and mechanism of a reaction. Here, we propose a simple method, termed singular value decomposition-aided pseudo principal-component analysis (SAPPA), to facilitate determination of the optimum kinetic model from time-resolved data by bypassing any need to examine candidate kinetic models. We demonstrate the wide applicability of SAPPA by examining three different sets of experimental time-resolved data and show that SAPPA can efficiently determine the optimum kinetic model. In addition, the results of SAPPA for both time-resolved X-ray solution scattering (TRXSS) and transient absorption (TA) data of the same protein reveal that global structural changes of the protein, which are probed by TRXSS, may occur more slowly than local structural changes around the chromophore, which are probed by TA spectroscopy.

  4. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multi-parameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.

  5. The stratospheric QBO signal in the NCEP reanalysis, 1958-2001

    NASA Astrophysics Data System (ADS)

    Ribera, Pedro; Gallego, David; Peña-Ortiz, Cristina; Gimeno, Luis; Garcia-Herrera, Ricardo; Hernandez, Emiliano; Calvo, Natalia

    2003-07-01

    The spatiotemporal evolution of the zonal wind in the stratosphere is analyzed based on the use of the NCEP reanalysis (1958-2001). MultiTaper Method-Singular Value Decomposition (MTM-SVD), a frequency-domain analysis method, is applied to isolate significant spatially-coherent variability with narrowband oscillatory character. A quasibiennial oscillation is detected as the most intense coherent signal in the stratosphere, the signal being less intense in the lower levels. There is a clear downward propagation of the signal with time at low latitudes, not evident at mid and high latitudes. There are differences in the behavior of the signal over both hemispheres, being much weaker over the SH. In the NH an anomaly in the zonal wind field, in phase with the equatorial signal, is detected at approximately 60°N. Two different areas at subtropical latitudes are detected to be characterized by wind anomalies opposed to that of the equator.

  6. A Feasibility Study on a Parallel Mechanism for Examining the Space Shuttle Orbiter Payload Bay Radiators

    NASA Technical Reports Server (NTRS)

    Roberts, Rodney G.; LopezdelCastillo, Eduardo

    1996-01-01

    The goal of the project was to develop the necessary analysis tools for a feasibility study of a cable-suspended robot system for examining the space shuttle orbiter payload bay radiators. These tools were developed to address design issues such as workspace size, tension requirements on the cables, the necessary accuracy and resolution requirements, and the stiffness and movement requirements of the system. This report describes the mathematical models for studying the inverse kinematics, statics, and stiffness of the robot. Each model is described by a matrix. The manipulator Jacobian was also related to the stiffness matrix, which characterized the stiffness of the system. Analysis tools were then developed based on the singular value decomposition (SVD) of the corresponding matrices. It was demonstrated how the SVD can be used to quantify the robot's performance and to provide insight into different design issues.
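The way the SVD of a manipulator Jacobian quantifies performance can be sketched on a simple case; the two-link planar arm below, with its link lengths and joint angles, is a hypothetical stand-in, not the shuttle inspection robot:

```python
import numpy as np

# Hypothetical 2-link planar arm Jacobian at joint angles (q1, q2);
# link lengths l1, l2 are illustrative assumptions.
l1, l2 = 1.0, 0.8
q1, q2 = 0.3, 1.1
J = np.array([
    [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
    [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
])

# The singular values bound velocity (and, inversely, force) transmission;
# their ratio is the condition number, a common dexterity measure.
sigma = np.linalg.svd(J, compute_uv=False)
condition_number = sigma[0] / sigma[-1]
manipulability = float(np.prod(sigma))   # equals |det J| for a square Jacobian
```

A condition number near 1 indicates a well-conditioned posture; it grows without bound as the arm approaches a singular configuration (here, q2 → 0).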

  7. Sparse Bayesian learning for DOA estimation with mutual coupling.

    PubMed

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-10-16

    Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
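The SVD-based data reduction mentioned at the end of the abstract is a standard preprocessing step in multi-snapshot DOA methods (e.g. L1-SVD and SBL variants): project the snapshots onto the dominant right singular vectors. A hedged sketch with an assumed 8-element half-wavelength ULA and two sources, not the authors' exact model:

```python
import numpy as np

# Illustrative multi-snapshot array data: M sensors, T snapshots, K sources.
rng = np.random.default_rng(1)
M, T, K = 8, 200, 2
angles = np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))  # ULA steering
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = A @ S + N

# Reduce T snapshots to K columns spanning the signal subspace:
# the reconstruction then works on an M x K matrix instead of M x T.
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
Y_sv = Y @ Vh.conj().T[:, :K]
```

The reduced matrix keeps almost all of the signal energy while shrinking the problem size from T to K columns, which is where the complexity saving comes from.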

  8. MGRA: Motion Gesture Recognition via Accelerometer.

    PubMed

    Hong, Feng; You, Shujuan; Wei, Meiyu; Zhang, Yongtuo; Guo, Zhongwen

    2016-04-13

    Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA) based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through the time domain, the frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.

  9. Comparative performance evaluation of transform coding in image pre-processing

    NASA Astrophysics Data System (ADS)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development, as well as the dissemination, of pioneering communication systems with ever-increasing fidelity and resolution. Research interest in image processing techniques has grown, driven by the demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers examine techniques which can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing: their comparison and effectiveness, the necessary and sufficient conditions, properties, and complexity of implementation. Motivated by prior advancements in image processing techniques, the researchers compare the performance of various contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.

  10. A new method to extract modal parameters using output-only responses

    NASA Astrophysics Data System (ADS)

    Kim, Byeong Hwa; Stubbs, Norris; Park, Taehyo

    2005-04-01

    This work proposes a new output-only modal analysis method to extract mode shapes and natural frequencies of a structure. The proposed method is based on an approach with a single-degree-of-freedom in the time domain. For a set of given mode-isolated signals, the un-damped mode shapes are extracted utilizing the singular value decomposition of the output energy correlation matrix with respect to sensor locations. The natural frequencies are extracted from a noise-free signal that is projected on the estimated modal basis. The proposed method is particularly efficient when a high resolution of mode shape is essential. The accuracy of the method is numerically verified using a set of time histories that are simulated using a finite-element method. The feasibility and practicality of the method are verified using experimental data collected at the newly constructed King Storm Water Bridge in California, United States.
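A minimal sketch of extracting an undamped mode shape from the SVD of an output correlation matrix over sensor locations, on simulated single-mode data; the sensor count, mode shape and mode-isolated signal are invented for illustration and this is not the authors' exact formulation:

```python
import numpy as np

# Simulated mode-isolated responses at 5 sensors: a single decaying
# sinusoidal modal coordinate q(t) scaled by an assumed mode shape phi.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 2000)
phi = np.array([0.3, 0.8, 1.0, 0.8, 0.3])                 # assumed mode shape
q = np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.5 * t)       # modal coordinate
X = np.outer(phi, q) + 0.01 * rng.standard_normal((5, t.size))

# Output correlation matrix over sensor locations; its dominant left
# singular vector estimates the mode shape.
R = X @ X.T / t.size
U, s, _ = np.linalg.svd(R)
shape_est = U[:, 0] * np.sign(U[2, 0])    # fix the arbitrary sign
shape_est /= np.abs(shape_est).max()      # normalize to unit peak
```

Because the correlation matrix is formed over sensors rather than time, the spatial resolution of the estimate scales directly with the number of sensors, consistent with the abstract's emphasis on high-resolution mode shapes.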

  11. A new feature constituting approach to detection of vocal fold pathology

    NASA Astrophysics Data System (ADS)

    Hariharan, M.; Polat, Kemal; Yaacob, Sazali

    2014-08-01

    In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proven to be excellent and reliable tools for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing performance of the proposed features. In this work, two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four different supervised classifiers, k-nearest neighbour (k-NN), least-square support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% for both the MEEI database and the MAPACI speech pathology database.

  12. Slow Orbit Feedback at the ALS Using Matlab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portmann, G.

    1999-03-25

    The third generation Advanced Light Source (ALS) produces extremely bright and finely focused photon beams using undulators, wigglers, and bend magnets. In order to position the photon beams accurately, a slow global orbit feedback system has been developed. The dominant causes of orbit motion at the ALS are temperature variation and insertion device motion. This type of motion can be removed using slow global orbit feedback with a data rate of a few Hertz. The remaining orbit motion in the ALS is only 1-3 micron rms. Slow orbit feedback does not require high computational throughput. At the ALS, the global orbit feedback algorithm, based on the singular value decomposition method, is coded in MATLAB and runs on a control room workstation. Using the MATLAB environment to develop, test, and run the storage ring control algorithms has proven to be a fast and efficient way to operate the ALS.
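The SVD step of such a global orbit correction can be sketched in a few lines (shown here in Python rather than MATLAB); the response matrix and measured orbit below are random stand-ins, and the singular-value cutoff is an assumed parameter, not ALS data:

```python
import numpy as np

# Orbit correction: solve R @ dtheta = -orbit for corrector magnet kicks,
# discarding weak singular values that would amplify measurement noise.
rng = np.random.default_rng(3)
R = rng.standard_normal((40, 20))     # response matrix: BPMs x correctors
orbit = rng.standard_normal(40)       # measured orbit error at the BPMs

U, s, Vt = np.linalg.svd(R, full_matrices=False)
keep = s > 0.01 * s[0]                # threshold small singular values
s_inv = np.where(keep, 1.0 / s, 0.0)
dtheta = -(Vt.T * s_inv) @ (U.T @ orbit)   # truncated pseudoinverse solution

residual = np.linalg.norm(R @ dtheta + orbit)
```

The truncation threshold trades correction strength against noise amplification: with fewer retained singular values the correction is gentler but more robust.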

  13. SandiaMCR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-01-05

    SandiaMCR was developed to identify pure components and their concentrations from spectral data. This software efficiently implements multivariate curve resolution by alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD). Version 3.37 also includes the PARAFAC-ALS and Tucker-1 (for trilinear analysis) algorithms. The alternating least squares methods can be used to determine the composition without, or with incomplete, prior information on the constituents and their concentrations. The software allows the specification of numerous preprocessing, initialization, data selection and compression options for the efficient processing of large data sets. It includes numerous options such as the definition of equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices, and data compression. The software has been designed to provide a practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.

  14. Optimally robust redundancy relations for failure detection in uncertain systems

    NASA Technical Reports Server (NTRS)

    Lou, X.-C.; Willsky, A. S.; Verghese, G. C.

    1986-01-01

    All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.

  15. A nonlinear quality-related fault detection approach based on modified kernel partial least squares.

    PubMed

    Jiao, Jianfang; Zhao, Ning; Wang, Guang; Yin, Shen

    2017-01-01

    In this paper, a new nonlinear quality-related fault detection method is proposed based on kernel partial least squares (KPLS) model. To deal with the nonlinear characteristics among process variables, the proposed method maps these original variables into feature space in which the linear relationship between kernel matrix and output matrix is realized by means of KPLS. Then the kernel matrix is decomposed into two orthogonal parts by singular value decomposition (SVD) and the statistics for each part are determined appropriately for the purpose of quality-related fault detection. Compared with relevant existing nonlinear approaches, the proposed method has the advantages of simple diagnosis logic and stable performance. A widely used literature example and an industrial process are used for the performance evaluation for the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Actuation for simultaneous motions and constraining efforts: an open chain example

    NASA Astrophysics Data System (ADS)

    Perreira, N. Duke

    1997-06-01

    A brief discussion of systems where simultaneous control of forces and velocities is desirable is given, and an example linkage with revolute and prismatic joints is selected for further analysis. The Newton-Euler approach for dynamic system analysis is applied to the example to provide a basis of comparison. Gauge invariant transformations are used to convert the dynamic equations into invariant form suitable for use in a new dynamic system analysis method known as the motion-effort approach. This approach uses constraint elimination techniques based on singular value decompositions to recast the invariant form of dynamic system equations into orthogonal sets of motion and effort equations. Desired motions and constraining efforts are partitioned into ideally obtainable and unobtainable portions which are then used to determine the required actuation. The method is applied to the example system and an analytic estimate of its success is made.

  17. Compound matrices

    NASA Astrophysics Data System (ADS)

    Kravvaritis, Christos; Mitrouli, Marilena

    2009-02-01

    This paper studies the possibility of efficiently calculating compounds of real matrices which have a special form or structure. The usefulness of such an effort lies in the fact that the computation of compound matrices, which is generally ineffective due to its high complexity, is encountered in several applications. A new approach for computing the Singular Value Decompositions (SVDs) of the compounds of a matrix is proposed by establishing the equality (up to a permutation) between the compounds of the SVD of a matrix and the SVDs of the compounds of the matrix. The superiority of the new idea over the standard method is demonstrated. Similar approaches with some limitations can be adopted for other matrix factorizations, too. Furthermore, formulas for the n - 1 compounds of Hadamard matrices are derived, which dodge the strenuous computation of the respective numerous large determinants. Finally, a combinatorial counting technique for finding the compounds of diagonal matrices is illustrated.

  18. Evaluation of a Singular Value Decomposition Approach for Impact Dynamic Data Correlation

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Lyle, Karen H.; Lessard, Wendy B.

    2003-01-01

    Impact dynamic tests are used in the automobile and aircraft industries to assess survivability of occupants during crash, to assess the adequacy of the design, and to gain federal certification. Although there is no substitute for experimental tests, analytical models are often developed and used to study alternate test conditions, to conduct trade-off studies, and to improve designs. To validate results from analytical predictions, test and analysis results must be compared to determine the model adequacy. The mathematical approach evaluated in this paper decomposes observed time responses into dominant deformation shapes and their corresponding contributions to the measured response. To correlate results, orthogonality of test and analysis shapes is used as a criterion. Data from an impact test of a composite fuselage is used and compared to finite element predictions. In this example, the impact response was decomposed into multiple shapes, but only two dominant shapes explained over 85% of the measured response.

  19. The Research on Denoising of SAR Image Based on Improved K-SVD Algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Linglong; Li, Changkai; Wang, Yueqin

    2018-04-01

    SAR images often receive noise interference in the process of acquisition and transmission, which can greatly reduce the quality of the images and cause great difficulties for image processing. The existing complete DCT dictionary algorithm is fast in processing speed, but its denoising effect is poor. To address the problem of poor denoising, this paper applies the K-SVD (K-means and singular value decomposition) algorithm to image noise suppression. Firstly, the sparse dictionary structure is introduced in detail. The dictionary has a compact representation and can effectively train the image signal. Then, the sparse dictionary is trained by the K-SVD algorithm according to the sparse representation of the dictionary. The algorithm has more advantages in high-dimensional data processing. Experimental results show that the proposed algorithm can remove the speckle noise more effectively than the complete DCT dictionary and retain the edge details better.
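The atom-update step that gives K-SVD its name is a rank-1 SVD of a restricted error matrix. A hedged sketch of that single step on random data follows; the sparse-coding stage (e.g. OMP) and the patch-extraction/averaging pipeline of a full denoiser are omitted, and all sizes are illustrative:

```python
import numpy as np

# Toy setup: patches Y, dictionary D with unit-norm atoms, sparse codes X.
rng = np.random.default_rng(6)
Y = rng.standard_normal((16, 100))      # signal patches as columns
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
X = rng.standard_normal((8, 100)) * (rng.random((8, 100)) < 0.2)  # sparse codes

k = 3                                   # atom to update
omega = np.nonzero(X[k])[0]             # patches that actually use atom k
# Error without atom k's contribution, restricted to those patches:
E = Y[:, omega] - D @ X[:, omega] + np.outer(D[:, k], X[k, omega])

# The best rank-1 approximation of E gives the new atom and its codes.
U, s, Vt = np.linalg.svd(E, full_matrices=False)
D[:, k] = U[:, 0]
X[k, omega] = s[0] * Vt[0]
```

Cycling this update over all atoms, alternated with sparse coding, is what the training loop of a K-SVD denoiser iterates.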

  20. The feature extraction of "cat-eye" targets based on bi-spectrum

    NASA Astrophysics Data System (ADS)

    Zhang, Tinghua; Fan, Guihua; Sun, Huayan

    2016-10-01

    In order to resolve the difficult problem of detection and identification of optical targets in complex backgrounds or over long transmission distances, this paper mainly studies the range profiles of "cat-eye" targets using the bi-spectrum. To address severe laser echo signal attenuation and low Signal-to-Noise Ratio (SNR), a multi-pulse laser echo detection algorithm based on high-order cumulants, filtering, and multi-pulse accumulation is proposed, which effectively improves the detection range. In order to extract stable characteristics of the one-dimensional range profiles of cat-eye targets, a method is proposed which extracts the bi-spectrum feature and uses the singular value decomposition to simplify the calculation. Then, using data samples at different distances, target types and incidence angles, the stability and effectiveness of the extracted bi-spectrum feature vector are verified.

  1. A frequency domain radar interferometric imaging (FII) technique based on high-resolution methods

    NASA Astrophysics Data System (ADS)

    Luce, H.; Yamamoto, M.; Fukao, S.; Helal, D.; Crochet, M.

    2001-01-01

    In the present work, we propose a frequency-domain interferometric imaging (FII) technique for better knowledge of the vertical distribution of the atmospheric scatterers detected by MST radars. This is an extension of the dual frequency-domain interferometry (FDI) technique to multiple frequencies. Its objective is to reduce the ambiguity, inherent in the FDI technique, that results from the use of only two adjacent frequencies. Different methods, commonly used in antenna array processing, are first described within the context of application to the FII technique. These methods are Fourier-based imaging, Capon's method, and the singular value decomposition method used with the MUSIC algorithm. Some preliminary simulations and tests performed on data collected with the middle and upper atmosphere (MU) radar (Shigaraki, Japan) are also presented. This work is a first step in the development of the FII technique, which seems to be very promising.

  2. A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.

    PubMed

    Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya

    2007-01-01

    A robust method of detecting determinism for short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism through characterizing the trajectories of the signal components which are obtained through singular value decomposition. Robustness of the method is shown by calculating proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal. The method is shown to be able to detect determinism at considerably high levels of additive noise. The method is then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that for all of the studied EEG data sets there is enough evidence of determinism. The determinism is more significant for intracranial EEG recordings particularly during seizure activity.
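The idea of characterizing signal trajectories through the singular value decomposition of a delay-embedded series can be sketched as follows; the embedding dimension, the sine/noise test signals and the "top-2 energy" score are illustrative simplifications, not the paper's determinism index:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def top2_energy(series, m=10):
    # Fraction of embedded-trajectory variance captured by the two largest
    # singular values: near 1 for a low-dimensional deterministic signal,
    # spread thin (about 2/m) for white noise.
    traj = sliding_window_view(series, m).astype(float)
    traj = traj - traj.mean(axis=0)
    s = np.linalg.svd(traj, compute_uv=False)
    return float(np.sum(s[:2] ** 2) / np.sum(s ** 2))

rng = np.random.default_rng(7)
t = np.arange(2000)
deterministic = np.sin(2 * np.pi * t / 40)    # deterministic test signal
noise = rng.standard_normal(2000)             # stochastic test signal

det_score = top2_energy(deterministic)
noise_score = top2_energy(noise)
```

A sinusoid's delay vectors span a two-dimensional subspace, so its score approaches 1, while white noise distributes variance across all components; a determinism detector exploits exactly this contrast.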

  3. X-ray edge singularity in resonant inelastic x-ray scattering (RIXS)

    NASA Astrophysics Data System (ADS)

    Markiewicz, Robert; Rehr, John; Bansil, Arun

    2013-03-01

    We develop a lattice model based on the theory of Mahan, Noziéres, and de Dominicis for x-ray absorption to explore the effect of the core hole on the RIXS cross section. The dominant part of the spectrum can be described in terms of the dynamic structure function S (q , ω) dressed by matrix element effects, but there is also a weak background associated with multi-electron-hole pair excitations. The model reproduces the decomposition of the RIXS spectrum into well- and poorly-screened components. An edge singularity arises at the threshold of both components. Fairly large lattice sizes are required to describe the continuum limit. Supported by DOE Grant DE-FG02-07ER46352 and facilitated by the DOE CMCSN, under grant number DE-SC0007091.

  4. Robust, nonlinear, high angle-of-attack control design for a supermaneuverable vehicle

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    1993-01-01

    High angle-of-attack flight control laws are developed for a supermaneuverable fighter aircraft. The methods of dynamic inversion and structured singular value synthesis are combined into an approach which addresses both the nonlinearity and robustness problems of flight at extreme operating conditions. The primary purpose of the dynamic inversion control elements is to linearize the vehicle response across the flight envelope. Structured singular value synthesis is used to design a dynamic controller which provides robust tracking to pilot commands. The resulting control system achieves desired flying qualities and guarantees a large margin of robustness to uncertainties for high angle-of-attack flight conditions. The results of linear simulation and structured singular value stability analysis are presented to demonstrate satisfaction of the design criteria. High fidelity nonlinear simulation results show that the combined dynamics inversion/structured singular value synthesis control law achieves a high level of performance in a realistic environment.

  5. A study of the control problem of the shoot side environment delivery system of a closed crop growth research chamber

    NASA Technical Reports Server (NTRS)

    Blackwell, C. C.; Blackwell, A. L.

    1992-01-01

    The details of our initial study of the control problem of the crop shoot environment of a hypothetical closed crop growth research chamber (CGRC) are presented in this report. The configuration of the CGRC is hypothetical because neither a physical subject nor a design existed at the time the study began, a circumstance which is typical of large scale systems control studies. The basis of the control study is a mathematical model which was judged to adequately mimic the relevant dynamics of the system components considered necessary to provide acceptable realism in the representation. Control of pressure, temperature, and flow rate of the crop shoot environment, along with its oxygen, carbon dioxide, and water concentration is addressed. To account for mass exchange, the group of plants is represented in the model by a source of oxygen, a source of water vapor, and a sink for carbon dioxide. In terms of the thermal energy exchange, the group of plants is represented by a surface with an appropriate temperature. Most of the primitive equations about an experimental operating condition and a state variable representation which was extracted from the linearized equations are presented. Next, we present the results of a real Jordan decomposition and the repositioning of an undesirable eigenvalue via full state feedback. The state variable representation of the modeling system is of the nineteenth order and reflects the eleven control variables and eight system disturbances. Five real eigenvalues are very near zero, with one at zero, three having small magnitude positive values, and one having a small magnitude negative value. A Singular Value Decomposition analysis indicates that these non-zero eigenvalues are not results of numerical error.

  6. Analysis of a 12-Hour Artifact in LF Oscillations of the Magnetic Field of Sunspots According to SDO/HMI Data

    NASA Astrophysics Data System (ADS)

    Efremov, V. I.; Parfinenko, L. D.; Solov'ev, A. A.

    2017-12-01

    The properties of the 12-h artifact in the data of the SDO/HMI instrument (Helioseismic and Magnetic Imager) caused by the nonzero radial velocity of the station relative to the Sun are investigated. The study has been carried out with respect to long-period oscillations of the magnetic field of sunspots for different station positions in the Earth's orbit by the alternative spectral method of singular decomposition of the signal, Caterpillar-SSA. Features of artifact filtering, both in special positions of the station (at the points of aphelion and perihelion) and at arbitrarily selected orbital points, are considered. It is shown that the 12-h artifact mode can be completely filtered from the time series of the observed variable, not only at these two orbital points (because of the symmetry of the station's radial velocity with respect to the zero mean here) but also at any others. It is shown that only a 12-h mode is physically justified, while the 24-h harmonic appears only as an artifact in the Fourier decomposition of the amplitude-modulated signal. It is emphasized that the values of the magnetic field measured with SDO/HMI are sensitive only to the absolute value of the station's radial velocity with respect to the Sun and do not depend on its direction. It has been noted that the periods of sunspot oscillation as a whole obtained from SDO/HMI data after orbital artifact filtration fit well into the dependence diagram of the period of sunspot oscillations on the value of its magnetic field strength constructed earlier from SOHO/MDI data.

  7. Empirical seasonal forecasts of the NAO

    NASA Astrophysics Data System (ADS)

    Sanchezgomez, E.; Ortizbevia, M.

    2003-04-01

    We present here seasonal forecasts of the North Atlantic Oscillation (NAO) issued from ocean predictors with an empirical procedure. The Singular Value Decomposition (SVD) of the cross-correlation matrix between predictor and predictand fields, at the lag used for the forecast lead, is at the core of the empirical model. The main predictor field is sea surface temperature anomalies, although sea ice cover anomalies are also used. Forecasts are issued in probabilistic form. The model is an improvement over a previous version (1), where Sea Level Pressure anomalies were first forecast and the NAO index was built from this forecast field. Both the correlation skill between forecast and observed fields, and the number of forecasts that hit the correct NAO sign, are used to assess the forecast performance, usually above the values found for forecasts issued assuming persistence. For certain seasons and/or leads, values of the skill are above the .7 usefulness threshold. References (1) SanchezGomez, E. and Ortiz Bevia M., 2002, Estimacion de la evolucion pluviometrica de la Espana Seca atendiendo a diversos pronosticos empiricos de la NAO, in 'El Agua y el Clima', Publicaciones de la AEC, Serie A, N 3, pp 63-73, Palma de Mallorca, Spain

  8. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
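The difference between TSVD and Tikhonov regularization of a pseudoinverse, as compared in this abstract, can be sketched on a generic ill-conditioned matrix; this is not an actual LS_NUFFT interpolator, and the spectrum, truncation threshold and damping parameter are assumed:

```python
import numpy as np

# Build a 30x30 matrix with a rapidly decaying singular spectrum.
rng = np.random.default_rng(4)
U0, _ = np.linalg.qr(rng.standard_normal((30, 30)))
V0, _ = np.linalg.qr(rng.standard_normal((30, 30)))
s_true = np.logspace(0, -8, 30)                 # condition number 1e8
A = (U0 * s_true) @ V0.T

U, s, Vt = np.linalg.svd(A)

# TSVD: keep only the k components above a relative threshold.
k = int(np.sum(s > 1e-4 * s[0]))
A_tsvd = (Vt[:k].T / s[:k]) @ U[:, :k].T

# Tikhonov: damp every component instead of truncating.
lam = 1e-4
A_tik = (Vt.T * (s / (s ** 2 + lam ** 2))) @ U.T
```

Both regularized inverses cap the amplification of small singular values (TSVD at 1/s_k, Tikhonov at 1/(2*lam)), whereas the plain pseudoinverse amplifies noise by the full condition number.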

  9. Application of least-squares fitting of ellipse and hyperbola for two dimensional data

    NASA Astrophysics Data System (ADS)

    Lawiyuniarti, M. P.; Rahmadiantri, E.; Alamsyah, I. M.; Rachmaputri, G.

    2018-01-01

    Application of the least-squares fitting of ellipses and hyperbolas to two-dimensional data has been used to analyze the spatial continuity of coal deposits in a mining field, using the fitting method introduced by Fitzgibbon, Pilu, and Fisher in 1996. This method uses 4 a_0 a_2 - a_1^2 = 1 as a constraint function. Meanwhile, in 1994, Gander, Golub and Strebel introduced ellipse and hyperbola fitting methods using the singular value decomposition approach. This SVD approach can be generalized to three-dimensional fitting. In this research, we discuss these two fitting methods and apply them to four coal-content variables (ash, calorific value, sulfur and seam thickness) so as to produce ellipse or hyperbola fits. In addition, we compute the error resulting from each method; from that calculation we conclude that, although the errors are not very different, the error of the method introduced by Fitzgibbon et al. is smaller than that of the fitting method introduced by Golub et al.
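The SVD approach of Gander, Golub and Strebel amounts to a homogeneous least-squares conic fit: minimize ||D a|| over ||a|| = 1, where D holds the conic monomials, and the minimizer is the right singular vector of the smallest singular value. A sketch on synthetic ellipse data (the sample points and noise level are assumptions):

```python
import numpy as np

# Noisy samples from the ellipse x^2/9 + y^2 = 1 (illustrative data).
rng = np.random.default_rng(5)
theta = rng.uniform(0.0, 2 * np.pi, 60)
x = 3.0 * np.cos(theta) + 0.05 * rng.standard_normal(60)
y = 1.0 * np.sin(theta) + 0.05 * rng.standard_normal(60)

# Design matrix of conic monomials: A x^2 + B xy + C y^2 + D x + E y + F = 0.
D = np.column_stack([x ** 2, x * y, y ** 2, x, y, np.ones_like(x)])

# The minimizer of ||D a|| with ||a|| = 1 is the last right singular vector.
_, _, Vt = np.linalg.svd(D)
a = Vt[-1]                          # conic coefficients (A, B, C, D, E, F)

# The sign of B^2 - 4AC classifies the conic: negative for an ellipse,
# positive for a hyperbola.
discriminant = a[1] ** 2 - 4 * a[0] * a[2]
```

Unlike the Fitzgibbon constraint, this formulation does not force the result to be an ellipse; the discriminant check afterwards tells which conic the data actually produced.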

  10. A New Adaptive Framework for Collaborative Filtering Prediction

    PubMed Central

    Almosallam, Ibrahim A.; Shang, Yi

    2010-01-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924

  11. A New Adaptive Framework for Collaborative Filtering Prediction.

    PubMed

    Almosallam, Ibrahim A; Shang, Yi

    2008-06-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system.
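    The z-score substitution for explicit ratings mentioned in both versions of this abstract can be sketched as a per-user normalization over the observed entries. The toy rating matrix below is illustrative and not the authors' dataset or exact formulation.

```python
import numpy as np

# Toy user-item rating matrix (0 = missing rating).
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0]])
mask = R > 0

# Per-user z-scores over observed ratings, used in place of raw ratings.
Z = np.zeros_like(R)
for u in range(R.shape[0]):
    obs = R[u, mask[u]]
    mu, sigma = obs.mean(), obs.std()
    Z[u, mask[u]] = (obs - mu) / (sigma if sigma > 0 else 1.0)
```

    Each user's observed ratings now have zero mean and unit variance, which removes per-user rating bias before the adaptive combination of user-based and item-based statistics.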

  12. Indetermination of particle sizing by laser diffraction in the anomalous size ranges

    NASA Astrophysics Data System (ADS)

    Pan, Linchao; Ge, Baozhen; Zhang, Fugen

    2017-09-01

    The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that the scattering angle becomes smaller and the angles to the location of the main peak of scattered energy distributions in laser diffraction instruments shift to smaller values with increasing particle size. This specific principle forms the foundation of the laser diffraction method. However, this principle is not entirely correct for non-absorbing particles in certain size ranges and these particle size ranges are called anomalous size ranges. Here, we derive the analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel will result in an indetermination of the particle size distribution when measured by laser diffraction instruments in the anomalous size ranges. By using the singular-value decomposition method we interpret the mechanism of occurrence of this indetermination in detail and then validate its existence by using inversion simulations.

  13. A numerical method of detecting singularity

    NASA Technical Reports Server (NTRS)

    Laporte, M.; Vignes, J.

    1978-01-01

    A numerical method is reported which determines a value C for the degree of conditioning of a matrix. This value is C = 0 for a singular matrix and has progressively larger values for matrices which are increasingly well-conditioned. The value reaches its maximum, C = C_max (with C_max determined by the precision of the computer), when the matrix is perfectly well-conditioned.
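    The record does not give the paper's exact formula for C, so the following is one hypothetical realization (an assumption, not the authors' definition): map the singular-value ratio onto the machine's significant decimal digits, clamped to [0, C_max].

```python
import numpy as np

def conditioning_value(A):
    """Hypothetical conditioning value C: 0 for a singular matrix,
    larger for better-conditioned matrices, capped at C_max (set by
    machine precision). The paper's exact definition may differ."""
    s = np.linalg.svd(A, compute_uv=False)
    c_max = -np.log10(np.finfo(A.dtype).eps)      # ~15.7 for float64
    if s[-1] == 0.0:
        return 0.0
    return float(min(c_max, max(0.0, c_max - np.log10(s[0] / s[-1]))))

C_singular = conditioning_value(np.array([[1.0, 2.0], [2.0, 4.0]]))
C_identity = conditioning_value(np.eye(3))
```

    A rank-deficient matrix scores at (or numerically near) zero, while a perfectly conditioned matrix scores the full machine-precision maximum.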

  14. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    PubMed

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction.
For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.

  15. Recovery of singularities from a backscattering Born approximation for a biharmonic operator in 3D

    NASA Astrophysics Data System (ADS)

    Tyni, Teemu

    2018-04-01

    We consider a backscattering Born approximation for a perturbed biharmonic operator in three space dimensions. Previous results on this approach for biharmonic operator use the fact that the coefficients are real-valued to obtain the reconstruction of singularities in the coefficients. In this text we drop the assumption about real-valued coefficients and also establish the recovery of singularities for complex coefficients. The proof uses mapping properties of the Radon transform.

  16. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.

  17. Short time propagation of a singular wave function: Some surprising results

    NASA Astrophysics Data System (ADS)

    Marchewka, A.; Granot, E.; Schuss, Z.

    2007-08-01

    The Schrödinger evolution of an initially singular wave function was investigated. First it was shown that a wide range of physical problems can be described by an initially singular wave function. Then it was demonstrated that outside the support of the initial wave function the time evolution is governed to leading order by the values of the wave function and its derivatives at the singular points. Short-time universality appears where it depends only on a single parameter: the value at the singular point (not even on its derivatives). It was also demonstrated that the short-time evolution in the presence of an absorptive potential differs from that in the presence of a nonabsorptive one. Therefore, this dynamics can be harnessed to determine whether a potential is absorptive or not, simply by measuring only the transmitted particle density.

  18. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to minx∈ℝn ‖Ax − b‖2, where A ∈ ℝm × n with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK’s DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
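    The core of LSRN's preconditioning phase can be sketched in a few lines: apply a random normal projection to A, take the SVD of the small projected matrix, and use V Σ⁻¹ as a right preconditioner. Sizes here are illustrative, with γ = 2 as suggested in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdetermined, ill-conditioned A (m >> n); sizes are illustrative.
m, n, gamma = 400, 20, 2.0
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U0 @ np.diag(np.logspace(0, -8, n)) @ V0.T   # cond(A) ~ 1e8

# LSRN preconditioning sketch: random normal projection, then a small SVD.
s_rows = int(np.ceil(gamma * n))
G = rng.standard_normal((s_rows, m))              # embarrassingly parallel
_, Sigma, Vt = np.linalg.svd(G @ A, full_matrices=False)
N = Vt.T / Sigma                                  # right preconditioner V @ inv(Sigma)

cond_before = np.linalg.cond(A)
cond_after = np.linalg.cond(A @ N)
```

    The preconditioned matrix A @ N has a small, predictable condition number regardless of cond(A), which is what makes the LSQR or Chebyshev iteration counts predictable.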

  19. Systematic Constraint Selection Strategy for Rate-Controlled Constrained-Equilibrium Modeling of Complex Nonequilibrium Chemical Kinetics

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad

    2018-04-01

    Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracies with much fewer differential equations than for the fully Detailed Kinetic Model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification has been recently proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call overall degree of disequilibrium (ODoD) because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields its degree of disequilibrium (DoD). The ODoD vector evolves in the same (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the first C largest singular values of the SVD and setting to zero all the others we obtain the best rank-C approximation of the matrix of ODoD traces whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
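    The rank-C truncation step described above is the classical best low-rank approximation by SVD (Eckart-Young). A minimal sketch, using a synthetic low-rank-plus-noise stand-in for the matrix of ODoD traces (sizes and structure are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the matrix of ODoD traces: T time samples, d = S - E
# stoichiometric dimensions, underlying rank C, plus small noise.
T, d, C = 200, 12, 3
M = (rng.standard_normal((T, C)) @ rng.standard_normal((C, d))
     + 1e-6 * rng.standard_normal((T, d)))

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Keep only the C largest singular values: best rank-C approximation.
M_C = U[:, :C] @ np.diag(s[:C]) @ Vt[:C]

rel_err = np.linalg.norm(M - M_C) / np.linalg.norm(M)
```

    The columns of the retained factors span the C-dimensional subspace in which the ODoD vector is then evolved via the constraint potentials.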

  20. Inverse electrocardiographic transformations: dependence on the number of epicardial regions and body surface data points.

    PubMed

    Johnston, P R; Walker, S J; Hyttinen, J A; Kilpatrick, D

    1994-04-01

    The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution was such as to suggest the use of variable-size epicardial regions based on the resolution.
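    The 60%-retention TSVD inverse used in the study's comparison can be sketched as follows. The transfer matrix here is a random toy with decaying singular values; the lead and region counts merely echo one configuration from the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward transfer matrix: 50 body-surface leads, 26 epicardial
# regions, with rapidly decaying singular values (illustrative only).
A = rng.standard_normal((50, 26)) @ np.diag(np.logspace(0, -6, 26))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Retain 60% of the singular values, as in the study's comparison.
k = int(round(0.6 * s.size))
A_pinv_tsvd = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

# Noise amplification is governed by the smallest retained singular value.
noise_gain = 1.0 / s[k - 1]
```

    Retaining more singular values improves resolution but raises `noise_gain`, which is the noise-resolution trade-off the paper quantifies as the number of epicardial regions grows.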

  1. Improving the Nulling Beamformer Using Subspace Suppression.

    PubMed

    Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M

    2018-01-01

    Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.

  2. Linear, multivariable robust control with a mu perspective

    NASA Technical Reports Server (NTRS)

    Packard, Andy; Doyle, John; Balas, Gary

    1993-01-01

    The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, make it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains make it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.

  3. Polarization singularity indices in Gaussian laser beams

    NASA Astrophysics Data System (ADS)

    Freund, Isaac

    2002-01-01

    Two types of point singularities in the polarization of a paraxial Gaussian laser beam are discussed in detail. V-points, which are vector point singularities where the direction of the electric vector of a linearly polarized field becomes undefined, and C-points, which are elliptic point singularities where the ellipse orientations of elliptically polarized fields become undefined. Conventionally, V-points are characterized by the conserved integer valued Poincaré-Hopf index η, with generic value η=±1, while C-points are characterized by the conserved half-integer singularity index IC, with generic value IC=±1/2. Simple algorithms are given for generating V-points with arbitrary positive or negative integer indices, including zero, at arbitrary locations, and C-points with arbitrary positive or negative half-integer or integer indices, including zero, at arbitrary locations. Algorithms are also given for generating continuous lines of these singularities in the plane, V-lines and C-lines. V-points and C-points may be transformed one into another. A topological index based on directly measurable Stokes parameters is used to discuss this transformation. The evolution under propagation of V-points and C-points initially embedded in the beam waist is studied, as is the evolution of V-dipoles and C-dipoles.

  4. Classical stability of sudden and big rip singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrow, John D.; Lip, Sean Z. W.

    2009-08-15

    We introduce a general characterization of sudden cosmological singularities and investigate the classical stability of homogeneous and isotropic cosmological solutions of all curvatures containing these singularities to small scalar, vector, and tensor perturbations using gauge-invariant perturbation theory. We establish that sudden singularities at which the scale factor, expansion rate, and density are finite are stable except for a set of special parameter values. We also apply our analysis to the stability of Big Rip singularities and find the conditions for their stability against small scalar, vector, and tensor perturbations.

  5. Information hiding techniques for infrared images: exploring the state-of-the art and challenges

    NASA Astrophysics Data System (ADS)

    Pomponiu, Victor; Cavagnino, Davide; Botta, Marco; Nejati, Hossein

    2015-10-01

    The proliferation of infrared technology and imaging systems enables a different perspective to tackle many computer vision problems in defense and security applications. Infrared images are widely used by law enforcement, Homeland Security and military organizations to achieve a significant advantage or situational awareness, and thus it is vital to protect these data against malicious attacks. Concurrently, sophisticated malware is being developed that is able to disrupt the security and integrity of these digital media. For instance, illegal distribution and manipulation are possible malicious attacks on digital objects. In this paper we explore the use of a new layer of defense for the integrity of infrared images through the aid of information hiding techniques such as watermarking. In this context, we analyze the efficiency of several optimal decoding schemes for the watermark inserted into the Singular Value Decomposition (SVD) domain of the IR images using an additive spread spectrum (SS) embedding framework. In order to use the singular values (SVs) of the IR images with the SS embedding we adopt several restrictions that ensure that the values of the SVs will maintain their statistics. For both the optimal maximum likelihood decoder and sub-optimal decoders we assume that the PDF of the SVs can be modeled by the Weibull distribution. Furthermore, we investigate the challenges involved in protecting and assuring the integrity of IR images, such as data complexity and the error probability behavior, i.e., the probability of detection and the probability of false detection, for the applied optimal decoders. By taking into account the efficiency and the necessary auxiliary information for decoding the watermark, we discuss the suitable decoder for various operating situations. Experimental results are carried out on a large dataset of IR images to show the imperceptibility and efficiency of the proposed scheme against various attack scenarios.

  6. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus O(N_b^{3~4}) of single CD in most of other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give acceptable compromise between efficiency and accuracy.

  7. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give acceptable compromise between efficiency and accuracy.
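    The compound CD-then-SVD strategy can be sketched on a small positive semidefinite stand-in for the unfolded integral tensor: a pivoted incomplete Cholesky produces low-rank vectors, which a truncated SVD then compresses further. The matrix, sizes, and thresholds below are all illustrative.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-10):
    """Pivoted incomplete Cholesky of a PSD matrix: M ~= L @ L.T.
    Stops when the largest residual diagonal drops below tol."""
    R = M.astype(float).copy()
    cols = []
    while len(cols) < R.shape[0]:
        d = np.diag(R)
        p = int(np.argmax(d))            # pivot on largest residual diagonal
        if d[p] <= tol:
            break
        col = R[:, p] / np.sqrt(d[p])
        cols.append(col)
        R = R - np.outer(col, col)       # deflate the residual
    return np.column_stack(cols)

rng = np.random.default_rng(5)
B = rng.standard_normal((30, 8))
M = B @ B.T                              # PSD stand-in for the integral matrix

L = pivoted_cholesky(M)                  # step 1: pivoted incomplete CD
U, s, Vt = np.linalg.svd(L, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))         # step 2: truncate the SVD of L
L_k = U[:, :k] * s[:k]                   # compressed low-rank vectors

err = np.linalg.norm(M - L_k @ L_k.T) / np.linalg.norm(M)
```

    The second step can only shrink (never grow) the number of vectors from the Cholesky step, which is where the additional storage savings come from.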

  8. Scattering of surface water waves involving semi-infinite floating elastic plates on water of finite depth

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Aloknath; Mohapatra, Smrutiranjan

    2013-09-01

    Two problems of scattering of surface water waves involving a semi-infinite elastic plate and a pair of semi-infinite elastic plates, separated by a gap of finite width, floating horizontally on water of finite depth, are investigated in the present work for a two-dimensional time-harmonic case. Within the frame of linear water wave theory, the solutions of the two boundary value problems under consideration have been represented in the forms of eigenfunction expansions. Approximate values of the reflection and transmission coefficients are obtained by solving an over-determined system of linear algebraic equations in each problem. In both the problems, the method of least squares as well as the singular value decomposition have been employed and tables of numerical values of the reflection and transmission coefficients are presented for specific choices of the parameters for modelling the elastic plates. Our main aim is to check the energy balance relation in each problem which plays a very important role in the present approach of solutions of mixed boundary value problems involving Laplace equations. The main advantage of the present approach of solutions is that the results for the values of reflection and transmission coefficients obtained by using both the methods are found to satisfy the energy-balance relations associated with the respective scattering problems under consideration. The absolute values of the reflection and transmission coefficients are presented graphically against different values of the wave numbers.

  9. Tmax Determined Using a Bayesian Estimation Deconvolution Algorithm Applied to Bolus Tracking Perfusion Imaging: A Digital Phantom Validation Study.

    PubMed

    Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio

    2017-01-10

    The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
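    The paper's calibration step, fitting Tmax from TD and MTT by linear least squares, can be sketched with synthetic maps. The linear relation, coefficients, and noise level below are assumptions for illustration, not the phantom's actual values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic TD and MTT values over a phantom (illustrative ranges, in s).
TD = rng.uniform(0.0, 6.0, 100)
MTT = rng.uniform(2.0, 12.0, 100)
# Pretend the SVD-derived Tmax follows TD + 0.5*MTT plus small noise.
Tmax_svd = TD + 0.5 * MTT + 0.1 * rng.standard_normal(100)

# Linear least-squares fit of constants (a, b, c): Tmax ~= a*TD + b*MTT + c.
X = np.column_stack([TD, MTT, np.ones_like(TD)])
coef, *_ = np.linalg.lstsq(X, Tmax_svd, rcond=None)
Tmax_bayes = X @ coef                    # Tmax computed from TD and MTT

r = np.corrcoef(Tmax_bayes, Tmax_svd)[0, 1]
```

    Once the constants are calibrated against an SVD reference, Tmax can be computed directly from the Bayesian TD and MTT maps, which is the agreement the study quantifies.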

  10. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    PubMed

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and the cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example that satisfies the determined convergence conditions. To demonstrate the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
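The recompression step sketched in the abstract, i.e. taking a loose low-rank factorization (such as one produced by the ACA) and re-truncating it with a small SVD without ever forming the full matrix, can be illustrated as follows. The Gaussian kernel here is a generic smooth stand-in, not the FMM far-field operator:

```python
import numpy as np

rng = np.random.default_rng(2)

def recompress(U, V, tol=1e-8):
    """Recompress A ~ U @ V.T via QR factors and a small (rank x rank) SVD."""
    Qu, Ru = np.linalg.qr(U)                 # U = Qu @ Ru
    Qv, Rv = np.linalg.qr(V)                 # V = Qv @ Rv
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)      # SVD of a tiny matrix only
    k = max(1, int(np.sum(s > tol * s[0])))  # truncate at relative tolerance
    return Qu @ (W[:, :k] * s[:k]), Qv @ Zt[:k].T

# A deliberately loose rank-12 factorization of a smooth, numerically
# low-rank kernel (illustrative stand-in for the far-field operator).
x = np.linspace(0.0, 1.0, 300)
A = np.exp(-np.subtract.outer(x, x) ** 2)
U0, s0, Vt0 = np.linalg.svd(A)
U, V = U0[:, :12] * s0[:12], Vt0[:12].T      # A ~ U @ V.T

U2, V2 = recompress(U, V, tol=1e-6)
err = np.linalg.norm(A - U2 @ V2.T) / np.linalg.norm(A)
```

The cost is dominated by QR factorizations of the thin factors plus an SVD of a rank-by-rank matrix, which is why recompression avoids the expense of a full-scale SVD.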

  12. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  13. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350

  14. Sampling considerations for modal analysis with damping

    NASA Astrophysics Data System (ADS)

    Park, Jae Young; Wakin, Michael B.; Gilbert, Anna C.

    2015-03-01

    Structural health monitoring (SHM) systems are critical for monitoring aging infrastructure (such as buildings or bridges) in a cost-effective manner. Wireless sensor networks that sample vibration data over time are particularly appealing for SHM applications due to their flexibility and low cost. However, in order to extend the battery life of wireless sensor nodes, it is essential to minimize the amount of vibration data these sensors must collect and transmit. In recent work, we have studied the performance of the Singular Value Decomposition (SVD) applied to the collection of data and provided new finite sample analysis characterizing conditions under which this simple technique, also known as the Proper Orthogonal Decomposition (POD), can correctly estimate the mode shapes of the structure. Specifically, we provided theoretical guarantees on the number and duration of samples required in order to estimate a structure's mode shapes to a desired level of accuracy. In that previous work, however, we considered simplified Multiple-Degree-Of-Freedom (MDOF) systems with no damping. In this paper we consider MDOF systems with proportional damping and show that, with sufficiently light damping, the POD can continue to provide accurate estimates of a structure's mode shapes. We support our discussion with new analytical insight and experimental demonstrations. In particular, we study the tradeoffs between the level of damping, the sampling rate and duration, and the accuracy to which the structure's mode shapes can be estimated.
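The POD idea above can be sketched directly: sample a lightly damped free response at several sensors and take the SVD of the snapshot matrix; the leading left singular vectors approximate the mode shapes. The mode shapes, frequencies, and damping levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two orthonormal "mode shapes" at 6 sensors (made up for this sketch).
Phi = np.linalg.qr(rng.normal(size=(6, 2)))[0]

# Lightly damped modal coordinates at well-separated frequencies.
t = np.linspace(0.0, 20.0, 4000)
q1 = np.exp(-0.02 * t) * np.cos(2.0 * np.pi * 1.0 * t)
q2 = 0.5 * np.exp(-0.03 * t) * np.cos(2.0 * np.pi * 2.7 * t)
X = Phi[:, [0]] * q1 + Phi[:, [1]] * q2      # 6 sensors x 4000 samples

U, s, Vt = np.linalg.svd(X, full_matrices=False)
# Alignment between POD vectors and the true mode shapes (1.0 = perfect).
align = [abs(U[:, i] @ Phi[:, i]) for i in range(2)]
```

With light damping and nearly uncorrelated modal coordinates, the POD vectors line up with the true shapes almost exactly, consistent with the paper's claim.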

  15. Semiclassical analysis of spectral singularities and their applications in optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mostafazadeh, Ali

    2011-08-15

    Motivated by possible applications of spectral singularities in optics, we develop a semiclassical method of computing spectral singularities. We use this method to examine the spectral singularities of a planar slab gain medium whose gain coefficient varies due to the exponential decay of the intensity of the pumping beam inside the medium. For both singly and doubly pumped samples, we obtain universal upper bounds on the decay constant beyond which no lasing occurs. Furthermore, we show that the dependence of the wavelength of the spectral singularities on the value of the decay constant is extremely mild. This is an indication of the stability of optical spectral singularities.

  16. Optical spectral singularities as threshold resonances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mostafazadeh, Ali

    2011-04-15

    Spectral singularities are among generic mathematical features of complex scattering potentials. Physically they correspond to scattering states that behave like zero-width resonances. For a simple optical system, we show that a spectral singularity appears whenever the gain coefficient coincides with its threshold value and other parameters of the system are selected properly. We explore a concrete realization of spectral singularities for a typical semiconductor gain medium and propose a method of constructing a tunable laser that operates at threshold gain.

  17. Sensitivity analysis of automatic flight control systems using singular value concepts

    NASA Technical Reports Server (NTRS)

    Herrera-Vaillard, A.; Paduano, J.; Downing, D.

    1985-01-01

    A sensitivity analysis is presented that can be used to judge the impact of vehicle dynamic model variations on the relative stability of multivariable continuous closed-loop control systems. The sensitivity analysis uses and extends the singular-value concept by developing expressions for the gradients of the singular value with respect to variations in the vehicle dynamic model and the controller design. Combined with a priori estimates of the accuracy of the model, the gradients are used to identify the elements in the vehicle dynamic model and controller that could severely impact the system's relative stability. The technique is demonstrated for a yaw/roll damper stability augmentation designed for a business jet.
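The central identity behind such gradients is that for a simple singular value sigma_i of a matrix A(p) with singular vectors u_i and v_i, d(sigma_i)/dp = u_i^T (dA/dp) v_i. A quick numerical check of this identity on a random example (not the paper's aircraft model):

```python
import numpy as np

rng = np.random.default_rng(4)

A = rng.normal(size=(5, 5))       # stand-in "vehicle dynamics" matrix
dA = rng.normal(size=(5, 5))      # direction of a parameter variation

# Analytic sensitivity of the largest singular value: u_1^T (dA/dp) v_1.
U, s, Vt = np.linalg.svd(A)
grad_analytic = U[:, 0] @ dA @ Vt[0]

# Central finite difference on sigma_1(A + eps*dA) for comparison.
eps = 1e-6
s_plus = np.linalg.svd(A + eps * dA, compute_uv=False)
s_minus = np.linalg.svd(A - eps * dA, compute_uv=False)
grad_fd = (s_plus[0] - s_minus[0]) / (2.0 * eps)
```

The two gradients agree to many digits, which is what makes singular-value sensitivities cheap to evaluate once one SVD of the nominal model is available.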

  18. Beyond singular values and loop shapes

    NASA Technical Reports Server (NTRS)

    Stein, G.

    1985-01-01

    The status of singular value loop-shaping as a design paradigm for multivariable feedback systems is reviewed. This paradigm is shown to be an effective design tool whenever the problem specifications are spatially round. The tool can be arbitrarily conservative, however, when they are not. This happens because singular value conditions for robust performance are not tight (necessary and sufficient) and can severely overstate actual requirements. An alternative paradigm is discussed which overcomes these limitations. The alternative includes a more general problem formulation, a new matrix function mu, and tight conditions for both robust stability and robust performance. The state of the art currently permits analysis of feedback systems within this new paradigm; synthesis remains a subject of research.

  19. Modern CACSD using the Robust-Control Toolbox

    NASA Technical Reports Server (NTRS)

    Chiang, Richard Y.; Safonov, Michael G.

    1989-01-01

    The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools such as singular values and structured singular values, robust synthesis tools such as continuous/discrete H(exp 2)/H-infinity synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods, and a variety of robust model reduction tools such as Hankel approximation, balanced truncation, and balanced stochastic truncation. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H-infinity loop-shaping, and large space structure model reduction.
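The quantity underlying the balanced-truncation tools mentioned above is the set of Hankel singular values: square roots of the eigenvalues of the product of the controllability and observability Gramians. A sketch for a small made-up stable system, using SciPy's Lyapunov solver (assumed available):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Made-up stable 2-state system (illustrative, not from the toolbox docs).
A = np.array([[-1.0, 0.5],
              [0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Gramians: A Wc + Wc A^T + B B^T = 0  and  A^T Wo + Wo A + C^T C = 0.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: sqrt of eigenvalues of Wc @ Wo, sorted.
hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]
```

States whose Hankel singular values sit far below the leading ones contribute little to the input-output map, which is exactly what balanced truncation discards.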

  20. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
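The quadrature idea in the abstract, i.e. absorbing the singular factor of the kernel into the weight of a Gauss-Jacobi rule so that few nodes suffice, can be demonstrated on a toy integral (this uses SciPy's `roots_jacobi`, assumed available; the integrand is illustrative, not the paper's fractional kernel):

```python
import numpy as np
from scipy.special import roots_jacobi

# Gauss-Jacobi rule for the weight (1-x)^alpha (1+x)^beta on [-1, 1];
# here alpha = -1/2 models an integrable endpoint singularity.
alpha, beta = -0.5, 0.0
x, w = roots_jacobi(8, alpha, beta)      # 8-point rule

# Integrate (1-x)^(-1/2) * x^2 over [-1, 1]; the rule sees only the
# smooth factor x^2, so the result is exact to round-off.
approx = np.sum(w * x ** 2)
exact = 14.0 * np.sqrt(2.0) / 15.0       # closed form of the integral
```

An n-point Gauss-Jacobi rule is exact for smooth factors of polynomial degree up to 2n-1, so the singular endpoint costs no accuracy at all.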

  1. Singular spectrum decomposition of Bouligand-Minkowski fractal descriptors: an application to the classification of texture Images

    NASA Astrophysics Data System (ADS)

    Florindo, João Batista

    2018-04-01

    This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach to model and, in particular, identify complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point were proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analyses employed for time series, of which SSA is one of the most powerful techniques. The proposed method was employed for the classification of benchmark texture images and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.
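The SSA machinery referred to above (embed the sequence into a trajectory matrix, take its SVD, and reconstruct from a few components by anti-diagonal averaging) can be sketched on a synthetic series; the descriptor sequence here is replaced by a noisy sine for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def ssa_reconstruct(x, L, k):
    """Rank-k SSA reconstruction of series x with window length L."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]                     # rank-k truncation
    # Hankelization: average the anti-diagonals back into a series.
    out = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        out[j:j + L] += Xk[:, j]
        cnt[j:j + L] += 1.0
    return out / cnt

t = np.arange(400)
signal = np.sin(2.0 * np.pi * t / 40.0)
x = signal + 0.3 * rng.normal(size=t.size)   # noisy stand-in sequence
denoised = ssa_reconstruct(x, L=80, k=2)

rmse_raw = np.sqrt(np.mean((x - signal) ** 2))
rmse_ssa = np.sqrt(np.mean((denoised - signal) ** 2))
```

A pure oscillation occupies only two SSA components, so a rank-2 reconstruction strips most of the uncorrelated noise, which is the decorrelating effect exploited for the fractal descriptors.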

  2. Singular instantons in Eddington-inspired-Born-Infeld gravity

    DOE PAGES

    Arroja, Frederico; Chen, Che-Yu; Chen, Pisin; ...

    2017-03-23

    In this study, we investigate O(4)-symmetric instantons within the Eddington-inspired-Born-Infeld (EiBI) gravity theory. We discuss the regular Hawking-Moss instanton and find that the tunneling rate reduces to the General Relativity (GR) value, even though the action value differs by a constant. We give a thorough analysis of the singular Vilenkin instanton and the Hawking-Turok instanton with a quadratic scalar field potential in the EiBI theory. In both cases, we find that the singularity can be avoided in the sense that the physical metric, its scalar curvature, and the scalar field are regular under some parameter restrictions, but there is a curvature singularity of the auxiliary metric compatible with the connection. We find that the on-shell action is finite and the probability does not reduce to its GR value. We also find that the Vilenkin instanton in the EiBI theory would still cause the instability of the Minkowski space, similar to that in GR, and this is observationally inconsistent. This result suggests that the singularity of the auxiliary metric may be problematic at the quantum level and that these instantons should be excluded from the path integral.

  3. WE-AB-207A-04: Random Undersampled Cone Beam CT: Theoretical Analysis and a Novel Reconstruction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, C; Chen, L; Jia, X

    2016-06-15

    Purpose: Reducing x-ray exposure and speeding up data acquisition have motivated studies on projection data undersampling. For a given undersampling ratio, it is an important question what the optimal undersampling approach is. In this study, we propose a new undersampling scheme: random-ray undersampling. We mathematically analyze the properties of its projection matrix and demonstrate its advantages. We also propose a new reconstruction method that simultaneously performs CT image reconstruction and projection-domain data restoration. Methods: By representing the projection operator in the basis of singular vectors of the full projection operator, matrix representations for an undersampling case can be generated and numerical singular value decomposition can be performed. We compared matrix properties among three undersampling approaches: regular-view undersampling, regular-ray undersampling, and the proposed random-ray undersampling. To accomplish CT reconstruction for random undersampling, we developed a novel method that iteratively performs CT reconstruction and missing-projection-data restoration via regularization approaches. Results: For a given undersampling ratio, random-ray undersampling preserved the mathematical properties of the full projection operator better than the other two approaches. This translates into the advantage of reconstructing CT images with lower errors. Different types of image artifacts were observed depending on the undersampling strategy, and these were ascribed to the unique singular vectors of the sampling operators in the image domain. We tested the proposed reconstruction algorithm on a FORBILD phantom with only 30% of the projection data randomly acquired. The reconstructed image error was reduced from 9.4% with a TV method to 7.6% with the proposed method. Conclusion: The proposed random-ray undersampling is mathematically advantageous over other typical undersampling approaches. It may permit better image reconstruction at the same undersampling ratio. The novel algorithm suited to this random-ray undersampling was able to reconstruct high-quality images.
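The analysis style described in the Methods section, i.e. undersampling the rows of a full operator and inspecting the singular spectrum of the result, can be sketched on a generic smooth kernel (a Gaussian matrix standing in for the CT projection operator; the 30% ratio matches the abstract, everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# A full "projection" operator: a generic smooth (Gaussian) kernel matrix.
n = 120
x = np.linspace(0.0, 1.0, n)
A = np.exp(-np.subtract.outer(x, x) ** 2 / 0.01)

# Random-row undersampling at a 30% ratio.
keep = np.sort(rng.choice(n, size=int(0.3 * n), replace=False))
A_sub = A[keep]

s_full = np.linalg.svd(A, compute_uv=False)
s_sub = np.linalg.svd(A_sub, compute_uv=False)
# Deleting rows can only shrink singular values (interlacing), so the
# subsampled spectrum sits below the full spectrum index by index; how
# slowly it falls away is one measure of how much structure survives.
```

Comparing such spectra across regular and random selections is the kind of evidence the study uses to argue that random-ray undersampling better preserves the full operator's properties.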

  4. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summary. Program title: SecDec 2.0. Catalogue identifier: AEIR_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 156829. No. of bytes in distributed program, including test data, etc.: 2137907. Distribution format: tar.gz. Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: From a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: Depending on the complexity of the problem. Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0. Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566. Does the new version supersede the previous version?: Yes. Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. 
This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way. Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization. Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters. Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0. Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.

  5. A parallel algorithm for nonlinear convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1990-01-01

    A parallel algorithm for the efficient solution of nonlinear time-dependent convection-diffusion equations with small parameter on the diffusion term is presented. The method is based on a physically motivated domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. The method is suitable for the solution of problems arising in the simulation of fluid dynamics. Experimental results for a nonlinear equation in two-dimensions are presented.

  6. Joint spatiotemporal variability of global sea surface temperatures and global Palmer drought severity index values

    USGS Publications Warehouse

    Apipattanavis, S.; McCabe, G.J.; Rajagopalan, B.; Gangopadhyay, S.

    2009-01-01

    Dominant modes of individual and joint variability in global sea surface temperatures (SST) and global Palmer drought severity index (PDSI) values for the twentieth century are identified through a multivariate frequency domain singular value decomposition. This analysis indicates that a secular trend and variability related to the El Niño–Southern Oscillation (ENSO) are the dominant modes of variance shared among the global datasets. For the SST data the secular trend corresponds to a positive trend in Indian Ocean and South Atlantic SSTs, and a negative trend in North Pacific and North Atlantic SSTs. The ENSO reconstruction shows a strong signal in the tropical Pacific, North Pacific, and Indian Ocean regions. For the PDSI data, the secular trend reconstruction shows high amplitudes over central Africa including the Sahel, whereas the regions with strong ENSO amplitudes in PDSI are the southwestern and northwestern United States, South Africa, northeastern Brazil, central Africa, the Indian subcontinent, and Australia. An additional significant frequency, multidecadal variability, is identified for the Northern Hemisphere. This multidecadal frequency appears to be related to the Atlantic multidecadal oscillation (AMO). The multidecadal frequency is statistically significant in the Northern Hemisphere SST data, but is statistically nonsignificant in the PDSI data.
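A simplified, time-domain analogue of the frequency-domain SVD used above is maximum covariance analysis: the SVD of the cross-covariance matrix between two anomaly fields. The sketch below uses synthetic stand-ins for the SST and PDSI fields with one planted shared mode:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic anomaly fields sharing one common mode (ENSO-like stand-in).
nt, nx, ny = 500, 20, 15
shared = rng.normal(size=nt)                       # common time series
px, py = rng.normal(size=nx), rng.normal(size=ny)  # its spatial patterns
sst = np.outer(shared, px) + 0.5 * rng.normal(size=(nt, nx))
pdsi = np.outer(shared, py) + 0.5 * rng.normal(size=(nt, ny))

# Remove means, form the cross-covariance matrix, and take its SVD.
sst -= sst.mean(axis=0)
pdsi -= pdsi.mean(axis=0)
C = sst.T @ pdsi / (nt - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# The leading singular-vector pair should recover the planted patterns.
match_x = abs(U[:, 0] @ px) / np.linalg.norm(px)
match_y = abs(Vt[0] @ py) / np.linalg.norm(py)
```

The frequency-domain version in the paper applies the same decomposition band by band, which is what lets it separate the secular trend, ENSO, and multidecadal signals.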

  7. Metaheuristic optimisation methods for approximate solving of singular boundary value problems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong

    2017-07-01

    This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of BVPs. The scheme uses the generational distance metric to evaluate the quality of the approximate solutions against exact solutions (i.e. as an error-evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for the approximate solving of singular BVPs.

  8. Circular geodesics of naked singularities in the Kehagias-Sfetsos metric of Hořava's gravity

    NASA Astrophysics Data System (ADS)

    Vieira, Ronaldo S. S.; Schee, Jan; Kluźniak, Włodek; Stuchlík, Zdeněk; Abramowicz, Marek

    2014-07-01

    We discuss photon and test-particle orbits in the Kehagias-Sfetsos (KS) metric of Hořava's gravity. For any value of the Hořava parameter ω, there are values of the gravitational mass M for which the metric describes a naked singularity, and this is always accompanied by a vacuum "antigravity sphere" on whose surface a test particle can remain at rest (in a zero angular momentum geodesic), and inside which no circular geodesics exist. The observational appearance of an accreting KS naked singularity in a binary system would be that of a quasistatic spherical fluid shell surrounded by an accretion disk, whose properties depend on the value of M, but are always very different from accretion disks familiar from the Kerr-metric solutions. The properties of the corresponding circular orbits are qualitatively similar to those of the Reissner-Nordström naked singularities. When event horizons are present, the orbits outside the Kehagias-Sfetsos black hole are qualitatively similar to those of the Schwarzschild metric.

  9. A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya

    2010-05-01

    In this paper, we present a study of hemoglobin-hemoglobin interaction with model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD) based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the model direction. This model direction will be matched with the eigenvector derived from mode superposition analysis. The same technique will be implemented in a much more complicated hemoglobin-hemoglobin molecule interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to get the most significant modes and their frequencies of this complex dynamical system. More complex physical phenomena can then be further studied by these coarse grained models.

  10. In-Flight Alignment Using H ∞ Filter for Strapdown INS on Aircraft

    PubMed Central

    Pei, Fu-Jun; Liu, Xuan; Zhu, Li

    2014-01-01

    In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular motions of the aircraft. To deal with these disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is discussed using piece-wise constant system (PWCS) theory, and the observable degree is computed using singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. An H∞ filter is then designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm can reach better accuracy under dynamic disturbance conditions. PMID:24511300
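The SVD-based observability grading mentioned above amounts to stacking an observability matrix and inspecting its singular values: a zero (or tiny) smallest singular value flags unobservable (or weakly observable) state directions. A sketch on a small made-up discrete-time system, not the paper's SINS error model:

```python
import numpy as np

# Made-up 3-state discrete-time system with only the first state measured.
A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])

# Observability matrix [H; HA; HA^2] and its singular values.
Obs = np.vstack([H @ np.linalg.matrix_power(A, k) for k in range(3)])
s = np.linalg.svd(Obs, compute_uv=False)

fully_observable = s[-1] > 1e-12 * s[0]   # numerical rank test
degree = s[-1] / s[0]                     # a simple "observable degree"
```

The system is observable here, but the small ratio of extreme singular values shows the third state is only weakly observed through the chain of 0.1 couplings, which is the kind of grading the PWCS + SVD analysis provides.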

  11. Long-term survey of lion-roar emissions inside the terrestrial magnetosheath obtained from the STAFF-SA measurements onboard the Cluster spacecraft

    NASA Astrophysics Data System (ADS)

    Pisa, D.; Krupar, V.; Kruparova, O.; Santolik, O.

    2017-12-01

    Intense whistler-mode emissions known as 'lion roars' are often observed inside the terrestrial magnetosheath, where the solar wind plasma flow slows down and the local magnetic field increases ahead of a planetary magnetosphere. Plasma conditions in this transient region lead to an electron temperature anisotropy, which can result in whistler-mode waves. The lion roars are narrow-band emissions with typical frequencies between 0.1 and 0.5 Fce, where Fce is the electron cyclotron frequency. We present results of a long-term survey obtained by the Spatio-Temporal Analysis of Field Fluctuations-Spectral Analyzer (STAFF-SA) instruments on board the four Cluster spacecraft between 2001 and 2010. We visually identified the time-frequency intervals with an intense lion-roar signature. Using the Singular Value Decomposition (SVD) method, we analyzed the wave propagation properties. We show the spatial, frequency, and wave power distributions. Finally, the wave properties as a function of upstream solar wind conditions are discussed.

  12. Adaptive matching of the iota ring linear optics for space charge compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, A.; Bruhwiler, D. L.; Cook, N.

    Many present and future accelerators must operate with high-intensity beams, where distortions induced by space-charge forces are among the major limiting factors. A betatron tune depression above approximately 0.1 per cell leads to significant distortions of the linear optics. Many aspects of machine operation depend on proper relations between lattice functions and phase advances, and can be improved with proper treatment of space-charge effects. We implement an adaptive algorithm for linear lattice re-matching with full account of space charge in the linear approximation for the case of Fermilab's IOTA ring. The method is based on a search for initial second moments that give a closed solution and, at the same time, satisfy a predefined set of goals for emittances, beta functions, dispersions, and phase advances at and between points of interest. An iterative technique based on singular value decomposition is used to search for the optimum by varying a wide array of model parameters.
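The core numerical step of such iterative SVD-based matching is inverting an ill-conditioned response matrix with a truncated SVD so that nearly degenerate directions do not blow up the correction. A sketch with a synthetic response matrix (all numbers invented, not IOTA lattice data):

```python
import numpy as np

rng = np.random.default_rng(8)

def truncated_svd_solve(J, r, tol=1e-3):
    """Least-squares solve of J @ dx = r, discarding tiny singular values."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])

# Synthetic 30x10 response matrix with one nearly degenerate "knob".
J = rng.normal(size=(30, 10))
J[:, -1] *= 1e-8
x_true = rng.normal(size=10)
r = J @ x_true

dx = truncated_svd_solve(J, r, tol=1e-3)
residual = np.linalg.norm(J @ dx - r) / np.linalg.norm(r)
```

The residual stays small while the correction stays bounded; a plain pseudoinverse at full rank would instead drive the degenerate knob to an enormous, useless value.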

  13. Multilinear Graph Embedding: Representation and Regularization for Images.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
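The HOSVD baseline that MGE is compared against can be sketched compactly: one SVD per mode unfolding yields the factor matrices, and projecting the tensor onto them yields the core. This is a generic illustration on a random tensor, not the paper's image data:

```python
import numpy as np

rng = np.random.default_rng(10)

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = rng.normal(size=(6, 7, 8))

# Factor matrices from the left singular vectors of each unfolding.
Us = [np.linalg.svd(unfold(T, m))[0] for m in range(3)]

# Core tensor by projection, then exact reconstruction (full core kept).
core = np.einsum('ijk,ia,jb,kc->abc', T, *Us)
T_rec = np.einsum('abc,ia,jb,kc->ijk', core, *Us)
err = np.linalg.norm(T_rec - T) / np.linalg.norm(T)
```

Keeping the full core makes the round trip exact; truncating the factor columns gives the multilinear low-rank approximation whose preserved statistics are global, which is precisely the limitation the MGE work targets.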

  14. Tensor Factorization for Low-Rank Tensor Completion.

    PubMed

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, and has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data of naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over the state of the art, including the TNN and matricization-based methods.

  15. Data analysis of photon beam position at PLS-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, J.; Shin, S., E-mail: tlssh@postech.ac.kr; Huang, Jung-Yun

    In third-generation light sources, photon beam position stability is a critical issue for user experiments. Photon beam position monitors are generally developed to detect the real photon beam position, and the position is controlled by a feedback system in order to hold the reference photon beam position. In PLS-II, photon beam position stability of better than 1 μm rms has been achieved during user service periods for the front end of a particular beam line in which a photon beam position monitor is installed. Nevertheless, detailed analysis of the photon beam position data is necessary in order to demonstrate the performance of the photon beam position monitor, since it can suffer from various unknown noise sources (for instance, background contamination due to upstream or downstream dipole radiation, undulator gap dependence, etc.). In this paper, we describe a start-to-end study of photon beam position stability and a Singular Value Decomposition (SVD) analysis to demonstrate the reliability of the photon beam position data.

  16. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.
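
    The stabilizing role the SVD plays in such reconstructions can be illustrated with a truncated-SVD pseudoinverse: small singular values of the forward operator amplify measurement noise, so discarding them trades a little bias for a large gain in stability. This is a generic sketch on a synthetic ill-conditioned operator, not the authors' Fourier-Bessel construction.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD pseudoinverse: keep only the k largest singular values
    of A, discarding the noise-amplifying small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ (U[:, :k].T @ b / s[:k])

# Ill-conditioned forward operator with rapidly decaying singular values
rng = np.random.default_rng(0)
n = 20
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(10.0 ** -np.arange(n)) @ Q2.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # noisy measurements
x_naive = np.linalg.solve(A, b)                  # noise explodes on small modes
x_reg = tsvd_solve(A, b, k=6)                    # stable, slightly biased
```

Choosing the truncation level k plays the same role as choosing a sufficient set of measurement frequencies in the record above: it fixes which components of the source are considered recoverable.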

  17. Variability common to first leaf dates and snowpack in the western conterminous United States

    USGS Publications Warehouse

    McCabe, Gregory J.; Betancourt, Julio L.; Pederson, Gregory T.; Schwartz, Mark D.

    2013-01-01

    Singular value decomposition is used to identify the common variability in first leaf dates (FLDs) and 1 April snow water equivalent (SWE) for the western United States during the period 1900–2012. Results indicate two modes of joint variability that explain 57% of the variability in FLD and 69% of the variability in SWE. The first mode of joint variability is related to widespread late winter–spring warming or cooling across the entire west. The second mode can be described as a north–south dipole in temperature for FLD, as well as in cool season temperature and precipitation for SWE, that is closely correlated to the El Niño–Southern Oscillation. Additionally, both modes of variability indicate a relation with the Pacific–North American atmospheric pattern. These results indicate that there is a substantial amount of common variance in FLD and SWE that is related to large-scale modes of climate variability.
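
    The technique used here (an SVD of the cross-covariance between two anomaly fields, often called maximum covariance analysis) can be sketched as follows, with synthetic data standing in for the FLD and SWE fields:

```python
import numpy as np

def mca(X, Y):
    """Maximum covariance analysis: SVD of the cross-covariance between
    two anomaly fields X (time x space1) and Y (time x space2)."""
    Xa = X - X.mean(axis=0)
    Ya = Y - Y.mean(axis=0)
    C = Xa.T @ Ya / (len(X) - 1)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    scf = s**2 / np.sum(s**2)        # squared covariance fraction per mode
    return U, Vt.T, s, scf

# Two synthetic fields sharing one common time series plus noise
rng = np.random.default_rng(0)
t = rng.standard_normal(200)                 # shared mode of variability
px = rng.standard_normal(10)                 # spatial pattern in field 1
py = rng.standard_normal(12)                 # spatial pattern in field 2
X = np.outer(t, px) + 0.1 * rng.standard_normal((200, 10))
Y = np.outer(t, py) + 0.1 * rng.standard_normal((200, 12))
U, V, s, scf = mca(X, Y)                     # U[:, 0] recovers px (up to sign)
```

The squared covariance fractions `scf` are the analogue of the explained-variance percentages quoted in the abstract for the two joint modes.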

  18. Inference of emission rates from multiple sources using Bayesian probability theory.

    PubMed

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.

  19. Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection.

    PubMed

    Guthke, Reinhard; Möller, Ulrich; Hoffmann, Martin; Thies, Frank; Töpfer, Susanne

    2005-04-15

    The immune response to bacterial infection represents a complex network of dynamic gene and protein interactions. We present an optimized reverse engineering strategy aimed at reconstructing this kind of interaction network. The proposed approach is based on both microarray data and available biological knowledge. The main kinetics of the immune response were identified by fuzzy clustering of gene expression profiles (time series). The number of clusters was optimized using various evaluation criteria. For each cluster a representative gene with a high fuzzy membership was chosen in accordance with available physiological knowledge. Then hypothetical network structures were identified by seeking systems of ordinary differential equations whose simulated kinetics could fit the gene expression profiles of the cluster-representative genes. For the construction of hypothetical network structures, singular value decomposition (SVD) based methods and a newly introduced heuristic network generation method were compared. It turned out that the proposed novel method found sparser networks and gave better fits to the experimental data. Contact: Reinhard.Guthke@hki-jena.de.

  20. Using chaotic forcing to detect damage in a structure

    USGS Publications Warehouse

    Moniz, L.; Nichols, J.; Trickey, S.; Seaver, M.; Pecora, D.; Pecora, L.

    2005-01-01

    In this work we develop a numerical test for Hölder continuity and apply it, together with another test for continuity, to the difficult problem of detecting damage in structures. We subject a thin metal plate with incremental damage to chaotic excitation of various bandwidths. Damage to the plate changes its filtering properties, and therefore the phase space trajectories of the response. Because the data are multivariate (the plate is instrumented with multiple sensors), we use a singular value decomposition of the set of output time series to reduce the embedding dimension of the response time series. We use two geometric tests to compare an attractor reconstructed from data from an undamaged structure to one reconstructed from data from a damaged structure. These two tests translate to testing for both generalized and differentiable synchronization between responses. We show loss of synchronization of responses with damage to the structure. © 2005 American Institute of Physics.
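
    The dimension-reduction step described above (an SVD of the multivariate sensor record to find the effective number of independent directions) can be sketched with simulated sensors; this is a minimal illustration, not the authors' experimental pipeline.

```python
import numpy as np

# Eight simulated "sensors", all driven by three underlying signals
rng = np.random.default_rng(0)
latent = rng.standard_normal((1000, 3))            # hidden sources
mixing = rng.standard_normal((3, 8))               # sensor mixing matrix
records = latent @ mixing + 0.05 * rng.standard_normal((1000, 8))

# SVD of the centered sensor matrix: the singular value spectrum reveals
# the effective dimension of the multivariate response
U, s, Vt = np.linalg.svd(records - records.mean(axis=0), full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_effective = int(np.searchsorted(energy, 0.99) + 1)
reduced = U[:, :n_effective] * s[:n_effective]     # reduced-dimension trajectory
```

The `reduced` trajectories are what a subsequent attractor comparison would operate on, with the embedding dimension cut from the sensor count down to `n_effective`.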

  2. Weak Magnetic Fields in Two Herbig Ae Systems: The SB2 AK Sco and the Presumed Binary HD 95881

    NASA Astrophysics Data System (ADS)

    Järvinen, S. P.; Carroll, T. A.; Hubrig, S.; Ilyin, I.; Schöller, M.; Castelli, F.; Hummel, C. A.; Petr-Gotzens, M. G.; Korhonen, H.; Weigelt, G.; Pogodin, M. A.; Drake, N. A.

    2018-05-01

    We report the detection of weak mean longitudinal magnetic fields in the Herbig Ae double-lined spectroscopic binary AK Sco and in the presumed spectroscopic Herbig Ae binary HD 95881 using observations with the High Accuracy Radial velocity Planet Searcher polarimeter (HARPSpol) attached to the European Southern Observatory's (ESO's) 3.6 m telescope. Employing a multi-line singular value decomposition method, we detect a mean longitudinal magnetic field ⟨Bz⟩ = −83 ± 31 G in the secondary component of AK Sco on one occasion. For HD 95881, we measure ⟨Bz⟩ = −93 ± 25 G and ⟨Bz⟩ = 105 ± 29 G at two different observing epochs. For all the detections the false alarm probability is smaller than 10^−5. For the AK Sco system, we discover that the accretion-diagnostic Na I doublet lines and photospheric lines show intensity variations over the observing nights. The double-lined spectral appearance of HD 95881 is presented here for the first time.

  3. The phylogeny of swimming kinematics: The environment controls flagellar waveforms in sperm motility

    NASA Astrophysics Data System (ADS)

    Guasto, Jeffrey; Burton, Lisa; Zimmer, Richard; Hosoi, Anette; Stocker, Roman

    2013-11-01

    In recent years, phylogenetic and molecular analyses have dominated the study of ecology and evolution. However, physical interactions between organisms and their environment, a fundamental determinant of organism ecology and evolution, are mediated by organism form and function, highlighting the need to understand the mechanics of basic survival strategies, including locomotion. Focusing on spermatozoa, we combined high-speed video microscopy and singular value decomposition analysis to quantitatively compare the flagellar waveforms of eight species, ranging from marine invertebrates to humans. We found striking similarities in sperm swimming kinematics between genetically dissimilar organisms, which could not be uncovered by phylogenetic analysis. The emergence of dominant waveform patterns across species is suggestive of biological optimization for flagellar locomotion and points toward environmental cues as drivers of this convergence. These results reinforce the power of quantitative kinematic analysis to understand the physical drivers of evolution and as an approach to uncover new solutions for engineering applications, such as micro-robotics.

  4. Prioritization of Disease Susceptibility Genes Using LSM/SVD.

    PubMed

    Gong, Lejun; Yang, Ronggen; Yan, Qin; Sun, Xiao

    2013-12-01

    Understanding the role of genetics in disease is one of the most important tasks of the postgenome era. It is generally too expensive and time consuming to perform experimental validation for all candidate genes related to a disease, so computational methods play an important role in prioritizing these candidates. Herein, we propose an approach to prioritize disease genes using latent semantic mapping based on singular value decomposition. Our hypothesis is that functionally similar genes are likely to cause similar diseases, so we measure the functional similarity between known disease susceptibility genes and unknown genes to predict new disease susceptibility genes. Taking autism as an instance, the analysis of the top ten prioritized genes demonstrates that they might be autism susceptibility genes, indicating that our approach can discover new disease susceptibility genes as well as latent disease-gene relations. The prioritized results can also serve as computational evidence supporting diverse interpretations and experimental views for disease researchers.

  5. C-2W Magnetic Measurement Suite

    NASA Astrophysics Data System (ADS)

    Roche, T.; Thompson, M. C.; Griswold, M.; Knapp, K.; Koop, B.; Ottaviano, A.; Tobin, M.; TAE, Tri Alpha Energy, Inc. Team

    2017-10-01

    Commissioning and early operations are underway on C-2W, Tri Alpha Energy's new FRC experiment. The increased complexity of this machine requires an equally enhanced diagnostic capability. A fundamental component of any magnetically confined fusion experiment is a firm understanding of the magnetic field itself. C-2W is outfitted with over 700 magnetic field probes, 550 internal and 150 external. Innovative in-vacuum annular flux loop / B-dot combination probes will provide information about plasma shape, size, pressure, energy, total temperature, and trapped flux when coupled with established theoretical interpretations. The massive Mirnov array, consisting of eight rings of eight 3D probes, will provide detailed information about plasma motion, stability, and MHD modal content with the aid of singular value decomposition (SVD) analysis. Internal Rogowski probes will detect the presence of axial currents flowing in the plasma jet at multiple axial locations. Initial data from this array of diagnostics will be presented along with some interpretation and discussion of the analysis techniques used.

  6. Magnetic evaluation of hydrogen pressures changes on MHD fluctuations in IR-T1 tokamak plasma

    NASA Astrophysics Data System (ADS)

    Alipour, Ramin; Ghanbari, Mohamad R.

    2018-04-01

    Identification of tokamak plasma parameters and investigation of the effects of each parameter on the plasma characteristics are important for a better understanding of magnetohydrodynamic (MHD) activity in tokamak plasma. The effect of different hydrogen pressures of 1.9, 2.5, and 2.9 Torr on MHD fluctuations of the IR-T1 tokamak plasma was investigated using 12 Mirnov coils, singular value decomposition, and wavelet analysis. Parameters such as plasma current, loop voltage, power spectral density, energy percentage of poloidal modes, and dominant spatial and temporal structures of poloidal modes at different plasma pressures are plotted. The results indicate that the MHD activity at a pressure of 2.5 Torr is lower than at the other pressures. It has also been shown that in the stable region of the plasma, at a pressure of 2.5 Torr, the magnetic force and the plasma pressure force balance each other and the MHD activity is at its lowest level.

  7. Mode detection in turbofan inlets from near field sensor arrays.

    PubMed

    Castres, Fabrice O; Joseph, Phillip F

    2007-02-01

    Knowledge of the modal content of the sound field radiated from a turbofan inlet is important for source characterization and for helping to determine noise generation mechanisms in the engine. An inverse technique for determining the mode amplitudes at the duct outlet is proposed using pressure measurements made in the near field. The radiated sound pressure from a duct is modeled by directivity patterns of cut-on modes in the near field using a model based on the Kirchhoff approximation for flanged ducts with no flow. The resulting system of equations is ill posed and it is shown that the presence of modes with eigenvalues close to a cutoff frequency results in a poorly conditioned directivity matrix. An analysis of the conditioning of this directivity matrix is carried out to assess the inversion robustness and accuracy. A physical interpretation of the singular value decomposition is given and allows us to understand the issues of ill conditioning as well as the detection performance of the radiated sound field by a given sensor array.

  8. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
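
    Of the families surveyed, the SVD-based route is the easiest to sketch: proper orthogonal decomposition builds a reduced basis from the SVD of a snapshot matrix and projects the dynamics onto it. A toy linear system stands in for a biochemical network here; the matrix A is an assumption for illustration, not from the paper.

```python
import numpy as np

# Snapshots of a toy 50-state linear system standing in for a large model
rng = np.random.default_rng(0)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable toy dynamics
x = rng.standard_normal(n)
dt = 0.01
snaps = []
for _ in range(400):
    x = x + dt * (A @ x)          # explicit Euler time stepping
    snaps.append(x.copy())
S = np.array(snaps).T             # n x n_snapshots snapshot matrix

# POD: the left singular vectors of S give an energy-ranked reduced basis
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1)
Phi = U[:, :r]                    # reduced basis (r << n)
A_red = Phi.T @ A @ Phi           # Galerkin-projected reduced dynamics
```

The reduced system `A_red` evolves only `r` coordinates yet reproduces the trajectories that generated the snapshots, which is the predictive-capacity-preserving simplification the survey describes.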

  9. Statistical adjustment of simulated climate: the example of seasonal rainfall of tropical America.

    NASA Astrophysics Data System (ADS)

    Moron, Vincent; Navarra, Antonio

    2000-05-01

    This study presents the skill of the seasonal rainfall of tropical America from an ensemble of three 34-year general circulation model (ECHAM 4) simulations forced with observed sea surface temperature between 1961 and 1994. The skill gives a first idea of the amount of potential predictability if the sea surface temperatures are perfectly known some time in advance. We use statistical post-processing based on the leading modes (extracted from a Singular Value Decomposition of the covariance matrix between observed and simulated rainfall fields) to improve the raw skill obtained by simple comparison between observations and simulations. It is shown that 36-55% of the observed seasonal variability is explained by the simulations on a regional basis. Skill is greatest for the Brazilian Nordeste (March-May), but is also high for northern South America and the Caribbean basin in June-September and for northern Amazonia in September-November, for example.

  10. Interannual Rainfall Variability in North-East Brazil: Observation and Model Simulation

    NASA Astrophysics Data System (ADS)

    Harzallah, A.; Rocha de Aragão, J. O.; Sadourny, R.

    1996-08-01

    The relationship between interannual variability of rainfall in north-east Brazil and tropical sea-surface temperature is studied using observations and model simulations. The simulated precipitation is the average of seven independent realizations performed using the Laboratoire de Météorologie Dynamique atmospheric general circulation model forced by the 1970-1988 observed sea-surface temperature. The model reproduces the rainfall anomalies very well (correlation of 0.91 between observed and modelled anomalies). The study confirms that precipitation in north-east Brazil is highly correlated with the sea-surface temperature in the tropical Atlantic and Pacific oceans. Using the singular value decomposition method, we find that Nordeste rainfall is modulated by two independent oscillations, both governed by the Atlantic dipole, but one involving only the Pacific, the other one having a period of about 10 years. Correlations between precipitation in north-east Brazil during February-May and the sea-surface temperature 6 months earlier indicate that both modes are essential to estimate the quality of the rainy season.

  11. A tripolar pattern as an internal mode of the East Asian summer monsoon

    NASA Astrophysics Data System (ADS)

    Hirota, Nagio; Takahashi, Masaaki

    2012-11-01

    A tripolar anomaly pattern with centers located around the Philippines, China/Japan, and East Siberia dominantly appears in climate variations of the East Asian summer monsoon. In this study, we extracted this pattern as the first mode of a singular value decomposition (SVD1) over East Asia. The squared covariance fraction of SVD1 was 59 %, indicating that this pattern can be considered a dominant pattern of climate variations. Moreover, the results of numerical experiments suggested that the structure is also a dominant pattern of linear responses, even if external forcing is distributed homogeneously over the Northern Hemisphere. Thus, the tripolar pattern can be considered an internal mode that is characterized by the internal atmospheric processes. In this pattern, the moist processes strengthen the circulation anomalies, the dynamical energy conversion supplies energy to the anomalies, and the Rossby waves propagate northward in the lower troposphere and southeastward in the upper troposphere. These processes are favorable for the pattern to have large amplitude and to influence a large area.

  12. The semantic representation of prejudice and stereotypes.

    PubMed

    Bhatia, Sudeep

    2017-07-01

    We use a theory of semantic representation to study prejudice and stereotyping. Particularly, we consider large datasets of newspaper articles published in the United States, and apply latent semantic analysis (LSA), a prominent model of human semantic memory, to these datasets to learn representations for common male and female, White, African American, and Latino names. LSA performs a singular value decomposition on word distribution statistics in order to recover word vector representations, and we find that our recovered representations display the types of biases observed in human participants using tasks such as the implicit association test. Importantly, these biases are strongest for vector representations with moderate dimensionality, and weaken or disappear for representations with very high or very low dimensionality. Moderate dimensional LSA models are also the best at learning race, ethnicity, and gender-based categories, suggesting that social category knowledge, acquired through dimensionality reduction on word distribution statistics, can facilitate prejudiced and stereotyped associations. Copyright © 2017 Elsevier B.V. All rights reserved.
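
    The LSA pipeline described above (a truncated SVD of word distribution statistics, followed by similarity comparisons between the recovered vectors) can be sketched on a tiny hypothetical term-document matrix; the vocabulary and counts below are invented for illustration, not from the study's newspaper corpora.

```python
import numpy as np

# Invented term-document count matrix (terms x documents)
terms = ["doctor", "nurse", "engineer", "pilot", "hospital", "machine"]
counts = np.array([
    [4, 3, 0, 0],   # doctor
    [3, 4, 0, 0],   # nurse
    [0, 0, 4, 3],   # engineer
    [0, 0, 3, 4],   # pilot
    [5, 4, 0, 1],   # hospital
    [0, 1, 4, 4],   # machine
], dtype=float)

# LSA: truncated SVD yields low-dimensional word vectors; semantic
# similarity is the cosine between them
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                                  # moderate dimensionality
vecs = U[:, :k] * s[:k]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_dn = cos(vecs[0], vecs[1])         # doctor vs. nurse: same topic
sim_de = cos(vecs[0], vecs[2])         # doctor vs. engineer: different topic
```

Varying `k` is the dimensionality manipulation the abstract discusses: the associations that emerge in the reduced space depend on how many singular directions are retained.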

  13. Local identifiability and sensitivity analysis of neuromuscular blockade and depth of hypnosis models.

    PubMed

    Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T

    2014-01-01

    This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in the clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
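
    The identifiability check used here (inspecting the singular values of a normalized sensitivity matrix) can be sketched on a toy two-exponential model; the model and parameter values are assumptions for illustration, not the paper's pharmacokinetic/pharmacodynamic models.

```python
import numpy as np

# Toy model y(t) = a*exp(-b*t) + c*exp(-d*t); with b ≈ d the pairs (a, c)
# and (b, d) become nearly indistinguishable from the output
t = np.linspace(0, 5, 100)
a, b, c, d = 1.0, 1.0, 1.0, 1.01
S = np.column_stack([
    np.exp(-b * t),             # dy/da
    -a * t * np.exp(-b * t),    # dy/db
    np.exp(-d * t),             # dy/dc
    -c * t * np.exp(-d * t),    # dy/dd
]) * np.array([a, b, c, d])     # normalize columns by parameter values

sv = np.linalg.svd(S, compute_uv=False)
cond = sv[0] / sv[-1]           # large ratio flags practical non-identifiability
worst = np.linalg.svd(S)[2][-1] # parameter combination hardest to identify
```

A near-zero trailing singular value is exactly the over-parameterization signature the abstract reports for the standard models, and `worst` names the offending parameter combination.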

  14. Face recognition using tridiagonal matrix enhanced multivariance products representation

    NASA Astrophysics Data System (ADS)

Özay, Evrim Korkmaz

    2017-01-01

    This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is considered. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm, since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR, because the algorithm is constructed from recursions of certain arithmetic operations without requiring any iteration. The algorithm was trained and tested on the ORL face image database, with 400 grayscale images of 40 different people, and TMEMPR's performance was compared with that of SVD.

  15. SVD compression for magnetic resonance fingerprinting in the time domain.

    PubMed

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions, simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method of obtaining the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition, which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
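
    The compression scheme can be sketched as follows: project both the dictionary and the measured signal onto the top right singular vectors of the dictionary, and match by inner product in the reduced space. The dictionary below is a synthetic stand-in built from a few smooth temporal modes, not a Bloch simulation.

```python
import numpy as np

# Synthetic "dictionary": 500 signal evolutions of length 1000 built from a
# few smooth temporal modes (a stand-in for Bloch-simulated fingerprints)
rng = np.random.default_rng(0)
tt = np.linspace(0, 1, 1000)
modes = np.stack([np.exp(-tt / tau) for tau in (0.05, 0.2, 0.8)]
                 + [np.sin(2 * np.pi * f * tt) for f in (1, 3)])
D = rng.standard_normal((500, 5)) @ modes          # entries x time

# Compress along the time axis with a truncated SVD
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 5
D_c = D @ Vt[:k].T                                 # 500 x k compressed dictionary

# Matching by normalized inner product agrees between full and compressed spaces
signal = D[137] + 0.01 * rng.standard_normal(1000)
best_full = int(np.argmax(D @ signal / np.linalg.norm(D, axis=1)))
sig_c = signal @ Vt[:k].T
best_comp = int(np.argmax(D_c @ sig_c / np.linalg.norm(D_c, axis=1)))
```

Because the inner products are preserved in the projected space, the match found in the k-dimensional dictionary agrees with the full-length match while each comparison costs k multiplications instead of 1000, which is the source of the speed-up the abstract quantifies.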

  16. Dynamic Analysis and Control of Lightweight Manipulators with Flexible Parallel Link Mechanisms. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lee, Jeh Won

    1990-01-01

    The objective is the theoretical analysis and the experimental verification of dynamics and control of a two link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure which is useful for reducing the number of terms calculated, checking correctness, or extending the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using a singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using load-interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
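
    The constraint-elimination step (an SVD of the constraint Jacobian yields an orthonormal basis for the admissible motions) can be sketched as follows; the Jacobian here is a random stand-in, not the manipulator's actual constraint equations.

```python
import numpy as np

# Toy linear constraints J @ qdot = 0 on six generalized coordinates;
# J is a random stand-in for a constraint Jacobian
rng = np.random.default_rng(0)
n_coords, n_constraints = 6, 2
J = rng.standard_normal((n_constraints, n_coords))

# The trailing right singular vectors span the null space of J, giving an
# orthonormal basis N for the admissible (constraint-satisfying) motions
U, s, Vt = np.linalg.svd(J)
N = Vt[n_constraints:].T                 # n_coords x (n_coords - n_constraints)

# Substituting qdot = N @ z eliminates the dependent coordinates exactly
residual = np.linalg.norm(J @ N)
```

Because `N` is orthonormal, the transformed equations stay well conditioned, which is the numerical-stability advantage over Gaussian-elimination-style coordinate elimination that record 2 in this collection also notes.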

  17. Feedback process responsible for intermodel diversity of ENSO variability

    NASA Astrophysics Data System (ADS)

    An, Soon-Il; Heo, Eun Sook; Kim, Seon Tae

    2017-05-01

    The origin of the intermodel diversity of the El Niño-Southern Oscillation (ENSO) variability is investigated by applying a singular value decomposition (SVD) analysis between the intermodel tropical Pacific sea surface temperature anomalies (SSTA) variance and the intermodel ENSO stability index (BJ index). The first SVD mode features an ENSO-like pattern for the intermodel SSTA variance (74% of total variance) and the dominant thermocline feedback (TH) for the BJ index (51%). Intermodel TH is mainly modified by the intermodel sensitivity of the zonal thermocline gradient response to zonal winds over the equatorial Pacific (βh), and the intermodel βh correlates more strongly with the intermodel off-equatorial wind stress curl anomalies than with the equatorial zonal wind stress anomalies. Finally, the intermodel off-equatorial wind stress curl is associated with the meridional shape and intensity of ENSO-related wind patterns, which may cause a model-to-model difference in ENSO variability by influencing the off-equatorial oceanic Rossby wave response.

  18. Genetic Algorithm for Opto-thermal Skin Hydration Depth Profiling Measurements

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Xiao, Perry; Imhof, R. E.

    2013-09-01

    Stratum corneum is the outermost skin layer, and the water content in stratum corneum plays a key role in skin cosmetic properties as well as skin barrier functions. However, to measure the water content, especially the water concentration depth profile, within stratum corneum is very difficult. Opto-thermal emission radiometry, or OTTER, is a promising technique that can be used for such measurements. In this paper, a study on stratum corneum hydration depth profiling by using a genetic algorithm (GA) is presented. The pros and cons of a GA compared against other inverse algorithms such as neural networks, maximum entropy, conjugate gradient, and singular value decomposition will be discussed first. Then, it will be shown how to use existing knowledge to optimize a GA for analyzing the opto-thermal signals. Finally, these latest GA results on hydration depth profiling of stratum corneum under different conditions, as well as on the penetration profiles of externally applied solvents, will be shown.

  19. Reaction trajectory revealed by a joint analysis of protein data bank.

    PubMed

    Ren, Zhong

    2013-01-01

    Structural motions along a reaction pathway hold the secret of how a biological macromolecule functions. If each static structure were considered a snapshot of the protein molecule in action, a large collection of structures would constitute a multidimensional conformational space of enormous size. Here I present a joint analysis of hundreds of known structures of human hemoglobin in the Protein Data Bank. By applying singular value decomposition to distance matrices of these structures, I demonstrate that this large collection of structural snapshots, derived under a wide range of experimental conditions, arranges in an orderly fashion along a reaction pathway. The structural motions along this extensive trajectory, including several helical transformations, lead to a reverse-engineered mechanism of the cooperative machinery (Ren, companion article), and shed light on pathological properties of the abnormal homotetrameric hemoglobins from α-thalassemia. This method of meta-analysis provides a general approach to structural dynamics based on static protein structures in this post-genomics era.
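
    The core decomposition step can be illustrated on synthetic coordinates. The uniform-expansion "pathway" below is an artificial stand-in for real conformational change, and all names and sizes are made up for the sketch:

```python
import numpy as np

# Flatten each structure's pairwise-distance matrix into one row of D,
# then let the leading SVD mode order the snapshots along the pathway.
rng = np.random.default_rng(8)
n_atoms, n_structs = 20, 50
base = rng.standard_normal((n_atoms, 3))
progress = np.linspace(0.0, 1.0, n_structs)    # position along a toy "pathway"

rows = []
for p in progress:
    coords = (1.0 + 0.2 * p) * base            # structures morph smoothly
    diff = coords[:, None, :] - coords[None, :, :]
    rows.append(np.sqrt((diff ** 2).sum(axis=-1)).ravel())
D = np.asarray(rows)
D -= D.mean(axis=0)                            # centre before decomposing

U, s, Vt = np.linalg.svd(D, full_matrices=False)
scores = U[:, 0] * s[0]   # each snapshot's coordinate on the leading mode
```

    For a smooth one-parameter family of structures, the leading-mode scores recover the ordering of the snapshots along the pathway (up to an overall sign).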

  20. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.

  1. Full-envelope aerodynamic modeling of the Harrier aircraft

    NASA Technical Reports Server (NTRS)

    Mcnally, B. David

    1986-01-01

    A project to identify a full-envelope model of the YAV-8B Harrier using flight-test and parameter identification techniques is described. As part of the research in advanced control and display concepts for V/STOL aircraft, a full-envelope aerodynamic model of the Harrier is identified, using mathematical model structures and parameter identification methods. A global-polynomial model structure is also used as a basis for the identification of the YAV-8B aerodynamic model. State estimation methods are used to ensure flight data consistency prior to parameter identification. Equation-error methods are used to identify model parameters. A fixed-base simulator is used extensively to develop flight test procedures and to validate parameter identification software. Using simple flight maneuvers, a simulated data set was created covering the YAV-8B flight envelope from about 0.3 to 0.7 Mach and about -5 to 15 deg angle of attack. A singular value decomposition implementation of the equation-error approach produced good parameter estimates based on this simulated data set.

  2. Real-time deblurring of handshake blurred images on smartphones

    NASA Astrophysics Data System (ADS)

    Pourreza-Shahri, Reza; Chang, Chih-Hsiang; Kehtarnavaz, Nasser

    2015-02-01

    This paper discusses an Android app for the purpose of removing blur that is introduced as a result of handshakes when taking images via a smartphone. This algorithm utilizes two images to achieve deblurring in a computationally efficient manner without suffering from the artifacts associated with deconvolution deblurring algorithms. The first image is the normal or auto-exposure image and the second image is a short-exposure image that is automatically captured immediately before or after the auto-exposure image is taken. A low-rank approximation image is obtained by applying singular value decomposition to the auto-exposure image, which may appear blurred due to handshakes. This approximation image does not suffer from blurring while incorporating the image brightness and contrast information. The eigenvalues extracted from the low-rank approximation image are then combined with those from the short-exposure image. It is shown that this deblurring app is computationally more efficient than the adaptive tonal correction algorithm which was previously developed for the same purpose.
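
    The low-rank approximation step can be sketched with NumPy. The toy image and the choice of rank below are illustrative assumptions, not the app's actual processing chain:

```python
import numpy as np

def low_rank_approximation(image, rank):
    """Truncated-SVD approximation keeping the `rank` largest singular values."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Toy "image": a rank-1 brightness/contrast pattern plus small detail.
rng = np.random.default_rng(0)
base = np.outer(np.linspace(1.0, 2.0, 64), np.linspace(1.0, 2.0, 64))
image = base + 0.01 * rng.standard_normal((64, 64))
approx = low_rank_approximation(image, rank=1)
```

    Keeping only the dominant singular components preserves the smooth brightness and contrast structure while discarding fine detail, which is the property the two-image fusion exploits.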

  3. SVD and Hankel matrix based de-noising approach for ball bearing fault detection and its assessment using artificial faults

    NASA Astrophysics Data System (ADS)

    Golafshan, Reza; Yuce Sanliturk, Kenan

    2016-03-01

    Ball bearings remain one of the most crucial components in industrial machines, and due to their critical role, it is of great importance to monitor their condition under operation. However, due to the background noise in acquired signals, it is not always possible to identify probable faults. This incapability in identifying the faults makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, a Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to ball bearing time-domain vibration signals as well as to their spectra, for the elimination of the background noise and the improvement of the reliability of the fault detection process. The test cases conducted using experimental as well as simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for ball bearing fault detection.

  4. Real-time Multichannel System for Beat-to-Beat QT Interval Variability

    NASA Technical Reports Server (NTRS)

    Starc, Vito; Schlegel, Todd T.

    2006-01-01

    The measurement of beat-to-beat QT interval variability (QTV) shows clinical promise for identifying several types of cardiac pathology. However, until now, there has been no device capable of displaying, in real time on a beat-to-beat basis, changes in QTV in all 12 conventional leads in a continuously monitored patient. While several software programs have been designed to analyze QTV, heretofore, such programs have all involved only a few channels (at most) and/or have required laborious user interaction or offline calculations and postprocessing, limiting their clinical utility. This paper describes a PC-based ECG software program that, in real time, acquires, analyzes, and displays QTV and also PQ interval variability (PQV) in each of the eight independent channels that constitute the 12-lead conventional ECG. The system also processes certain related signals that are derived from singular value decomposition and that help to reduce the overall effects of noise on the real-time QTV and PQV results.

  5. Interactive graphical system for small-angle scattering analysis of polydisperse systems

    NASA Astrophysics Data System (ADS)

    Konarev, P. V.; Volkov, V. V.; Svergun, D. I.

    2016-09-01

    A program suite for one-dimensional small-angle scattering analysis of polydisperse systems and multiple data sets is presented. The main program, POLYSAS, has a menu-driven graphical user interface calling computational modules from the ATSAS package to perform data treatment and analysis. The graphical menu interface allows one to process multiple (time, concentration or temperature-dependent) data sets and interactively change the parameters for the data modelling using sliders. The graphical representation of the data is done via the Winteracter-based program SASPLOT. The package is designed for the analysis of polydisperse systems and mixtures, and permits one to obtain size distributions and evaluate the volume fractions of the components using linear and non-linear fitting algorithms as well as model-independent singular value decomposition. The use of the POLYSAS package is illustrated by the recent examples of its application to study concentration-dependent oligomeric states of proteins and time kinetics of polymer micelles for anticancer drug delivery.

  6. Analysis of protein circular dichroism spectra for secondary structure using a simple matrix multiplication.

    PubMed

    Compton, L A; Johnson, W C

    1986-05-15

    Inverse circular dichroism (CD) spectra are presented for each of the five major secondary structures of proteins: alpha-helix, antiparallel and parallel beta-sheet, beta-turn, and other (random) structures. The fraction of each secondary structure in a protein is predicted by forming the dot product of the corresponding inverse CD spectrum, expressed as a vector, with the CD spectrum of the protein digitized in the same way. We show how this method is based on the construction of the generalized inverse from the singular value decomposition of a set of CD spectra corresponding to proteins whose secondary structures are known from X-ray crystallography. These inverse spectra compute secondary structure directly from protein CD spectra without resorting to least-squares fitting and standard matrix inversion techniques. In addition, spectra corresponding to the individual secondary structures, analogous to the CD spectra of synthetic polypeptides, are generated from the five most significant CD eigenvectors.
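
    The construction can be sketched numerically. The dimensions and the synthetic "reference set" below are assumptions; real analyses use measured spectra of proteins with known crystal structures:

```python
import numpy as np

# Hypothetical reference set: 5 proteins with known secondary-structure
# fractions F (structures x proteins) and CD spectra C (wavelengths x proteins).
rng = np.random.default_rng(2)
basis = rng.standard_normal((40, 5))    # pure-structure spectra (wavelengths x structures)
F = np.abs(rng.standard_normal((5, 5)))
F /= F.sum(axis=0)                      # columns are fractions summing to 1
C = basis @ F                           # observed reference spectra

# "Inverse CD spectra": rows of F @ C^+, where C^+ is the Moore-Penrose
# generalized inverse built from the SVD of the reference set.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
inverse_spectra = F @ C_pinv

# Predicting fractions for a new protein is then a single dot product.
new_fracs = np.array([0.3, 0.2, 0.2, 0.2, 0.1])
new_spectrum = basis @ new_fracs
predicted = inverse_spectra @ new_spectrum
```

    Once the inverse spectra are precomputed, analysis of any new protein reduces to one matrix-vector product, which is the "simple matrix multiplication" of the title.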

  7. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be maintained as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  8. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles

    PubMed Central

    Wang, Wei; Chen, Xiyuan

    2018-01-01

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used for improving the numerical stability of the fifth-degree CKF in this paper. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of azimuth reaches the maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm. PMID:29473912
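
    The reason SVD helps the filter's numerical stability can be sketched generically: round-off can push a covariance slightly indefinite, and rebuilding it from its SVD restores a usable positive semi-definite matrix. This is a sketch of the idea only, not the paper's full fifth-degree CKF; the function name and floor value are assumptions:

```python
import numpy as np

def make_psd(P, floor=1e-12):
    """Symmetrize P and rebuild it from its SVD with floored singular
    values, so later Cholesky/cubature-point steps stay well defined."""
    P = 0.5 * (P + P.T)
    U, s, _ = np.linalg.svd(P)
    return (U * np.maximum(s, floor)) @ U.T

# A covariance that round-off has pushed slightly indefinite.
P_bad = np.diag([1.0, 1e-3, -1e-12])
P_fixed = make_psd(P_bad)
```

    Because each term s_i * u_i * u_i^T is positive semi-definite, the rebuilt matrix always admits a Cholesky factorization, which cubature-point generation requires.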

  9. Reaction Trajectory Revealed by a Joint Analysis of Protein Data Bank

    PubMed Central

    Ren, Zhong

    2013-01-01

    Structural motions along a reaction pathway hold the secret of how a biological macromolecule functions. If each static structure were considered a snapshot of the protein molecule in action, a large collection of structures would constitute a multidimensional conformational space of enormous size. Here I present a joint analysis of hundreds of known structures of human hemoglobin in the Protein Data Bank. By applying singular value decomposition to distance matrices of these structures, I demonstrate that this large collection of structural snapshots, derived under a wide range of experimental conditions, arranges in an orderly fashion along a reaction pathway. The structural motions along this extensive trajectory, including several helical transformations, lead to a reverse-engineered mechanism of the cooperative machinery (Ren, companion article), and shed light on pathological properties of the abnormal homotetrameric hemoglobins from α-thalassemia. This method of meta-analysis provides a general approach to structural dynamics based on static protein structures in this post-genomics era. PMID:24244274

  10. A reduced-order model from high-dimensional frictional hysteresis

    PubMed Central

    Biswas, Saurabh; Chatterjee, Anindya

    2014-01-01

    Hysteresis in material behaviour involves both signum nonlinearities and high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
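
    Choosing basis vectors with the SVD (a proper orthogonal decomposition of solution snapshots) can be sketched as follows. The toy snapshot matrix and the 99.9% energy cut-off are illustrative assumptions, not the paper's frictional system:

```python
import numpy as np

# Snapshot matrix: each column is a system state at one load increment
# (here a toy 50-dof response driven by two underlying modes).
rng = np.random.default_rng(3)
modes = np.linalg.qr(rng.standard_normal((50, 2)))[0]
amplitudes = rng.standard_normal((2, 200))
snapshots = modes @ amplitudes

# Left singular vectors with large singular values form the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1   # smallest basis with 99.9% energy
basis = U[:, :r]

# Reduced coordinates and reconstruction from the low-order basis.
reduced = basis.T @ snapshots
reconstructed = basis @ reduced
```

    The singular-value spectrum tells you how many states the reduced-order model needs: here the snapshots are exactly two-dimensional, so two basis vectors reconstruct them to machine precision.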

  11. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated into linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated by using Singular Value Decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, highly accelerated parallel imaging can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
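
    The SVD-based calibration idea, extracting operators that annihilate the calibration data from the small-singular-value subspace, can be sketched in generic linear-algebra form. The matrix sizes are arbitrary assumptions, not an MRI model:

```python
import numpy as np

# Toy calibration matrix with linearly dependent columns, so a
# nontrivial null space exists for "null operations" to live in.
rng = np.random.default_rng(4)
B = rng.standard_normal((100, 25))
A = np.hstack([B, B @ rng.standard_normal((25, 5))])  # 5 dependent columns

U, s, Vt = np.linalg.svd(A)
tol = s.max() * max(A.shape) * np.finfo(float).eps
rank = int(np.sum(s > tol))
null_ops = Vt[rank:]              # rows spanning the null space of A

residual = np.linalg.norm(A @ null_ops.T)   # ~0 by construction
```

    Right singular vectors whose singular values are (numerically) zero annihilate the calibration data; constraints built from them are what the reconstruction then enforces on the unsampled k-space data.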

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, Gunther G., E-mail: gunther.andersson@flinders.edu.au, E-mail: vladimir.golovko@canterbury.ac.nz, E-mail: greg.metha@adelaide.edu.au; Al Qahtani, Hassan S.; Golovko, Vladimir B.

    Chemically made, atomically precise phosphine-stabilized clusters Au{sub 9}(PPh{sub 3}){sub 8}(NO{sub 3}){sub 3} were deposited on titania and silica from solutions at various concentrations and the samples heated under vacuum to remove the ligands. Metastable induced electron spectroscopy (MIES) was used to determine the density of states at the surface, and X-ray photoelectron spectroscopy for analysing the composition of the surface. It was found for the Au{sub 9} cluster deposited on titania that the ligands react with the titania substrate. Based on analysis using the singular value decomposition algorithm, the series of MIE spectra can be described as a linear combination of 3 base spectra that are assigned to the spectra of the substrate, the phosphine ligands on the substrate, and the Au clusters anchored to titania after removal of the ligands. On silica, the Au clusters show significant agglomeration after heat treatment and no interaction of the ligands with the substrate can be identified.

  13. Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling

    DOE PAGES

    Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...

    2014-07-14

    Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies are orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes including locked mode precursors, automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.

  14. Loops and Self-Reference in the Construction of Dictionaries

    NASA Astrophysics Data System (ADS)

    Levary, David; Eckmann, Jean-Pierre; Moses, Elisha; Tlusty, Tsvi

    2012-07-01

    Dictionaries link a given word to a set of alternative words (the definition) which in turn point to further descendants. Iterating through definitions in this way, one typically finds that definitions loop back upon themselves. We demonstrate that such definitional loops are created in order to introduce new concepts into a language. In contrast to the expectations for a random lexical network, in graphs of the dictionary, meaningful loops are quite short, although they are often linked to form larger, strongly connected components. These components are found to represent distinct semantic ideas. This observation can be quantified by a singular value decomposition, which uncovers a set of conceptual relationships arising in the global structure of the dictionary. Finally, we use etymological data to show that elements of loops tend to be added to the English lexicon simultaneously and incorporate our results into a simple model for language evolution that falls within the “rich-get-richer” class of network growth.

  15. A data base and analysis program for shuttle main engine dynamic pressure measurements

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1986-01-01

    A dynamic pressure data base management system is described for measurements obtained from space shuttle main engine (SSME) hot firing tests. The data were provided in terms of engine power level and rms pressure time histories, and power spectra of the dynamic pressure measurements at selected times during each test. Test measurements and engine locations are defined along with a discussion of data acquisition and reduction procedures. A description of the data base management analysis system is provided and subroutines developed for obtaining selected measurement means, variances, ranges and other statistics of interest are discussed. A summary of pressure spectra obtained at SSME rated power level is provided for reference. Application of the singular value decomposition technique to spectrum interpolation is discussed and isoplots of interpolated spectra are presented to indicate measurement trends with engine power level. Program listings of the data base management and spectrum interpolation software are given. Appendices are included to document all data base measurements.

  16. A first detection of singlet-to-triplet conversion from the 1¹Bu⁻ to the 1³Ag state and triplet internal conversion from the 1³Ag to the 1³Bu state in carotenoids: dependence on the conjugation length

    NASA Astrophysics Data System (ADS)

    Rondonuwu, Ferdy S.; Watanabe, Yasutaka; Fujii, Ritsuko; Koyama, Yasushi

    2003-07-01

    Subpicosecond time-resolved absorption spectra were recorded in the visible region for a set of photosynthetic carotenoids having different numbers of conjugated double bonds (n), which include neurosporene (n=9), spheroidene (n=10), lycopene (n=11), anhydrorhodovibrin (n=12) and spirilloxanthin (n=13). Singular-value decomposition and global fitting of the spectral-data matrices lead us to a branched relaxation scheme including both (1) singlet internal conversion in the sequence 1¹Bu⁺ → 1¹Bu⁻ → 2¹Ag⁻ → 1¹Ag⁻ (ground), and (2) singlet-to-triplet conversion 1¹Bu⁻ → 1³Ag followed by triplet internal conversion 1³Ag → 1³Bu.

  17. SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain

    PubMed Central

    McGivney, Debra F.; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A.

    2016-01-01

    Magnetic resonance fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method to obtain the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition (SVD), which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously. PMID:25029380
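
    The compress-then-match idea can be sketched as follows. The dictionary here is synthetic and exactly low rank, so the compression is lossless by construction; real MRF dictionaries are only approximately low rank, and all sizes are illustrative:

```python
import numpy as np

# Synthetic dictionary: N unit-norm time signals of length T lying in a
# k-dimensional temporal subspace (a stand-in for Bloch-simulated signals).
rng = np.random.default_rng(5)
T, N, k = 500, 300, 10
latent = rng.standard_normal((T, k)) @ rng.standard_normal((k, N))
dictionary = latent / np.linalg.norm(latent, axis=0)

U, s, Vt = np.linalg.svd(dictionary, full_matrices=False)
Uk = U[:, :k]                         # rank-k temporal subspace
compressed_dict = Uk.T @ dictionary   # k x N instead of T x N

# Matching an observed signal in the compressed time domain.
truth = 123
signal = dictionary[:, truth]
match = int(np.argmax(np.abs(compressed_dict.T @ (Uk.T @ signal))))
```

    Inner products are preserved within the retained subspace, so the argmax match is unchanged while each comparison costs k multiply-adds instead of T.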

  18. Analysis and control of the photon beam position at PLS-II

    PubMed Central

    Ko, J.; Kim, I.-Y.; Kim, C.; Kim, D.-T.; Huang, J.-Y.; Shin, S.

    2016-01-01

    At third-generation light sources, the photon beam position stability is a critical issue for user experiments. In general, photon beam position monitors are developed to detect the real photon beam position, and the position is controlled by a feedback system in order to maintain the reference photon beam position. At Pohang Light Source II, a photon beam position stability of less than 1 µm r.m.s. was achieved for a user service period in the beamline, where the photon beam position monitor is installed. Nevertheless, a detailed analysis of the photon beam position data was necessary in order to ensure the performance of the photon beam position monitor, since it can suffer from various unknown types of noise, such as background contamination due to upstream or downstream dipole radiation, and undulator gap dependence. This paper reports the results of a start-to-end study of the photon beam position stability and a singular value decomposition analysis to confirm the reliability of the photon beam position data. PMID:26917132

  19. Wavefront reconstruction from non-modulated pyramid wavefront sensor data using a singular value type expansion

    NASA Astrophysics Data System (ADS)

    Hutterer, Victoria; Ramlau, Ronny

    2018-03-01

    The new generation of extremely large telescopes includes adaptive optics systems to correct for atmospheric blurring. In this paper, we present a new method of wavefront reconstruction from non-modulated pyramid wavefront sensor data. The approach is based on a simplified sensor model represented as the finite Hilbert transform of the incoming phase. Due to the non-compactness of the finite Hilbert transform operator, the classical theory for singular systems is not applicable. Nevertheless, we can express the Moore-Penrose inverse as a singular-value-type expansion with weighted Chebyshev polynomials.

  20. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l₀-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order-cumulants-based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l₀-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the l₁-norm minimization based methods, such as l₁-SVD (singular value decomposition), RV (real-valued) l₁-SVD and RV l₁-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.

  1. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratios of the reflected longitudinal wave to the incident longitudinal wave and of the refracted shear wave to the total transmitted wave into bone in calculating the absorbed power field, and then to reconstruct the temperature distribution in muscle and bone regions based on a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrade the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates by employing an optimal value of the regularization parameter. Based on the analysis of singular-value decomposition, the optimal sensor position in a case utilizing only one temperature sensor can be determined to make the inverse algorithm converge to the true solution.
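
    Zero-order Tikhonov regularization has a closed form in terms of SVD filter factors, which shows directly how the regularization parameter damps noise-amplifying small singular values. The toy forward matrix, noise level, and α below are illustrative assumptions, not the paper's bioheat model:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Zero-order Tikhonov solution of min ||Ax - b||^2 + alpha^2 ||x||^2
    via SVD: filter factors s_i / (s_i^2 + alpha^2) damp small-s_i modes."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + alpha ** 2)
    return Vt.T @ (filt * (U.T @ b))

# Ill-conditioned toy inverse problem with noisy data.
rng = np.random.default_rng(6)
A = np.vander(np.linspace(0.0, 1.0, 30), 8, increasing=True)
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-3 * rng.standard_normal(30)

x_reg = tikhonov_solve(A, b, alpha=1e-2)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
```

    Because every filter factor satisfies s/(s² + α²) ≤ 1/s, the regularized solution never has larger norm than the plain least-squares solution, which is exactly how noise blow-up is suppressed.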

  2. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

    The structures around bearings are complex, and the working environment is variable. These conditions give the collected vibration signals nonlinear, non-stationary, and chaotic characteristics that make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with superiorities in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are extracted subsequently to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information; the Gaussian mixture model is utilized to calculate the confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration datasets of bearings demonstrate the effectiveness of the proposed methods.

  3. Applications of a Novel Clustering Approach Using Non-Negative Matrix Factorization to Environmental Research in Public Health

    PubMed Central

    Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S. Stanley

    2016-01-01

    Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability. PMID:27213413
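
    The split-and-concatenate preprocessing that makes centred, mixed-sign data non-negative (the step feeding NMF in PosNegNMF) can be sketched directly; variable names and sizes are illustrative:

```python
import numpy as np

def split_pos_neg(X):
    """Concatenate the positive part and the absolute negative part
    column-wise, so the result is non-negative and NMF-ready."""
    pos = np.maximum(X, 0.0)
    neg = np.maximum(-X, 0.0)
    return np.hstack([pos, neg])

rng = np.random.default_rng(7)
X = rng.standard_normal((6, 4))
X -= X.mean(axis=0)          # centring makes the data mixed-sign
Y = split_pos_neg(X)
```

    The transformation is lossless: subtracting the negative-part columns from the positive-part columns recovers the original matrix, so clusters found on Y remain interpretable in terms of X.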

  4. Applications of a Novel Clustering Approach Using Non-Negative Matrix Factorization to Environmental Research in Public Health.

    PubMed

    Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S Stanley

    2016-05-18

    Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability.

  5. Unattainable extended spacetime regions in conformal gravity

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Hrishikesh; Benavides-Gallego, Carlos A.; Bambi, Cosimo; Modesto, Leonardo

    2018-03-01

    The Janis-Newman-Winicour metric is a solution of Einstein's gravity minimally coupled to a real massless scalar field. The γ-metric is instead a vacuum solution of Einstein's gravity. Both spacetimes have no horizon and possess a naked singularity at a finite value of the radial coordinate, where curvature invariants diverge and the spacetimes are geodesically incomplete. In this paper, we reconsider these solutions in the framework of conformal gravity and show that it is possible to resolve the spacetime singularities with a suitable choice of the conformal factor. Curvature invariants then remain finite over the whole spacetime. Massive particles never reach the previously singular surface, and massless particles can never do so at a finite value of their affine parameter. Our results support the conjecture according to which conformal gravity can fix the singularity problem that plagues Einstein's gravity.

  6. Simulation And Forecasting of Daily Pm10 Concentrations Using Autoregressive Models In Kagithane Creek Valley, Istanbul

    NASA Astrophysics Data System (ADS)

    Ağaç, Kübra; Koçak, Kasım; Deniz, Ali

    2015-04-01

    A time-series approach using autoregressive (AR), moving-average (MA), and seasonal autoregressive integrated moving average (SARIMA) models was used in this study to simulate and forecast daily PM10 concentrations in Kagithane Creek Valley, Istanbul. Hourly PM10 concentrations were measured in Kagithane Creek Valley between 2010 and 2014. The Bosphorus divides the city into European and Asian parts. The historical part of the city lies along the Golden Horn, and our study area, Kagithane Creek Valley, is connected with this historical part. The study area is highly polluted because of its topographical structure and industrial activities, and population density is extremely high at this site. Dispersion conditions in the creek valley are very poor, so it is necessary to calculate PM10 levels for air quality and human health. For the given period there were some missing PM10 concentration values, so to obtain accurate results a gap-filling method based on Singular Spectrum Analysis (SSA) was applied. SSA is an efficient, state-of-the-art method for gap filling. The SSA-MTM Toolkit was used for our study. SSA can be considered a noise-reduction algorithm because it decomposes an original time series into trend (if present), oscillatory, and noise components by way of a singular value decomposition. The basic SSA algorithm has decomposition and reconstruction stages. For the given period, daily and monthly PM10 concentrations were calculated and episodic periods were determined. Long-term and short-term PM10 concentrations were analyzed according to European Union (EU) standards. For simulation and forecasting of high PM10 concentrations, meteorological data (wind speed, pressure, and temperature) were used to examine their relationship with daily PM10 concentrations.
    The Fast Fourier Transform (FFT) was also applied to the data to identify periodicities, and models were built in MATLAB and EViews according to these periods. Because of the seasonality of the PM10 data, the SARIMA model was also used. The order of the autoregressive model was determined according to the AIC and BIC criteria. Model performance was evaluated using Fractional Bias, Normalized Mean Square Error (NMSE), and Mean Absolute Percentage Error (MAPE). As expected, the results were encouraging. Keywords: PM10, Autoregression, Forecast. Acknowledgement: The authors would like to acknowledge the financial support of the Scientific and Technological Research Council of Turkey (TUBITAK, project no: 112Y319).
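    The SSA decomposition-reconstruction scheme mentioned above can be sketched with plain numpy: embed the series in a trajectory matrix, take its SVD, and reconstruct one elementary series per singular triple by diagonal (Hankel) averaging. This is a minimal sketch; the SSA-MTM Toolkit's actual gap-filling procedure is iterative and more involved:

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: embed the series in an L x K trajectory matrix, take its
    SVD, and reconstruct one elementary series per singular triple by
    averaging each rank-1 term over its anti-diagonals. Summing selected
    components separates trend/oscillation/noise; summing all recovers x."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # diagonal averaging: mean of Xi over each anti-diagonal (row+col = j)
        comp = np.array([np.mean(Xi[::-1].diagonal(j - (L - 1)))
                         for j in range(N)])
        comps.append(comp)
    return np.array(comps)

t = np.arange(200)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)   # trend + 12-sample oscillation
comps = ssa_decompose(series, L=24)
# a linear trend and a pure sine each occupy 2 singular triples, so the
# first 4 components reconstruct this noise-free series almost exactly
```

    For gap filling, such a decomposition is applied iteratively: missing values are initialized, the series is reconstructed from leading components, and the imputed values are updated until convergence.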

  7. Stochastic local operations and classical communication (SLOCC) and local unitary operations (LU) classifications of n qubits via ranks and singular values of the spin-flipping matrices

    NASA Astrophysics Data System (ADS)

    Li, Dafa

    2018-06-01

    We construct ℓ-spin-flipping matrices from the coefficient matrices of pure states of n qubits and show that the ℓ-spin-flipping matrices are congruent and unitarily congruent whenever two pure states of n qubits are SLOCC and LU equivalent, respectively. The congruence implies the invariance of the ranks of the ℓ-spin-flipping matrices under SLOCC and thus permits a reduction of the SLOCC classification of n qubits to the calculation of ranks of the ℓ-spin-flipping matrices. The unitary congruence implies the invariance of the singular values of the ℓ-spin-flipping matrices under LU and thus permits a reduction of the LU classification of n qubits to the calculation of singular values of the ℓ-spin-flipping matrices. Furthermore, we show that the invariance of singular values of the ℓ-spin-flipping matrices Ω_1^{(n)} implies the invariance of the concurrence for even n qubits and the invariance of the n-tangle for odd n qubits. Thus, the concurrence and the n-tangle can be used for LU classification, and computing the concurrence and the n-tangle requires only additions and multiplications of the coefficients of states.
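    The LU-invariance claim rests on the fact that singular values are preserved under unitary congruence Ω → UΩUᵀ (Uᵀ is itself unitary, so both factors leave singular values unchanged). A toy numerical check, with a random complex matrix standing in for an ℓ-spin-flipping matrix rather than the paper's construction from state coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8   # matrix size standing in for an l-spin-flipping matrix

# random complex matrix playing the role of Omega, and a random unitary Q
Omega = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))

# unitary congruence Omega -> Q Omega Q^T (transpose, not conjugate transpose)
Omega_lu = Q @ Omega @ Q.T

sv_before = np.linalg.svd(Omega, compute_uv=False)
sv_after = np.linalg.svd(Omega_lu, compute_uv=False)
# the two singular spectra agree, so singular values are LU invariants
```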

  8. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a widespread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy for tackling this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016).
    This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage over the original POD method for variable Dirichlet boundaries. We have developed another extension to POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
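    The POD recipe described above, an SVD of a snapshot matrix followed by Galerkin projection of the model operator, can be sketched for a linear 1-D diffusion model. Grid size, mode count, and time stepping are illustrative choices, not values from the study:

```python
import numpy as np

# Full model: explicit 1-D diffusion x_{k+1} = A x_k on 50 grid points.
n, steps, r = 50, 100, 10
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
A = np.eye(n) + 0.2 * L                      # stable explicit step

x = np.exp(-0.5 * ((np.arange(n) - n / 2) / 3.0) ** 2)   # initial bump
snaps = [x]
for _ in range(steps):
    x = A @ x
    snaps.append(x)
S = np.column_stack(snaps)                   # snapshot matrix

# POD basis: leading left singular vectors of the snapshot matrix
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :r]
Ar = Phi.T @ A @ Phi                         # reduced (r x r) operator

# run the reduced model and lift the result back to the full space
xr = Phi.T @ snaps[0]
for _ in range(steps):
    xr = Ar @ xr
x_pod = Phi @ xr                             # close to the full-model state
```

    For this smooth, linear problem 10 of 50 modes suffice; the non-linear cases discussed above are precisely where this plain projection breaks down and POD-DEIM-style interpolation is needed.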

  9. Singular boundary value problem for the integrodifferential equation in an insurance model with stochastic premiums: Analysis and numerical solution

    NASA Astrophysics Data System (ADS)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2012-10-01

    A singular boundary value problem for a second-order linear integrodifferential equation with Volterra and non-Volterra integral operators is formulated and analyzed. The equation is defined on ℝ+, has a weak singularity at zero and a strong singularity at infinity, and depends on several positive parameters. Under natural constraints on the coefficients of the equation, existence and uniqueness theorems for this problem with given limit boundary conditions at the singular points are proved, asymptotic representations of the solution are given, and an algorithm for its numerical determination is described. Numerical computations are performed and their interpretation is given. The problem arises in the study of the survival probability of an insurance company over infinite time (as a function of its initial surplus) in a dynamic insurance model that is a modification of the classical Cramér-Lundberg model, in which the premium rate is a stochastic process, under a certain investment strategy in the financial market. A comparative analysis of the results with those produced by the model with deterministic premiums is given.

  10. Big bounce with finite-time singularity: The F(R) gravity description

    NASA Astrophysics Data System (ADS)

    Odintsov, S. D.; Oikonomou, V. K.

    Big Bounce cosmologies offer an alternative to Big Bang cosmologies. In this paper, we study a bounce cosmology with a Type IV singularity occurring at the bouncing point in the context of F(R) modified gravity. We investigate the evolution of the Hubble radius and examine the issue of primordial cosmological perturbations in detail. As we demonstrate, for the singular bounce, the primordial perturbations originating from the cosmological era near the bounce do not produce a scale-invariant spectrum, and the short-wavelength modes do not freeze after they exit the horizon but grow linearly with time. After presenting the cosmological perturbations study, we discuss the viability of the singular bounce model; our results indicate that the singular bounce must be combined with another cosmological scenario, or modified appropriately, in order to lead to a viable cosmology. The study of the slow-roll parameters leads to the same result, indicating that the singular bounce theory is unstable at the singularity point for certain values of the parameters. We also conformally transform the Jordan frame singular bounce and, as we demonstrate, the Einstein frame metric leads to a Big Rip singularity. Therefore, the Type IV singularity in the Jordan frame becomes a Big Rip singularity in the Einstein frame. Finally, we briefly study a generalized singular cosmological model, which contains two Type IV singularities, with quite appealing features.

  11. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
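    The LP-SVD transform described above can be sketched with numpy: fit linear-prediction (AR) coefficients to the signal, form the impulse response matrix of the resulting all-pole LP filter, and take its leading left singular vectors as a signal-dependent transform. Matrix sizes, the least-squares AR fit, and the projection step are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def lp_svd_features(x, order=6, n_taps=32, k=4):
    """Sketch of LP-SVD feature extraction: AR fit -> all-pole impulse
    response -> convolution matrix -> left singular vectors as transform."""
    # least-squares AR fit: x[t] ~ sum_j a[j] * x[t-j-1]
    rows = np.column_stack([x[order - j - 1:len(x) - j - 1]
                            for j in range(order)])
    a = np.linalg.lstsq(rows, x[order:], rcond=None)[0]
    # impulse response of the all-pole filter 1 / (1 - sum_j a_j z^-j)
    h = np.zeros(n_taps)
    h[0] = 1.0
    for t in range(1, n_taps):
        h[t] = sum(a[j] * h[t - j - 1] for j in range(min(order, t)))
    # lower-triangular Toeplitz (convolution) matrix of the impulse response
    H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n_taps)]
                  for i in range(n_taps)])
    U = np.linalg.svd(H)[0]
    return U[:, :k].T @ x[:n_taps]   # project a segment onto the basis

rng = np.random.default_rng(0)
t = np.arange(512)
x = np.sin(2 * np.pi * 0.05 * t) + 0.2 * rng.standard_normal(t.size)
feats = lp_svd_features(x)           # compact, signal-adapted features
```

    Because the basis is derived from the signal's own LP model, the transform adapts to each EEG channel, unlike the fixed DCT basis it was benchmarked against.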

  12. Spontaneous generation of singularities in paraxial optical fields.

    PubMed

    Aiello, Andrea

    2016-04-01

    In nonrelativistic quantum mechanics, the spontaneous generation of singularities in smooth and finite wave functions is a well understood phenomenon also occurring for free particles. We use the familiar analogy between the two-dimensional Schrödinger equation and the optical paraxial wave equation to define a new class of square-integrable paraxial optical fields that develop a spatial singularity in the focal point of a weakly focusing thin lens. These fields are characterized by a single real parameter whose value determines the nature of the singularity. This novel field enhancement mechanism may stimulate fruitful research for diverse technological and scientific applications.

  13. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.

  14. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1985-01-01

    In this paper some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.

  15. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.

  16. Matrix Sturm-Liouville equation with a Bessel-type singularity on a finite interval

    NASA Astrophysics Data System (ADS)

    Bondarenko, Natalia

    2017-03-01

    The matrix Sturm-Liouville equation on a finite interval with a Bessel-type singularity at one endpoint of the interval is studied. Special fundamental systems of solutions for this equation are constructed: analytic Bessel-type solutions with prescribed behavior at the singular point, and Birkhoff-type solutions with known asymptotics for large values of the spectral parameter. Asymptotic formulas for the Stokes multipliers connecting these two fundamental systems of solutions are derived. We also set boundary conditions and obtain asymptotic formulas for the spectral data (the eigenvalues and the weight matrices) of the boundary value problem. Our results will be useful in the theory of direct and inverse spectral problems.

  17. Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show that linear models of real systems can be analyzed by this sensitivity technique if it is applied with care. A computer program called SVA was written to implement the singular-value sensitivity analysis techniques; thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. SVA is a fully public-domain program, running on the NASA/Dryden Elxsi computer.
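    The singular-value gradients underlying such a sensitivity technique follow the standard first-order formula dσᵢ/dp = uᵢᵀ (∂A/∂p) vᵢ, valid for distinct singular values, where uᵢ and vᵢ are the corresponding singular vectors. A short numpy check against central finite differences (a generic illustration, not code from the SVA program):

```python
import numpy as np

# A(p) = A0 + p * A1; gradient of the i-th singular value at p = 0 is
# u_i^T A1 v_i for the singular vectors of A0 (distinct singular values).
A0 = np.diag([5.0, 3.0, 1.0])        # well-separated singular values
rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 3))     # dA/dp

U, s, Vt = np.linalg.svd(A0)
analytic = np.array([U[:, i] @ A1 @ Vt[i] for i in range(3)])

h = 1e-6                             # central finite-difference check
fd = (np.linalg.svd(A0 + h * A1, compute_uv=False)
      - np.linalg.svd(A0 - h * A1, compute_uv=False)) / (2 * h)
```

    The same formula applied column by column yields the gradient of each scaled singular value with respect to every controller parameter.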

  18. Satellite imagery in the fight against Malaria, the case for Genetic Programming

    NASA Astrophysics Data System (ADS)

    Ssentongo, J. S.; Hines, E. L.

    The analysis of multi-temporal data is a critical issue in the field of remote sensing and presents a constant challenge. The approach used here relies primarily on a method commonly used in statistics and signal processing: Empirical Orthogonal Function (EOF) analysis. Normalized Difference Vegetation Index (NDVI) and Rainfall Estimate (RFE) satellite images pertaining to the Sub-Saharan Africa region were obtained. The images are derived from the Advanced Very High Resolution Radiometer (AVHRR) on the United States National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellites, spanning from January 2000 to December 2002. The region of interest was narrowed down to the Limpopo Province (Northern Province) of South Africa. EOF analyses of the space-time-intensity series of dekadal mean NDVI values were performed. They reveal that NDVI can be accurately approximated by its principal component time series and contains a near-sinusoidal oscillation pattern. Peak greenness (essentially what NDVI measures) seasons last approximately 8 weeks. This oscillation period is very similar to that of malaria cases reported in the same period but lags behind by 4 dekads (about 40 days). Singular Value Decomposition (SVD) of Coupled Fields is performed on the space-time-intensity series of dekadal mean NDVI and RFE values. Correlation analyses indicate that both malaria and greenness appear to be dependent on rainfall, the onset of their seasonal highs always following an arrival of rain. There is a greater

  19. FREQ: A computational package for multivariable system loop-shaping procedures

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Armstrong, Ernest S.

    1989-01-01

    Many approaches in the field of linear, multivariable time-invariant systems analysis and controller synthesis employ loop-shaping procedures wherein design parameters are chosen to shape frequency-response singular value plots of selected transfer matrices. A software package, FREQ, is documented for computing within one unified framework many of the most commonly used multivariable transfer matrices for both continuous and discrete systems. The matrices are evaluated at user-selected frequency values, and singular values are plotted against frequency. Example computations are presented to demonstrate the use of the FREQ code.
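    The core computation behind such singular value plots can be sketched with numpy: evaluate the frequency response G(jω) = C(jωI − A)⁻¹B + D on a frequency grid and take its singular values at each point. This is a generic illustration of the quantity being plotted, not a reproduction of FREQ's interface:

```python
import numpy as np

def sigma_plot(A, B, C, D, omegas):
    """Singular values of G(jw) = C (jwI - A)^-1 B + D at each frequency --
    the curves that loop-shaping procedures plot against frequency."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    out = []
    for w in omegas:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        out.append(np.linalg.svd(G, compute_uv=False))
    return np.array(out)

# first-order lag G(s) = 1/(s+1): its lone singular value is 1/sqrt(1 + w^2)
omegas = np.logspace(-2, 2, 50)
sv = sigma_plot([[-1.0]], [[1.0]], [[1.0]], [[0.0]], omegas)
```

    For a MIMO system the same loop returns the full ordered set of singular values per frequency, giving the familiar upper and lower sigma envelopes.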

  20. Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.

    PubMed

    Bhandari, A K; Soni, V; Kumar, A; Singh, G K

    2014-07-01

    This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low-contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT), and the CS algorithm is used to optimize each DWT subband; the singular value matrix of the thresholded low-low (LL) subband image is then obtained, and finally the enhanced image is reconstructed by applying the inverse DWT. The singular value matrix contains the intensity information of the image, and any modification of the singular values changes the intensity of the image. The experimental results show the superiority of the proposed method in terms of PSNR, MSE, Mean and Standard Deviation over conventional and state-of-the-art techniques. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
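    The singular-value scaling at the heart of such DWT-SVD schemes can be sketched with numpy alone. The DWT decomposition and the Cuckoo Search optimization are omitted here, and the correction is applied to the whole image rather than the LL subband, so this is only an illustration of the SVD step:

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.random((32, 32))        # stands in for a well-exposed image
low = 0.3 * reference + 0.2             # dim, low-contrast version

# correction factor: ratio of the largest singular values of the reference
# and input images, as used in SVD-based enhancement schemes
xi = (np.linalg.svd(reference, compute_uv=False)[0]
      / np.linalg.svd(low, compute_uv=False)[0])

# rebuild the image with all singular values scaled by xi
U, s, Vt = np.linalg.svd(low, full_matrices=False)
enhanced = U @ np.diag(xi * s) @ Vt     # brighter, higher-contrast result
```

    Scaling every singular value by one factor ξ is a global intensity correction; the cited method gains its adaptivity by restricting this to the LL subband and tuning the subbands with CS.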
