Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers
NASA Astrophysics Data System (ADS)
Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.
2018-04-01
Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
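Below is a minimal numerical sketch of the per-subarray projection idea described above, not the authors' SP-SAP implementation: the sample covariance of each subarray yields a dominant (interference) eigenvector, and the subarray snapshots are projected onto its orthogonal complement. The rank-1 interference model and the subarray grouping are illustrative assumptions.

```python
import numpy as np

def project_out_rfi(X, n_rfi=1):
    """Project snapshots X (elements x samples) away from the dominant
    interference subspace estimated from the sample covariance."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    U = V[:, -n_rfi:]                          # dominant = interference subspace
    P = np.eye(X.shape[0]) - U @ U.conj().T    # orthogonal-complement projector
    return P @ X

def sp_sap(X, subarrays, n_rfi=1):
    """Apply the projection independently on each subarray, where the RFI
    remains mutually correlated, rather than on the full array at once."""
    Y = X.astype(complex)
    for idx in subarrays:                      # e.g. [[0, 1, 2], [3, 4, 5]]
        Y[idx, :] = project_out_rfi(X[idx, :], n_rfi)
    return Y
```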
Slope angle estimation method based on sparse subspace clustering for probe safe landing
NASA Astrophysics Data System (ADS)
Li, Haibo; Cao, Yunfeng; Ding, Meng; Zhuang, Likui
2018-06-01
To avoid planetary probes landing on steep slopes where they may slip or tip over, a new method of slope angle estimation based on sparse subspace clustering is proposed to improve accuracy. First, a coordinate system is defined and established to describe the measured data of light detection and ranging (LIDAR). Second, the data are processed and expressed with a sparse representation. Third, on this basis, the data are clustered to determine which subspace each point belongs to. Fourth, after eliminating outliers in each subspace, the remaining data points are used to fit planes. Finally, the vectors normal to the planes are obtained from the plane model, and the angle between the normal vectors is calculated; by the geometric relationship, this angle is equal in value to the slope angle. The proposed method was tested in a series of experiments. The experimental results show that this method can effectively estimate the slope angle, overcome the influence of noise and obtain an accurate estimate. Compared with other methods, this method minimizes the measuring errors and further improves the estimation accuracy of the slope angle.
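The last two steps (plane fitting and the angle between normals) reduce to elementary linear algebra. A sketch assuming the LIDAR points have already been clustered into a ground patch and a slope patch; the sparse-representation and clustering stages are omitted.

```python
import numpy as np

def fit_plane_normal(points):
    """Normal of the least-squares plane through an (n, 3) point cloud:
    the right singular vector with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def slope_angle_deg(ground_points, slope_points):
    """Angle between the two plane normals, equal in value to the slope angle."""
    n1 = fit_plane_normal(ground_points)
    n2 = fit_plane_normal(slope_points)
    c = abs(n1 @ n2)                           # both normals are unit vectors
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))
```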
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
A new method for source localization is described that is based on a modification of the well known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S-MUSIC and IES-MUSIC.
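A compact sketch of the recursion described above, under simplifying assumptions: fixed-orientation sources (a scalar gain per grid point), and `steering` as a hypothetical list of array manifold vectors on a scanning grid. Each located source is appended to an intermediate gain matrix, and both the manifold and the signal subspace are projected into its orthogonal complement before the next MUSIC scan.

```python
import numpy as np

def subspace_corr(a, Q):
    """Correlation between a (projected) manifold vector and a subspace basis Q."""
    na = np.linalg.norm(a)
    return 0.0 if na < 1e-12 else np.linalg.norm(Q.conj().T @ (a / na))

def rap_music(steering, Us, n_sources):
    """steering: list of length-N manifold vectors; Us: N x r signal subspace."""
    N = Us.shape[0]
    A = np.zeros((N, 0), dtype=complex)        # gain matrix of located sources
    found = []
    for _ in range(n_sources):
        P = np.eye(N) if A.shape[1] == 0 else np.eye(N) - A @ np.linalg.pinv(A)
        U_, s, _ = np.linalg.svd(P @ Us, full_matrices=False)
        Q = U_[:, s > 1e-8]                    # basis of the projected signal subspace
        k = int(np.argmax([subspace_corr(P @ a, Q) for a in steering]))
        found.append(k)
        A = np.column_stack([A, steering[k]])
    return found
```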
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the amount of stored subspace information. The MPCA model and the knowledge base are built on the new subspace. Then, fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling (T²) statistic are realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of different variables. For fault isolation based on the T² statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and increase the correct rate of fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method that reduces the required storage capacity and improves the robustness of the principal component model, and establishes the relationship between the state variables and fault detection indicators for fault isolation.
A Channelization-Based DOA Estimation Method for Wideband Signals
Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-01-01
In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method reasonably isolates signals in different bands and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented, and experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization, using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (the FINES vector set) in the estimated noise-only subspace, instead of the entire estimated noise-only subspace as in classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show that, in EEG 3D dipole source localization, compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better in the cases studied, when the noise level is high and/or correlations among dipole sources exist.
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) is increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace takes advantage of only the spectral information, neglecting the spatial correlation of the background clutter, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, using spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is formed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background, but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection result.
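The building block shared by the three directions is projection onto the orthogonal complement of a local background subspace. A one-directional sketch follows; the assembly of the three directional scores into the composite 3D-LOSP score is omitted.

```python
import numpy as np

def background_annihilator(B):
    """P = I - B (B^H B)^-1 B^H, the projector onto the orthogonal
    complement of the background subspace spanned by the columns of B."""
    return np.eye(B.shape[0]) - B @ np.linalg.pinv(B)

def losp_score(x, B):
    """Residual energy of a pixel vector x after background suppression;
    large values flag potential anomalies."""
    r = background_annihilator(B) @ x
    return float(np.real(np.vdot(r, r)))
```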
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
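The diagonal-averaging step is simple to state in code. A sketch of that step alone, assuming a Hermitian sample covariance; the maximum-entropy extrapolation and the subsequent subspace beamforming are not shown.

```python
import numpy as np

def toeplitz_average(R):
    """Replace each subdiagonal of the sample covariance R with its mean,
    imposing the Toeplitz structure of a spatially stationary field."""
    n = R.shape[0]
    T = np.zeros_like(R, dtype=complex)
    for k in range(n):
        m = np.mean(np.diagonal(R, offset=k))  # mean of the k-th upper diagonal
        T += np.diag(np.full(n - k, m), k)
        if k > 0:
            T += np.diag(np.full(n - k, np.conj(m)), -k)
    return T
```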
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario; it estimates the 'noise subspace' in order to estimate the DOAs. However, the noise subspace estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) Noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix; thus, compared to standard eigen-decomposition based methods which require O(N³) computations, the proposed method requires only O(N²) computations. (2) The noise subspace is updated by updating the QR decomposition. (3) The proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
Mohydeen, Ali; Chargé, Pascal; Wang, Yide; Bazzi, Oussama; Ding, Yuehua
2018-05-06
A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, while the influence of scattering is ignored. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows improved channel mean path delay estimation performance in comparison with conventional estimation methods. PMID:29734797
Interpretation of the MEG-MUSIC scan in biomagnetic source localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1993-09-01
MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously, we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyze the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.
An estimating equation approach to dimension reduction for longitudinal data
Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li
2016-01-01
Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-n consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956
Factor analysis of auto-associative neural networks with application in speaker verification.
Garimella, Sri; Hermansky, Hynek
2013-04-01
Auto-associative neural network (AANN) is a fully connected feed-forward neural network, trained to reconstruct its input at its output through a hidden compression layer, which has fewer nodes than the dimensionality of the input. AANNs are used to model speakers in speaker verification, where a speaker-specific AANN model is obtained by adapting (or retraining) the universal background model (UBM) AANN, an AANN trained on multiple held-out speakers, using the corresponding speaker data. When the amount of speaker data is limited, this adaptation procedure may lead to overfitting, as all the parameters of the UBM-AANN are adapted. In this paper, we introduce and develop the factor analysis theory of AANNs to alleviate this problem. We hypothesize that only the weight matrix connecting the last nonlinear hidden layer and the output layer is speaker-specific, and further restrict it to a common low-dimensional subspace during adaptation. The subspace is learned using large amounts of development data and is held fixed during adaptation. Thus, only the coordinates in the subspace, also known as an i-vector, need to be estimated using speaker-specific data. The update equations are derived for learning both the common low-dimensional subspace and the i-vectors corresponding to speakers in the subspace. The resultant i-vector representation is used as a feature for the probabilistic linear discriminant analysis model. The proposed system shows promising results on the NIST-08 speaker recognition evaluation (SRE), and yields a 23% relative improvement in equal error rate over the previously proposed weighted least squares-based subspace AANN system. The experiments on NIST-10 SRE confirm that these improvements are consistent and generalize across datasets.
Improved Detection of Local Earthquakes in the Vienna Basin (Austria), using Subspace Detectors
NASA Astrophysics Data System (ADS)
Apoloner, Maria-Theresia; Caffagni, Enrico; Bokelmann, Götz
2016-04-01
The Vienna Basin in Eastern Austria is densely populated and highly developed; it is also a region of low to moderate seismicity, yet the seismological network coverage is relatively sparse. This demands improving our earthquake detection capability by testing new methods and enlarging the existing local earthquake catalogue, which contributes to imaging tectonic fault zones for a better understanding of seismic hazard, also through improved earthquake statistics (b-value, magnitude of completeness). Detection of low-magnitude earthquakes, or of events whose highest amplitudes only slightly exceed the noise level (low signal-to-noise ratio, SNR), may be possible using standard methods like the short-term over long-term average (STA/LTA). However, due to sparse network coverage and high background noise, such a technique may not detect all potentially recoverable events. Yet earthquakes originating from the same source region and relatively close to each other should produce similar seismic waveforms at a given station. This waveform similarity can be exploited by specific techniques such as correlation-template matching (also known as matched filtering) or subspace detection methods (based on subspace theory). Matching techniques require a reference or template event, usually characterized by high waveform coherence across the array receivers and high SNR, which is cross-correlated with the continuous data. Subspace detection methods, in contrast, overcome the need to define single template events by using a subspace extracted from multiple events. This approach should in principle be more robust in detecting signals that exhibit strong variability (e.g. in source or magnitude). In this study we scan the continuous data recorded in the Vienna Basin with a subspace detector to identify additional events. This will allow us to estimate the increase in the seismicity rate of the local earthquake catalogue, thereby providing an evaluation of network performance and of the efficiency of the method.
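A toy sketch of the subspace-detector idea described above: an orthonormal waveform basis is built from aligned template events via the SVD, and a sliding-window statistic measures the fraction of signal energy captured by that subspace. Filtering, multichannel handling and threshold calibration are omitted.

```python
import numpy as np

def design_subspace(templates, rank):
    """Orthonormal waveform basis from aligned templates (n_events x n_samples)."""
    U, _, _ = np.linalg.svd(np.asarray(templates, dtype=float).T,
                            full_matrices=False)
    return U[:, :rank]                         # n_samples x rank

def subspace_detector(record, U):
    """Sliding detection statistic in [0, 1]: energy captured by the subspace."""
    n = U.shape[0]
    stats = np.zeros(record.size - n + 1)
    for i in range(stats.size):
        w = record[i:i + n]
        e = float(w @ w)
        stats[i] = float(np.sum((U.T @ w) ** 2)) / e if e > 0 else 0.0
    return stats
```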
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account, is based on the actual positions of the elements, and can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy; its performance improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms together compose the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
Sparse PCA with Oracle Property
Gu, Quanquan; Wang, Zhaoran; Liu, Han
2014-01-01
In this paper, we study the estimation of the k-dimensional sparse principal subspace of a covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank k, and attains a √(s/n) statistical rate of convergence, with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator, which enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971
Using parallel banded linear system solvers in generalized eigenvalue problems
NASA Technical Reports Server (NTRS)
Zhang, Hong; Moss, William F.
1993-01-01
Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
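A dense-matrix sketch of shifted subspace iteration for K x = λ M x; in the paper's setting the LU factorization is replaced by parallel banded solvers, and the shift σ and block size p are the tuning choices discussed above.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, eigh

def subspace_iteration(K, M, p, sigma=0.0, iters=50, seed=0):
    """Approximate the p eigenpairs of K x = lam M x nearest the shift sigma."""
    n = K.shape[0]
    lu = lu_factor(K - sigma * M)              # factor once, reuse every sweep
    X = np.random.default_rng(seed).standard_normal((n, p))
    for _ in range(iters):
        Y = lu_solve(lu, M @ X)                # shifted inverse-iteration step
        lam, Q = eigh(Y.T @ K @ Y, Y.T @ M @ Y)  # Rayleigh-Ritz on the block
        X = Y @ Q
    return lam, X
```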
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
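A toy sketch of the two subspace ingredients, ignoring the undersampled k-space encoding operator that makes the real reconstruction an inverse problem: an SVD of the MRF dictionary gives the temporal basis, and each voxel time-series is fit within that basis by linear least squares.

```python
import numpy as np

def temporal_basis(dictionary, rank):
    """Low-rank temporal subspace from an MRF dictionary (n_atoms x n_timepoints)."""
    _, _, Vt = np.linalg.svd(dictionary, full_matrices=False)
    return Vt[:rank].T                         # n_timepoints x rank, orthonormal

def fit_in_subspace(Y, Phi):
    """Project voxel time-series Y (n_voxels x n_timepoints) onto span(Phi)."""
    C = Y @ Phi                                # least-squares coefficients (Phi orthonormal)
    return C @ Phi.T                           # subspace-constrained time-series
```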
Optimizing Cubature for Efficient Integration of Subspace Deformations
An, Steven S.; Kim, Theodore; James, Doug L.
2009-01-01
We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r²) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St. Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. PMID:19956777
Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting
2018-03-18
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection of weak targets. PMID:29562642
Conditioned invariant subspaces, disturbance decoupling and solutions of rational matrix equations
NASA Technical Reports Server (NTRS)
Li, Z.; Sastry, S. S.
1986-01-01
Conditioned invariant subspaces are introduced both in terms of output injection and in terms of state estimation. Various properties of these subspaces are explored and the problem of disturbance decoupling by output injection (OIP) is defined. It is then shown that OIP is equivalent to the problem of disturbance decoupled estimation as introduced in Willems (1982) and Willems and Commault (1980). Both solvability conditions and a description of solutions for a class of rational matrix equations of the form X(s)M(s) = Q(s) are given in several ways in state-space form. Finally, the problem of output stabilization with respect to a disturbance is briefly addressed.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N²) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
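A sketch of the Krylov idea for a single noise vector: m Lanczos steps with the diffusion matrix D (assumed symmetric positive definite) build a small tridiagonal matrix T, and sqrt(D) z is approximated as ||z|| V sqrt(T) e1, with no eigenvalue estimates required.

```python
import numpy as np
from scipy.linalg import sqrtm

def krylov_brownian_noise(D, z, m=30):
    """Approximate sqrt(D) @ z with an m-step Lanczos (Krylov subspace) iteration."""
    n = z.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 1))
    V[:, 0] = z / np.linalg.norm(z)
    k = m
    for j in range(m):
        w = D @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j == m - 1:
            break
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                    # lucky breakdown: subspace is exact
            k = j + 1
            break
        V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    e1 = np.zeros(k); e1[0] = 1.0
    return np.linalg.norm(z) * np.real(V[:, :k] @ (sqrtm(T) @ e1))
```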
Novel angle estimation for bistatic MIMO radar using an improved MUSIC
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Xiaofei; Chen, Han
2014-09-01
In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Welch, Greg
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
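One plausible reading of the selection step, sketched with hypothetical inputs (the abstract does not spell out the matrices involved): rank measurement-space directions by a generalized eigendecomposition of the ensemble-predicted measurement covariance against the noise covariance, and keep the top k as the reduced measurement subspace.

```python
import numpy as np
from scipy.linalg import eigh

def select_measurement_subspace(HPHt, R, k):
    """Top-k generalized eigenvectors of HPHt (ensemble-predicted measurement
    covariance, hypothetical input) against the noise covariance R."""
    _, W = eigh(HPHt, R)                       # generalized eigenvalues ascending
    T = W[:, -k:]                              # most informative directions
    return T                                   # reduce measurements: y_red = T.T @ y
```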
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
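Incremental gradient descent on the Grassmannian is the mechanism behind online trackers such as GROUSE; the following is a generic single update of that family (a sketch, not necessarily the authors' exact update; the step size is a tuning parameter).

```python
import numpy as np

def grassmannian_update(U, v, step=0.1):
    """One incremental-gradient subspace update for an orthonormal basis
    U (n x d), given a new (fully observed) data vector v."""
    w = U.T @ v                                # weights: least squares since U^T U = I
    p = U @ w                                  # part of v explained by the subspace
    r = v - p                                  # residual
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                               # v already lies in the subspace
    sigma = step * rn * pn                     # rotation angle along the geodesic
    d = (np.cos(sigma) - 1.0) * p / pn + np.sin(sigma) * r / rn
    return U + np.outer(d, w / wn)
```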
Multiple Source DF (Direction Finding) Signal Processing: An Experimental System,
The MUltiple SIgnal Characterization (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of...the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.
2017-04-12
In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding), as opposed to the traditional index coding problem, which is subspace-unaware. We also propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both the subspace-aware and -unaware cases. Our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. MUSIC requires the computation of the eigenvectors of the correlation matrix, which often incurs a high computational cost; in the moving-source situation this becomes a crucial drawback, because the estimation must be conducted at every observation time. Moreover, since the correlation matrix varies in its characteristics due to spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) method is applied to sequentially estimate the eigenvectors spanning the signal subspace. PAST does not require an eigen-decomposition, and therefore the computational cost is reduced. Several experimental results in actual room environments demonstrate the superior performance of the proposed method.
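For reference, a generic sketch of one step of the standard PAST recursion (Yang's RLS form); β is the forgetting factor that accommodates the non-stationarity mentioned above, and each step costs only a few matrix-vector products instead of an eigen-decomposition.

```python
import numpy as np

def past_step(W, P, x, beta=0.97):
    """One PAST update. W: n x d signal-subspace estimate, P: d x d inverse
    of the (approximated) projected correlation matrix, x: new snapshot."""
    y = W.conj().T @ x
    h = P @ y
    g = h / (beta + np.real(np.vdot(y, h)))
    W = W + np.outer(x - W @ y, g.conj())      # rank-one subspace correction
    P = (P - np.outer(g, h.conj())) / beta     # RLS-style inverse update
    return W, P
```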
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is then projected onto the subspace orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulations and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.
EEG and MEG source localization using recursively applied (RAP) MUSIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
1996-12-31
The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, using the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization', are easily extracted using the associated principal vectors.
Mode Shape Estimation Algorithms Under Ambient Conditions: A Comparative Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dosiek, Luke; Zhou, Ning; Pierre, John W.
This paper provides a comparative review of five existing ambient electromechanical mode shape estimation algorithms: the Transfer Function (TF), Spectral, Frequency Domain Decomposition (FDD), Channel Matching, and Subspace Methods. It is also shown that the TF Method is a general approach to estimating mode shape and that the Spectral, FDD, and Channel Matching Methods are special cases of it. Additionally, some variations of the Subspace Method are reviewed and the Numerical algorithm for Subspace State Space System IDentification (N4SID) is implemented. The five algorithms are then compared using data simulated from a 17-machine model of the Western Electricity Coordinating Council (WECC) under ambient conditions with both low and high damping, as well as the case where ambient data are disrupted by an oscillatory ringdown. The performance of the algorithms is compared using statistics from Monte Carlo simulations and results from measured WECC data, and a discussion of the practical issues surrounding their implementation, including cases where power system probing is an option, is provided. The paper concludes with some recommendations as to the appropriate use of the various techniques.
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied for a CIR task, although the subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images together to learn a reliable subspace. We formally formulate this problem into a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Observation of entanglement witnesses for orbital angular momentum states
NASA Astrophysics Data System (ADS)
Agnew, M.; Leach, J.; Boyd, R. W.
2012-06-01
Entanglement witnesses provide an efficient means of determining the level of entanglement of a system using the minimum number of measurements. Here we demonstrate the observation of two-dimensional entanglement witnesses in the high-dimensional basis of orbital angular momentum (OAM). In this case, the number of potentially entangled subspaces scales as d(d - 1)/2, where d is the dimension of the space. The choice of OAM as a basis is relevant as each subspace is not necessarily maximally entangled, thus providing the necessary state for certain tests of nonlocality. The expectation value of the witness gives an estimate of the state of each two-dimensional subspace belonging to the d-dimensional Hilbert space. These measurements demonstrate the degree of entanglement and therefore the suitability of the resulting subspaces for quantum information applications.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM uses the overlaps of their distributions to obtain the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solutions to WHAM and to the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution.
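A generic sketch of the DIIS extrapolation (Pulay's scheme) applied to any fixed-point iteration x = f(x), such as the WHAM self-consistency equations; xs and rs hold a short history of iterates and of residuals r = f(x) - x.

```python
import numpy as np

def diis_extrapolate(xs, rs):
    """Return the DIIS linear combination of past iterates whose combined
    residual has minimal norm, subject to the coefficients summing to one."""
    m = len(xs)
    B = np.zeros((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = rs[i] @ rs[j]            # residual overlap matrix
    B[m, :m] = B[:m, m] = -1.0                 # Lagrange-multiplier border
    rhs = np.zeros(m + 1); rhs[m] = -1.0       # enforces sum(c) = 1
    c = np.linalg.solve(B, rhs)[:m]
    return sum(ci * xi for ci, xi in zip(c, xs))
```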
Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S.; Wang, B.; Thiagarajan, Jayaraman J.
2015-06-01
We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods
NASA Astrophysics Data System (ADS)
Yoon, Yeo-Sun; Amin, Moeness G.
2008-04-01
Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages, namely, it improves the signal-to-noise ratio (SNR) and can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beam-space MUSIC and the Capon beamformer. The experimental data is collected at the test facility in the Radar Imaging Laboratory, Villanova University.
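For readers unfamiliar with the MUSIC step itself, the sketch below computes a standard element-space MUSIC pseudospectrum from a sample covariance matrix. It is a minimal illustration only: the steering-vector function is an assumption supplied by the caller, and the beamspace transformation that this paper applies before MUSIC is omitted.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC pseudospectrum from an (n x n) sample covariance matrix R.

    steering : callable theta -> (n,) array manifold vector a(theta)
    """
    # Eigendecomposition (ascending eigenvalues); the noise subspace is
    # spanned by the eigenvectors of the n - n_sources smallest ones.
    w, V = np.linalg.eigh(R)
    En = V[:, : R.shape[0] - n_sources]
    thetas = np.linspace(-np.pi / 2, np.pi / 2, 721)
    P = []
    for th in thetas:
        a = steering(th)
        # Peaks occur where a(theta) is (nearly) orthogonal to En.
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return thetas, np.asarray(P)
```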
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong cross-talk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
Chiew, Mark; Graedel, Nadine N; Miller, Karla L
2018-07-01
Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features.
NASA Astrophysics Data System (ADS)
Zhang, Siqian; Kuang, Gangyao
2014-10-01
In this paper, a novel three-dimensional imaging algorithm of downward-looking linear array SAR is presented. To improve the resolution, the multiple signal classification (MUSIC) algorithm is used. However, since the scattering centers are always correlated in real SAR systems, the estimated covariance matrix becomes singular. To address this problem, a three-dimensional spatial smoothing method is proposed in this paper to restore the singular covariance matrix to a full-rank one. The three-dimensional signal matrix can be divided into a set of orthogonal three-dimensional subspaces. The main idea of the method is to compute the array correlation matrix as the average of the correlation matrices from all the subspaces. In addition, the spectral height of the peaks contains no information about the scattering intensity of the different scattering centers, so it is difficult to reconstruct the backscattering information. A least squares strategy is therefore used to estimate the amplitude of each scattering center. The results of the theoretical analysis are verified by 3-D scene simulations and experiments on real data.
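A minimal 1-D sketch of the rank-restoring idea is given below: forward spatial smoothing averages the covariance matrices of overlapping subarrays to decorrelate coherent scatterers. The paper performs the analogous averaging over orthogonal three-dimensional subspaces of the 3-D signal matrix; this sketch only illustrates the principle.

```python
import numpy as np

def spatially_smoothed_covariance(x, L):
    """Forward spatial smoothing for a single array snapshot x (length N).

    Averages the covariance matrices of the N - L + 1 overlapping
    subarrays of length L, restoring the rank lost to coherent sources.
    In practice one also averages over multiple snapshots.
    """
    N = len(x)
    R = np.zeros((L, L), dtype=complex)
    for i in range(N - L + 1):
        sub = x[i : i + L]
        R += np.outer(sub, sub.conj())
    return R / (N - L + 1)
```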
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, we can deal with transform-invariance easily because we are able to use tangent vectors to approximate transformations. However, we cannot use tangent vectors in other types of images, such as color images. Hence, kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified with experiments on handwritten digit and color image classification.
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized (BiCGSTAB) solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance GPU-accelerated Krylov subspace iterative methods.
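To make the solver structure concrete, here is a minimal textbook BiCGSTAB in NumPy; it is an illustrative sketch, not the paper's GPU implementation. The vector updates of p, s, x and r are exactly the operations that fused, application-specific GPU kernels would combine to cut data movement.

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Textbook BiCGSTAB; A is any callable implementing y = A(x)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A(x)
    r_hat = r.copy()                 # shadow residual, fixed throughout
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = np.dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A(p)
        alpha = rho_new / np.dot(r_hat, v)
        s = r - alpha * v
        t = A(s)
        omega = np.dot(t, s) / np.dot(t, t)
        x += alpha * p + omega * s   # these axpy-style updates are the
        r = s - omega * t            # candidates for kernel fusion
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho = rho_new
    return x
```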
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated.
A Tensor-Based Subspace Approach for Bistatic MIMO Radar in Spatial Colored Noise
Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang
2014-01-01
In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method.
Locally indistinguishable subspaces spanned by three-qubit unextendible product bases
NASA Astrophysics Data System (ADS)
Duan, Runyao; Xin, Yu; Ying, Mingsheng
2010-03-01
We study the local distinguishability of general multiqubit states and show that local projective measurements and classical communication are as powerful as the most general local measurements and classical communication. Remarkably, this indicates that the local distinguishability of multiqubit states can be decided efficiently. Another useful consequence is that a set of orthogonal n-qubit states is locally distinguishable only if the summation of their orthogonal Schmidt numbers is less than the total dimension 2^n. Employing these results, we show that any orthonormal basis of a subspace spanned by arbitrary three-qubit orthogonal unextendible product bases (UPB) cannot be exactly distinguishable by local operations and classical communication. This not only reveals another intrinsic property of three-qubit orthogonal UPB but also provides a class of locally indistinguishable subspaces with dimension 4. We also explicitly construct locally indistinguishable subspaces with dimensions 3 and 5, respectively. Similar to the bipartite case, these results on multipartite locally indistinguishable subspaces can be used to estimate the one-shot environment-assisted classical capacity of a class of quantum broadcast channels.
Nam, HyungSoo; Choi, ByungGil; Oh, Daegun
2018-01-01
In this paper, a three-dimensional (3D) subspace-based azimuth angle, elevation angle, and range estimation method with auto-pairing is proposed for frequency-modulated continuous waveform (FMCW) radar with an L-shaped array. The proposed method is designed to exploit the 3D shift-invariant structure of the stacked Hankel snapshot matrix for auto-paired azimuth angle, elevation angle, and range estimation. The effectiveness of the proposed method is verified through a variety of experiments conducted in a chamber. For the realization of the proposed method, a K-band FMCW radar is implemented with an L-shaped antenna array.
Subspace methods for identification of human ankle joint stiffness.
Zhao, Y; Westwick, D T; Kearney, R E
2011-11-01
Joint stiffness, the dynamic relationship between the angular position of a joint and the torque acting about it, describes the dynamic, mechanical behavior of a joint during posture and movement. Joint stiffness arises from both intrinsic and reflex mechanisms, but the torques due to these mechanisms cannot be measured separately experimentally, since they appear and change together. Therefore, the direct estimation of the intrinsic and reflex stiffnesses is difficult. In this paper, we present a new, two-step procedure to estimate the intrinsic and reflex components of ankle stiffness. In the first step, a discrete-time, subspace-based method is used to estimate a state-space model for overall stiffness from the measured overall torque and then predict the intrinsic and reflex torques. In the second step, continuous-time models for the intrinsic and reflex stiffnesses are estimated from the predicted intrinsic and reflex torques. Simulations and experimental results demonstrate that the algorithm estimates the intrinsic and reflex stiffnesses accurately. The new subspace-based algorithm has three advantages over previous algorithms: 1) it does not require iteration and therefore will always converge to an optimal solution; 2) it provides better estimates for data with high noise or short sample lengths; and 3) it provides much more accurate results for data acquired under the closed-loop conditions that prevail when subjects interact with compliant loads.
A combined joint diagonalization-MUSIC algorithm for subsurface targets localization
NASA Astrophysics Data System (ADS)
Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon
2014-06-01
This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface object locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can then be distinguished automatically from noise-related eigenvalues. Filtering is also carried out in JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm utilizes the orthogonality between the signal and noise subspaces in the MSR matrix, which can be separated with information provided by JD. An array of theoretically calculated Green's functions is then projected onto the noise subspace, and the location of the target is estimated by the minimum of the projection owing to the orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that due to its noniterative mechanism, the method can be executed fast enough to provide real-time estimation of objects' locations in the field.
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate negative effects of sampled anomaly pixels and to purify the previously constructed randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complemental subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods both in detection performance and in computational time.
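For orientation, the sketch below implements standard entrywise RPCA via the inexact augmented Lagrange multiplier scheme. It is a minimal illustration under that simplification, not the columnwise (column-sparse) CWRPCA variant the paper actually solves; replacing the entrywise shrinkage of S with column-norm shrinkage would move toward the columnwise variant.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S via inexact ALM."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)   # dual variable init
    mu, rho = 1.25 / norm_two, 1.5
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Z = M - L - S
        Y += mu * Z
        mu *= rho
        if np.linalg.norm(Z, 'fro') <= tol * np.linalg.norm(M, 'fro'):
            break
    return L, S
```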
Formulating face verification with semidefinite programming.
Yan, Shuicheng; Liu, Jianzhuang; Tang, Xiaoou; Huang, Thomas S
2007-11-01
This paper presents a unified solution to three unsolved problems existing in face verification with subspace learning techniques: selection of verification threshold, automatic determination of subspace dimension, and deducing feature fusing weights. In contrast to previous algorithms which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). With a certain verification threshold, this matrix is learned by a semidefinite programming approach, subject to the constraints that kindred pairs have similarity larger than the threshold and inhomogeneous pairs have similarity smaller than the threshold. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, the weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters.
Sufficient Forecasting Using Factor Models
Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei
2017-01-01
We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
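As a small illustration of the active subspace construction discussed above, the sketch below estimates the subspace from sampled gradients of a model output. It is a minimal sketch under the assumption that gradient samples are available (e.g., from adjoints or finite differences); it is not the dissertation's specific implementation.

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Estimate a k-dimensional active subspace from gradient samples.

    grad_samples : (M, n) array whose rows are gradients of the scalar
                   model output at M sampled inputs
    Returns the eigenvalues of the gradient covariance and an (n, k)
    orthonormal basis W1 of the active subspace.
    """
    # Monte Carlo estimate of C = E[grad f  grad f^T].
    C = grad_samples.T @ grad_samples / grad_samples.shape[0]
    w, V = np.linalg.eigh(C)
    idx = np.argsort(w)[::-1]          # sort eigenvalues descending
    return w[idx], V[:, idx[:k]]
```

Gaps in the eigenvalue spectrum indicate how many directions dominate the response; reduced coordinates are then obtained as y = W1^T x.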
Generation of skeletal mechanism by means of projected entropy participation indices
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica
2017-11-01
When the dynamics of reactive systems develop very slow and very fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, then it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant pressure, adiabatic, batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.
Adaptive low-rank subspace learning with online optimization for robust visual tracking.
Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan
2017-04-01
In recent years, sparse and low-rank models have been widely used to formulate appearance subspaces for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard product based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods.
Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang
2018-05-08
When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experimental data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.
NASA Astrophysics Data System (ADS)
Zhang, Peng; Peng, Jing; Sims, S. Richard F.
2005-05-01
In ATR applications, each feature is a convolution of an image with a filter. It is important to use the most discriminant features to produce compact representations. We propose two novel subspace methods for dimension reduction that address limitations associated with the Fukunaga-Koontz Transform (FKT). The first method, Scatter-FKT, assumes that the target is more homogeneous, while clutter can be anything other than target and anywhere. Thus, instead of estimating a clutter covariance matrix, Scatter-FKT computes a clutter scatter matrix that measures the spread of clutter from the target mean. We choose dimensions along which the difference in variation between target and clutter is most pronounced. When the target follows a Gaussian distribution, Scatter-FKT can be viewed as a generalization of FKT. The second method, Optimal Bayesian Subspace (OBS), is derived from the optimal Bayesian classifier. It selects dimensions such that the minimum Bayes error rate can be achieved. When both target and clutter follow Gaussian distributions, OBS computes optimal subspace representations. We compare our methods against FKT using character image as well as IR data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
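As an illustration of the inverse regression step used to seek the SDR subspace, here is a minimal sliced inverse regression sketch; it is a generic textbook version, not the specific procedure of the paper, and all parameter names are illustrative.

```python
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, k=2):
    """Basic SIR estimate of k sufficient dimension reduction directions.

    X : (n, p) predictors, y : (n,) scalar responses
    """
    n, p = X.shape
    mu = X.mean(0)
    # Whiten the predictors: Z has identity sample covariance.
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Z = np.linalg.solve(L, (X - mu).T).T
    # Slice observations by sorted response and form slice means.
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for s in slices:
        m = Z[s].mean(0)               # inverse regression curve E[Z | y in slice]
        M += (len(s) / n) * np.outer(m, m)
    w, V = np.linalg.eigh(M)
    top = V[:, np.argsort(w)[::-1][:k]]
    # Map leading directions back to the original predictor scale.
    return np.linalg.solve(L.T, top)
```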
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data.
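A minimal sketch of the central projection step follows, under hypothetical array shapes (k-space locations by time points): the temporal subspace is taken from the SVD of the fully sampled calibration rows, and the projection is a single matrix multiplication, as the abstract notes. The iteration loop that re-enforces measured samples is omitted.

```python
import numpy as np

def project_onto_temporal_subspace(data, calib, r):
    """Project k-t data onto an r-dimensional temporal subspace.

    data  : (n_kspace, n_timepoints) current estimate of the k-t matrix
            (missing entries filled with zeros or previous iterates)
    calib : (n_calib, n_timepoints) fully sampled k-space rows
    r     : subspace rank
    """
    # Temporal subspace basis from the calibration rows.
    _, _, Vt = np.linalg.svd(calib, full_matrices=False)
    P = Vt[:r].conj().T @ Vt[:r]      # (T x T) projector: one matmul
    return data @ P

# In an iterative scheme one would alternate this projection with
# re-insertion of the acquired k-space samples until convergence.
```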
Blind source separation and localization using microphone arrays
NASA Astrophysics Data System (ADS)
Sun, Longji
The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the largest sums of squared amplitudes are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization of audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
NASA Astrophysics Data System (ADS)
Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.
2015-12-01
We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
2014-01-01
The linear algebraic concept of a subspace plays a significant role in recent spectrum estimation techniques. In this article, the authors utilize the noise subspace concept to find hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify protein-coding regions in DNA is rising. Several DNA feature extraction techniques involving various cross fields have emerged in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. The proposed method is compared with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, on several genes from various organisms, and the results show that the proposed method offers a more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method.
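For context, the sketch below computes the baseline period-3 measure that such subspace methods improve upon: the DFT power at frequency 2π/3 of binary indicator sequences over a sliding window. It only illustrates the 3-base periodicity property; the paper's least-norm noise-subspace estimator is a different, sharper estimator, and the window and step sizes here are illustrative assumptions.

```python
import numpy as np

def period3_spectrum(seq, window=351, step=3):
    """Sliding DFT power at 2*pi/3 for a DNA string (e.g., 'ATGGC...')."""
    # Voss mapping: one binary indicator track per nucleotide.
    tracks = [np.array([c == b for c in seq], dtype=float) for b in "ACGT"]
    # Complex exponential at the period-3 frequency bin.
    w = np.exp(-2j * np.pi / 3 * np.arange(window))
    scores = []
    for start in range(0, len(seq) - window + 1, step):
        s = sum(abs(np.dot(t[start:start + window], w)) ** 2 for t in tracks)
        scores.append(s)
    return np.asarray(scores)   # peaks suggest coding (period-3) regions
```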
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of the techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the system parameters estimated by the presented identification method at low and high levels of excitation amplitude.
NASA Astrophysics Data System (ADS)
Sarradj, Ennes
2010-04-01
Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimations of absolute source levels and in some cases also from low resolution. Deconvolution approaches such as DAMAS have better performance, but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications with focus on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
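The following is a minimal sketch of the eigenvalue-based idea: keep only the signal-subspace eigenpairs of the cross spectral matrix and map each onto a grid of candidate steering vectors. It is an illustrative simplification under the assumption of unit-norm steering vectors, not the paper's exact estimator.

```python
import numpy as np

def eigenvalue_source_levels(csm, steering, n_sources):
    """Map signal-subspace eigenpairs of a CSM onto a focus grid.

    csm      : (M, M) cross spectral matrix of microphone signals
    steering : (M, K) candidate steering vectors, unit-normalized columns
    Returns a length-K array of estimated source levels on the grid.
    """
    w, V = np.linalg.eigh(csm)          # ascending eigenvalues
    levels = np.zeros(steering.shape[1])
    # Accumulate contributions of the n_sources largest eigenpairs,
    # i.e., the signal subspace; noise eigenvalues are discarded.
    for i in range(1, n_sources + 1):
        lam, v = w[-i], V[:, -i]
        levels += lam * np.abs(steering.conj().T @ v) ** 2
    return levels
```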
Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á
2018-03-01
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
NASA Astrophysics Data System (ADS)
De Filippis, G.; Noël, J. P.; Kerschen, G.; Soria, L.; Stephan, C.
2017-09-01
The introduction of the frequency-domain nonlinear subspace identification (FNSI) method in 2013 constitutes one in a series of recent attempts toward developing a realistic, first-generation framework applicable to complex structures. While this method showed promising capabilities when applied to academic structures, it is still confronted with a number of limitations which need to be addressed. In particular, the removal of nonphysical poles in the identified nonlinear models is a distinct challenge. In the present paper, it is proposed as a first contribution to operate directly on the identified state-space matrices to carry out spurious pole removal. A modal-space decomposition of the state and output matrices is examined to discriminate genuine from numerical poles, prior to estimating the extended input and feedthrough matrices. The final state-space model thus contains physical information only and naturally leads to nonlinear coefficients free of spurious variations. Besides spurious variations due to nonphysical poles, vibration modes lying outside the frequency band of interest may also produce drifts of the nonlinear coefficients. The second contribution of the paper is to include residual terms, accounting for the existence of these modes. The proposed improved FNSI methodology is validated numerically and experimentally using a full-scale structure, the Morane-Saulnier Paris aircraft.
N-Screen Aware Multicriteria Hybrid Recommender System Using Weight Based Subspace Clustering
Ullah, Farman; Lee, Sungchang
2014-01-01
This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices in different locations and at different times, and can change devices while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering the user's N-screen device attributes, such as screen resolution, media codec, remaining battery time, and access network, and the user's temporal usage pattern information, which are not considered in existing recommender systems. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, manager, and N-screen control server to acquire and manage the user's N-screen device profiles. Furthermore, a multicriteria hybrid framework is suggested that incorporates the N-screen device information with user preferences and demographics. In addition, we propose an individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and each feature within a subspace in the hybrid framework. The proposed system improves accuracy, precision, and scalability, and mitigates sparsity and cold start issues. The simulation results demonstrate the effectiveness of the proposed system and support these claims.
Wavelet Analyses of F/A-18 Aeroelastic and Aeroservoelastic Flight Test Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
1997-01-01
Time-frequency signal representations combined with subspace identification methods were used to analyze aeroelastic flight data from the F/A-18 Systems Research Aircraft (SRA) and aeroservoelastic data from the F/A-18 High Alpha Research Vehicle (HARV). The F/A-18 SRA data were produced from a wingtip excitation system that generated linear frequency chirps and logarithmic sweeps. HARV data were acquired from digital Schroeder-phased and sinc pulse excitation signals to actuator commands. Nondilated continuous Morlet wavelets implemented as a filter bank were chosen for the time-frequency analysis to eliminate phase distortion as it occurs with sliding window discrete Fourier transform techniques. Wavelet coefficients were filtered to reduce effects of noise and nonlinear distortions identically in all inputs and outputs. Cleaned reconstructed time domain signals were used to compute improved transfer functions. Time and frequency domain subspace identification methods were applied to enhanced reconstructed time domain data and improved transfer functions, respectively. The time domain subspace method performed poorly, even with the enhanced data, compared with frequency domain techniques. A frequency domain subspace method is shown to produce better results with the data processed using the Morlet time-frequency technique.
NASA Astrophysics Data System (ADS)
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
NASA Astrophysics Data System (ADS)
Chen, Dan; Guo, Lin-yuan; Wang, Chen-hao; Ke, Xi-zheng
2017-07-01
Equalization can compensate for channel distortion caused by multipath effects, and effectively improve the convergence of the modulation constellation diagram in optical wireless systems. In this paper, the subspace blind equalization algorithm is used to preprocess M-ary phase shift keying (MPSK) subcarrier modulation signals in the receiver. Mountain clustering is adopted to obtain the clustering centers of the MPSK modulation constellation diagram, and the modulation order is automatically identified through the k-nearest neighbor (KNN) classifier. Experiments were conducted under four different weather conditions. The results show that the convergence of the constellation diagram is improved effectively after applying the subspace blind equalization algorithm, which increases the accuracy of modulation recognition. The correct recognition rate for 16PSK reaches up to 85% in all of the weather conditions considered in the paper; it is highest in cloudy conditions and lowest in heavy rain.
Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y
1997-09-01
This paper proposes a method of localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise, but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.
Subspace aware recovery of low rank and jointly sparse signals
Biswas, Sampurna; Dasgupta, Soura; Mudumbai, Raghuraman; Jacob, Mathews
2017-01-01
We consider the recovery of a matrix X, which is simultaneously low rank and jointly sparse, from few measurements of its columns using a two-step algorithm. Each column of X is measured using a combination of two measurement matrices; one is the same for every column, while the second measurement matrix varies from column to column. The recovery proceeds by first estimating the row subspace vectors from the measurements corresponding to the common matrix. The estimated row subspace vectors are then used to recover X from all the measurements using a convex program of joint sparsity minimization. Our main contribution is to provide sufficient conditions on the measurement matrices that guarantee the recovery of such a matrix using the above two-step algorithm. The results demonstrate quite significant savings in the number of measurements when compared to the standard multiple measurement vector (MMV) scheme, which assumes the same time-invariant measurement pattern for all the time frames. We illustrate the impact of the sampling pattern on reconstruction quality using breath held cardiac cine MRI and cardiac perfusion MRI data, while the utility of the algorithm to accelerate the acquisition is demonstrated on MR parameter mapping.
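To make the two-step structure concrete, here is a minimal sketch for the noiseless case, with ordinary least squares standing in for the paper's joint-sparsity convex program; variable names and shapes are illustrative assumptions.

```python
import numpy as np

def two_step_recovery(Y_common, A_list, y_list, r):
    """Sketch of the two-step recovery of a rank-r matrix X.

    Y_common : measurements of X through the matrix shared by all columns
    A_list   : per-column measurement matrices A_k, each (p, m)
    y_list   : per-column measurements y_k = A_k X[:, k]
    """
    # Step 1: row-subspace estimate; if the shared matrix preserves rank,
    # the row space of Y_common equals the row space of X.
    _, _, Vt = np.linalg.svd(Y_common, full_matrices=False)
    V = Vt[:r].T                                   # (n_cols, r) basis
    # Step 2: write X = B V^T, so y_k = A_k B V[k]; this is linear in B.
    # (The paper minimizes an l21-type norm here; least squares is a
    # simplification for illustration.)
    m = A_list[0].shape[1]
    M = np.vstack([np.kron(A_k, V[k][None, :])     # rows act on vec(B)
                   for k, A_k in enumerate(A_list)])
    b = np.concatenate(y_list)
    B = np.linalg.lstsq(M, b, rcond=None)[0].reshape(m, r)
    return B @ V.T
```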
Automated modal parameter estimation using correlation analysis and bootstrap sampling
NASA Astrophysics Data System (ADS)
Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.
2018-02-01
The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to a three-dimensional feature space to assign a degree of physicalness to each cluster. The proposed algorithm is applied to two case studies: one with synthetic data and one with real test data obtained from a hammer impact test. The results indicate that the algorithm successfully clusters similar modes and gives a reasonable quantification of the extent to which each cluster is physical.
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
Rogers, L J; Douglas, R R
1984-02-01
In this paper (the second in a series), we consider a (generic) pair of datasets, which have been analyzed by the techniques of the previous paper. Thus, their "stable subspaces" have been established by comparative factor analysis. The pair of datasets must satisfy two confirmable conditions. The first is the "Inclusion Condition," which requires that the stable subspace of one of the datasets is nearly identical to a subspace of the other dataset's stable subspace. On the basis of that, we have assumed the pair to have similar generating signals, with stochastically independent generators. The second verifiable condition is that the (presumed same) generating signals have distinct ratios of variances for the two datasets. Under these conditions a small elaboration of some elementary linear algebra reduces the rotation problem to several eigenvalue-eigenvector problems. Finally, we emphasize that an analysis of each dataset by the method of Douglas and Rogers (1983) is an essential prerequisite for the useful application of the techniques in this paper. Nonempirical methods of estimating the number of factors simply will not suffice, as confirmed by simulations reported in the previous paper.
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing a low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank-minimization framework by adopting least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of the original data can be paired with some informative structure by imposing an appropriate constraint, e.g., a Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear-norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
Minimal Krylov Subspaces for Dimension Reduction
2013-01-01
these applications realized a maximal compute time improvement with minimal Krylov subspaces. More recently, Halko et al. [36] have investigated... Halko et al. proposed a variety of them in [36], but we focus on the "direct eigenvalue approximation for Hermitian matrices with random... result due to Halko et al. Theorem 5 (Halko, Martinsson and Tropp [36]). Let A ∈ R^(n×m) be the input matrix with partitioned singular value
Improvement in the Accuracy of Matching by Different Feature Subspaces in Traffic Sign Recognition
NASA Astrophysics Data System (ADS)
Ihara, Arihito; Fujiyoshi, Hironobu; Takaki, Masanari; Kumon, Hiroaki; Tamatsu, Yukimasa
A technique for recognizing traffic signs in images taken with an in-vehicle camera has already been proposed as a driver assistance function. SIFT features are used for traffic sign recognition because they are robust to changes in the scale and rotation of the traffic sign. However, real-time processing is difficult because the computational cost of SIFT feature extraction and matching is high. This paper presents a method of traffic sign recognition based on a keypoint classifier trained by AdaBoost using PCA-SIFT features in different feature subspaces. Each subspace is constructed from the gradients of traffic sign images and of general images, respectively. A detected keypoint is projected onto both subspaces, and AdaBoost is then employed to classify whether or not the keypoint lies on a traffic sign. Experimental results show that the computational cost of keypoint matching can be reduced to about half that of the conventional method.
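As a hedged sketch of the dual-subspace idea, the snippet below builds two PCA subspaces from synthetic stand-ins for sign-image and general-image gradient descriptors, uses the coordinates in both subspaces as the feature vector, and trains an AdaBoost keypoint classifier. The 128-dimensional descriptors, subspace dimensions, and data generation are all illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

# Dual-subspace keypoint classifier sketch on synthetic "gradient descriptors".
rng = np.random.default_rng(3)
def patches(basis, n):
    # Descriptors lying near the span of `basis`, plus small isotropic noise.
    return rng.standard_normal((n, basis.shape[0])) @ basis + 0.1 * rng.standard_normal((n, 128))

sign_basis = rng.standard_normal((6, 128))       # stand-in: sign-image gradient subspace
bg_basis = rng.standard_normal((6, 128))         # stand-in: general-image gradient subspace
X = np.vstack([patches(sign_basis, 500), patches(bg_basis, 500)])
y = np.r_[np.ones(500), np.zeros(500)]           # 1 = keypoint on sign, 0 = background

pca_sign = PCA(n_components=6).fit(patches(sign_basis, 2000))
pca_bg = PCA(n_components=6).fit(patches(bg_basis, 2000))

# Feature vector: coordinates of each descriptor in both subspaces.
F = np.hstack([pca_sign.transform(X), pca_bg.transform(X)])
clf = AdaBoostClassifier(n_estimators=50).fit(F, y)
print("training accuracy:", clf.score(F, y))
```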
Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise.
Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan
2018-03-09
This paper deals with the joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar under the coexistence of unknown mutual coupling and spatial colored noise, by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated to capture the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of the colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed to eliminate the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to obtain the DODs and DOAs. Compared with existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance and automatically pairs the DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method.
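The HOSVD step this approach relies on reduces to SVDs of the mode-n unfoldings of the data tensor. Below is a minimal numpy sketch on a toy receive x transmit x pulse tensor with two simulated targets; array sizes, angles, and noise level are illustrative, and the subsequent ESPRIT (rotational invariance) stage is omitted.

```python
import numpy as np

# HOSVD sketch: factor matrices are the left singular vectors of the mode-n
# unfoldings of the received-data tensor (receive x transmit x pulse).
rng = np.random.default_rng(5)
ar = lambda th: np.exp(1j * np.pi * np.arange(6) * np.sin(th))   # receive steering
at = lambda th: np.exp(1j * np.pi * np.arange(8) * np.sin(th))   # transmit steering
S = rng.standard_normal((2, 50))                                 # target amplitudes
T = (ar(0.5)[:, None, None] * at(-0.2)[None, :, None] * S[0]
     + ar(-0.4)[:, None, None] * at(0.3)[None, :, None] * S[1]
     + 0.1 * (rng.standard_normal((6, 8, 50)) + 1j * rng.standard_normal((6, 8, 50))))

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

K = 2                                    # number of targets, assumed known here
U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :K] for n in range(3)]

# U[0] and U[1] estimate the receive and transmit signal subspaces; ESPRIT-type
# rotational invariance applied to them would yield the paired DOAs and DODs.
fit = np.linalg.norm(U[0] @ U[0].conj().T @ ar(0.5)) / np.linalg.norm(ar(0.5))
print("receive subspace captures ar(0.5):", round(float(fit), 3))   # ~1.0
```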
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-02-26
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing-angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices while the cross-correlation matrices are retained in full. The pair-matching of the two parameters is then achieved by extracting the diagonal elements of the URA. Thus, the proposed method reduces computational complexity, suppresses the effect of additive noise, and incurs little information loss. Simulation results show that, in LGA conditions, the proposed method achieves improved performance over other methods under both white and colored noise.
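The classical spatial-differencing operation that this family of techniques builds on exploits the persymmetry of a Hermitian Toeplitz noise covariance, for which J R* J reproduces the noise term exactly. The sketch below is a generic illustration of that cancellation on a toy uniform linear array; it is not the SDMS rearrangement itself.

```python
import numpy as np

# Classical spatial differencing: for Hermitian Toeplitz (colored) noise Q,
# J Q* J = Q, so R - J R* J cancels the noise while retaining (transformed)
# signal terms.
rng = np.random.default_rng(4)
m = 8
a = lambda th: np.exp(1j * np.pi * np.arange(m) * np.sin(th))   # ULA steering vector

# Hermitian Toeplitz noise covariance (spatially colored noise).
c = 0.9 ** np.arange(m) * np.exp(1j * 0.5 * np.arange(m))
Q = np.array([[c[i - j] if i >= j else np.conj(c[j - i]) for j in range(m)]
              for i in range(m)])

R = 2.0 * np.outer(a(0.4), a(0.4).conj()) + Q        # one source plus colored noise
J = np.fliplr(np.eye(m))
R_diff = R - J @ R.conj() @ J                        # noise term cancels exactly here

print("noise residual:", np.linalg.norm(J @ Q.conj() @ J - Q))   # 0 by persymmetry
print("signal energy kept:", np.linalg.norm(R_diff))
```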
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method.
3D source localization of interictal spikes in epilepsy patients with MRI lesions
NASA Astrophysics Data System (ADS)
Ding, Lei; Worrell, Gregory A.; Lagerlund, Terrence D.; He, Bin
2006-08-01
The present study aims to accurately localize epileptogenic regions which are responsible for epileptic activities in epilepsy patients by means of a new subspace source localization approach, i.e. first principle vectors (FINE), using scalp EEG recordings. Computer simulations were first performed to assess source localization accuracy of FINE in the clinical electrode set-up. The source localization results from FINE were compared with the results from a classic subspace source localization approach, i.e. MUSIC, and their differences were tested statistically using the paired t-test. Other factors influencing the source localization accuracy were assessed statistically by ANOVA. The interictal epileptiform spike data from three adult epilepsy patients with medically intractable partial epilepsy and well-defined symptomatic MRI lesions were then studied using both FINE and MUSIC. The comparison between the electrical sources estimated by the subspace source localization approaches and MRI lesions was made through the coregistration between the EEG recordings and MRI scans. The accuracy of estimations made by FINE and MUSIC was also evaluated and compared by R2 statistic, which was used to indicate the goodness-of-fit of the estimated sources to the scalp EEG recordings. The three-concentric-spheres head volume conductor model was built for each patient with three spheres of different radii which takes the individual head size and skull thickness into consideration. The results from computer simulations indicate that the improvement of source spatial resolvability and localization accuracy of FINE as compared with MUSIC is significant when simulated sources are closely spaced, deep, or signal-to-noise ratio is low in a clinical electrode set-up. The interictal electrical generators estimated by FINE and MUSIC are in concordance with the patients' structural abnormality, i.e. MRI lesions, in all three patients. The higher R2 values achieved by FINE than MUSIC indicate that FINE provides a more satisfactory fitting of the scalp potential measurements than MUSIC in all patients. The present results suggest that FINE provides a useful brain source imaging technique, from clinical EEG recordings, for identifying and localizing epileptogenic regions in epilepsy patients with focal partial seizures. The present study may lead to the establishment of a high-resolution source localization technique from scalp-recorded EEGs for aiding presurgical planning in epilepsy patients.
Current harmonics elimination control method for six-phase PM synchronous motor drives.
Yuan, Lei; Chen, Ming-liang; Shen, Jian-qing; Xiao, Fei
2015-11-01
To reduce the undesired 5th and 7th stator harmonic currents in the six-phase permanent magnet synchronous motor (PMSM), an improved vector control algorithm is proposed based on the vector space decomposition (VSD) transformation, which controls the fundamental and harmonic subspaces separately. To improve the traditional VSD technique, a novel synchronous rotating coordinate transformation matrix is presented; with it, a conventional PI controller in the d-q subspace suffices to achieve zero steady-state error, and the controller parameters are designed using the internal model principle. Moreover, a current PI controller in parallel with a resonant controller is employed in the x-y subspace to compensate the specific 5th and 7th harmonic components. In addition, a new six-phase SVPWM algorithm based on VSD transformation theory is also proposed. Simulation and experimental results verify the effectiveness of the current-decoupling vector controller.
Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E
2017-06-01
The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets ...
Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis
2015-01-01
We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods.
Detecting and characterizing coal mine related seismicity in the Western U.S. using subspace methods
NASA Astrophysics Data System (ADS)
Chambers, Derrick J. A.; Koper, Keith D.; Pankow, Kristine L.; McCarter, Michael K.
2015-11-01
We present an approach for subspace detection of small seismic events that includes methods for estimating magnitudes and associating detections from multiple stations into unique events. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalogue as ground truth, we assess detector performance in terms of verified detections, false positives and failed detections. We are able to correctly identify over 95 per cent of the surface coal mine blasts and about 33 per cent of the events from the underground mining district, while keeping the number of potential false positives relatively low by requiring all detections to occur on two stations. We find that most of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogues. We note a trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal-to-noise ratio, and stations at larger distances, which have greater waveform similarity. We also explore the increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, in identifying events that can be described as linear combinations of training events. We find, in our data set, that such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
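For readers unfamiliar with subspace detectors, the following minimal numpy sketch captures the core computation: an orthonormal basis spanning a set of aligned training waveforms is obtained by SVD, and the detection statistic is the fraction of sliding-window energy captured by that basis. Waveform lengths, subspace dimension, and thresholds are illustrative assumptions; the station association and magnitude estimation steps of the paper are omitted.

```python
import numpy as np

# Subspace detector sketch: basis U from aligned training events, then a scan
# of a continuous stream with the captured-energy detection statistic.
rng = np.random.default_rng(6)
n = 200                                           # template length in samples
base = np.convolve(rng.standard_normal(n), np.hanning(31), 'same')
train = np.array([base + 0.3 * np.convolve(rng.standard_normal(n), np.hanning(31), 'same')
                  for _ in range(5)])             # five similar training events

d = 2                                             # subspace dimension, chosen a priori
U = np.linalg.svd(train.T, full_matrices=False)[0][:, :d]

stream = 0.2 * rng.standard_normal(5000)
stream[1000:1200] += train[0]                     # bury one event in noise

stat = np.zeros(len(stream) - n)
for k in range(len(stat)):
    x = stream[k:k + n]
    stat[k] = np.sum((U.T @ x) ** 2) / np.sum(x ** 2)   # fraction of energy captured
print("detection peak at sample:", int(stat.argmax()))   # expect ~1000
```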
NASA Astrophysics Data System (ADS)
Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Dai, Lianrong; Draayer, Jerry P.
2018-05-01
An extended pairing Hamiltonian that describes multi-pair interactions among isospin T = 1 and angular momentum J = 0 neutron-neutron, proton-proton, and neutron-proton pairs in a spherical mean field, such as the spherical shell model, is proposed based on the standard T = 1 pairing formalism. The advantage of the model lies in the fact that numerical solutions within the seniority-zero symmetric subspace can be obtained more easily and with less computational time than those calculated from the mean-field plus standard T = 1 pairing model. Thus, large-scale calculations within the seniority-zero symmetric subspace of the model are feasible. As an example of the application, the average neutron-proton interaction in even-even N ∼ Z nuclei that can be suitably described in the f5pg9 shell is estimated in the present model, with a focus on the role of np-pairing correlations.
Simultaneous Multislice Magnetic Resonance Fingerprinting with Low-Rank and Subspace Modeling
Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin
2018-01-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.
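The subspace model at the heart of this reconstruction admits a compact sketch: a dictionary of simulated signal evolutions is compressed by SVD, and each voxel's fingerprint is represented by a few coefficients in the resulting temporal subspace. In the toy code below, decaying exponentials stand in for Bloch-simulated fingerprints, and the SMS/k-space sampling model and ADMM solver are omitted.

```python
import numpy as np

# Temporal-subspace sketch: compress a fingerprint dictionary by SVD, then
# represent a voxel's time course by K subspace coefficients.
rng = np.random.default_rng(7)
t = np.arange(500)                                # time points in the sequence
T2s = np.linspace(20, 300, 200)                   # candidate relaxation values (ms)
D = np.exp(-t[None, :] / T2s[:, None])            # surrogate dictionary, one row each

_, s, Vt = np.linalg.svd(D, full_matrices=False)
K = 5
Phi = Vt[:K]                                      # temporal subspace basis (K x 500)
print("dictionary energy kept:", (s[:K] ** 2).sum() / (s ** 2).sum())

# A voxel fingerprint is modeled as x = Phi^T c; with orthonormal rows of Phi,
# the coefficient fit is a simple projection.
x = np.exp(-t / 80.0) + 0.01 * rng.standard_normal(t.size)
c = Phi @ x
print("fit error:", np.linalg.norm(Phi.T @ c - x) / np.linalg.norm(x))
```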
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies that rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters that appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
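The alternation at the core of the algorithm, a discriminative projection computed from current labels followed by re-clustering in that projection, can be sketched with scikit-learn as follows. The synthetic waveforms, cluster count, and iteration count are illustrative, and the paper's outlier handling and automatic model-order test are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

# Iterative discriminative-subspace sketch: LDA projection from current labels,
# then GMM re-clustering in that subspace.
rng = np.random.default_rng(8)
waveforms = np.vstack([rng.standard_normal((300, 40)) + mu
                       for mu in (0.0, 1.5, -1.5)])          # 3 synthetic units

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(waveforms)
for _ in range(5):
    lda = LinearDiscriminantAnalysis(n_components=2).fit(waveforms, labels)
    feats = lda.transform(waveforms)                         # discriminative subspace
    labels = GaussianMixture(n_components=3, random_state=0).fit_predict(feats)
print(np.bincount(labels))                                   # ~300 spikes per cluster
```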
Emad, Amin; Milenkovic, Olgica
2014-01-01
We introduce a novel algorithm for inference of causal gene interactions, termed CaSPIAN (Causal Subspace Pursuit for Inference and Analysis of Networks), which is based on coupling compressive sensing and Granger causality techniques. The core of the approach is to discover sparse linear dependencies between shifted time series of gene expressions using a sequential list-version of the subspace pursuit reconstruction algorithm and to estimate the direction of gene interactions via Granger-type elimination. The method is conceptually simple and computationally efficient, and it allows for dealing with noisy measurements. Its performance as a stand-alone platform without biological side-information was tested on simulated networks, on the synthetic IRMA network in Saccharomyces cerevisiae, and on data pertaining to the human HeLa cell network and the SOS network in E. coli. The results produced by CaSPIAN are compared to the results of several related algorithms, demonstrating significant improvements in inference accuracy of documented interactions. These findings highlight the importance of Granger causality techniques for reducing the number of false-positives, as well as the influence of noise and sampling period on the accuracy of the estimates. In addition, the performance of the method was tested in conjunction with biological side information of the form of sparse "scaffold networks", to which new edges were added using available RNA-seq or microarray data. These biological priors aid in increasing the sensitivity and precision of the algorithm in the small sample regime.
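CaSPIAN's reconstruction engine is a list-sequential variant of subspace pursuit; the generic subspace pursuit iteration it builds on is sketched below for recovering a K-sparse vector from y = Ax. The Granger-causality elimination step is not shown, and the problem sizes are illustrative.

```python
import numpy as np

# Generic subspace pursuit sketch: iterate support merging, least squares,
# and pruning until the K-sparse support stabilizes.
rng = np.random.default_rng(9)
n, p, K = 80, 200, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
x = np.zeros(p)
x[rng.choice(p, K, replace=False)] = rng.standard_normal(K)
y = A @ x

support = np.argsort(np.abs(A.T @ y))[-K:]                  # initial support
for _ in range(10):
    r = y - A[:, support] @ np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])   # merge supports
    b = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
    support = cand[np.argsort(np.abs(b))[-K:]]              # prune to K largest
x_hat = np.zeros(p)
x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
print("support recovered:", sorted(support) == sorted(np.flatnonzero(x)))
```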
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina
Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish
2016-01-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
Updating Hawaii Seismicity Catalogs with Systematic Relocations and Subspace Detectors
NASA Astrophysics Data System (ADS)
Okubo, P.; Benz, H.; Matoza, R. S.; Thelen, W. A.
2015-12-01
We continue the systematic relocation of seismicity recorded in Hawai`i by the United States Geological Survey's (USGS) Hawaiian Volcano Observatory (HVO), with the aim of adding to the products derived from the relocated seismicity catalogs published by Matoza et al. (2013, 2014). Another goal of this effort is to update the systematically relocated HVO catalog from 2009 onward, when earthquake cataloging at HVO was migrated to the USGS Advanced National Seismic System Quake Management Software (AQMS) systems. To complement the relocation analyses of the catalogs generated from traditional STA/LTA event-triggered and analyst-reviewed approaches, we are also experimenting with subspace detection of events at Kilauea as a means to augment AQMS procedures for cataloging seismicity to lower magnitudes and during episodes of elevated volcanic activity. Our earlier catalog relocations have demonstrated the ability to define correlated or repeating families of earthquakes and to provide more detailed definition of seismogenic structures, as well as the capability for improved automatic identification of diverse volcanic seismic sources. Subspace detectors have been successfully applied to cataloging seismicity in situations of low seismic signal-to-noise and have significantly increased catalog sensitivity to lower magnitude thresholds. We anticipate similar improvements using event subspace detections and cataloging of volcanic seismicity, including improved discrimination among not only evolving earthquake sequences but also diverse volcanic seismic source processes. References: Matoza et al. (2013), Systematic relocation of seismicity on Hawai`i Island from 1992 to 2009 using waveform cross correlation and cluster analysis, J. Geophys. Res., 118, 2275-2288, doi:10.1002/jgrb.50189; Matoza et al. (2014), High-precision relocation of long-period events beneath the summit region of Kīlauea Volcano, Hawai`i, from 1986 to 2009, Geophys. Res. Lett., 41, 3413-3421, doi:10.1002/2014GL059819.
Development of Parallel Architectures for Sensor Array Processing. Volume 1
1993-08-01
required for the DOA estimation [1-7]. The Multiple Signal Classification (MUSIC) [1] and the Estimation of Signal Parameters by Rotational... manifold and the estimated subspace. Although MUSIC is a high resolution algorithm, it has several drawbacks including the fact that complete knowledge of... thoroughly, the MUSIC algorithm was selected to develop special purpose hardware for real time computation. A summary of the MUSIC algorithm is as follows
Subspace-based interference removal methods for a multichannel biomagnetic sensor array.
Sekihara, Kensuke; Nagarajan, Srikantan S
2017-10-01
In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
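The time-domain projection these methods share is a one-line operation once the interference time courses are available. Below is a minimal numpy sketch, assuming the interference time courses are known exactly; in practice they must be estimated, which is where the reviewed methods differ. Note that any brain activity correlated with those time courses is removed along with the interference.

```python
import numpy as np

# Time-domain signal space projection: remove the temporal span of known
# interference time courses from a multichannel recording.
rng = np.random.default_rng(10)
channels, samples = 30, 1000
interference = rng.standard_normal((2, samples))            # interference time courses
mixing = rng.standard_normal((channels, 2))                 # their sensor patterns
brain = 0.5 * rng.standard_normal((channels, samples))
Y = brain + mixing @ interference

# Orthonormalize the interference rows, then project onto their orthogonal complement.
Q, _ = np.linalg.qr(interference.T)                         # (samples, 2)
Y_clean = Y - (Y @ Q) @ Q.T                                 # time-domain projection

print("residual interference:",
      np.linalg.norm(Y_clean @ interference.T) / np.linalg.norm(Y @ interference.T))
```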
Indoor Subspacing to Implement IndoorGML for Indoor Navigation
NASA Astrophysics Data System (ADS)
Jung, H.; Lee, J.
2015-10-01
With the increasing demand for indoor navigation, there have been many attempts to develop an applicable indoor network. Representing an entire room as a single node is not sufficient for complex and large buildings. With the establishment of IndoorGML by the OGC, subspacing, which partitions indoor space to construct a logical network, was introduced. When subspacing for an indoor network, transition spaces such as halls or corridors also have to be considered. This study presents a subspacing process for creating an indoor network in a shopping mall. Furthermore, transition spaces are categorized and their subspacing is considered; halls and squares in the mall are specifically defined for subspacing. Finally, an implementation of the subspacing process for the indoor network is presented.
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei
2017-06-01
A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) is presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as the prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with the performance of PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and it especially identifies good endmembers for ground objects with smaller spectrum divergences.
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness.
NASA Technical Reports Server (NTRS)
Erickson, Gary E.
2010-01-01
Response surface methodology was used to estimate the longitudinal stage separation aerodynamic characteristics of a generic, bimese, winged multi-stage launch vehicle configuration at supersonic speeds in the NASA LaRC Unitary Plan Wind Tunnel. The Mach 3 staging was dominated by shock wave interactions between the orbiter and booster vehicles throughout the relative spatial locations of interest. The inference space was partitioned into several contiguous regions within which the separation aerodynamics were presumed to be well-behaved and estimable using central composite designs capable of fitting full second-order response functions. The underlying aerodynamic response surfaces of the booster vehicle in belly-to-belly proximity to the orbiter vehicle were estimated using piecewise-continuous lower-order polynomial functions. The quality of fit and prediction capabilities of the empirical models were assessed in detail, and the issue of subspace boundary discontinuities was addressed. Augmenting the central composite designs to full third-order using computer-generated D-optimality criteria was evaluated. The usefulness of central composite designs, the subspace sizing, and the practicality of fitting lower-order response functions over a partitioned inference space dominated by highly nonlinear and possibly discontinuous shock-induced aerodynamics are discussed.
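A second-order response surface over one partition of the inference space is just a six-parameter quadratic fit to a designed set of runs. The sketch below fits such a model to a face-centered central composite design in two factors; the synthetic quadratic "response" and the design levels are illustrative assumptions, not the wind-tunnel data.

```python
import numpy as np

# Fit a full second-order response surface to a face-centered central
# composite design (CCD) in two coded factors.
rng = np.random.default_rng(11)
ccd = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],         # factorial points
                [-1, 0], [1, 0], [0, -1], [0, 1],           # axial (face-centered)
                [0, 0], [0, 0], [0, 0]])                    # center replicates
x1, x2 = ccd[:, 0], ccd[:, 1]
y = (2 + 0.5 * x1 - 1.2 * x2 + 0.8 * x1 * x2 + 0.3 * x1**2
     - 0.6 * x2**2 + 0.05 * rng.standard_normal(len(ccd)))  # synthetic response

# Model: y ~ b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2.
X = np.column_stack([np.ones(len(ccd)), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))            # ~ [2, 0.5, -1.2, 0.8, 0.3, -0.6]
```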
Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.
Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto
2010-05-01
A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar-separable LG-CH functions span the same space as the 2-D Cartesian-separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship between the two sets of expansion coefficients, maximum likelihood (ML) estimates of the orientation and cross-section parameters of 1-D patterns are obtained by projecting them into a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross-section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross-section parameters. The accuracy of the conditional ML estimator is compared to that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and the 2-D HG expansions is also provided.
3D deformable image matching: a hierarchical approach over nested subspaces
NASA Astrophysics Data System (ADS)
Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul
2000-06-01
This paper presents a fast hierarchical method to perform dense deformable inter-subject matching of 3D MR images of the brain. To recover the complex morphological variations in neuroanatomy, a hierarchy of 3D deformation fields is estimated by minimizing a global energy function over a sequence of nested subspaces. The nested subspaces, generated from a single scaling function, consist of deformation fields constrained at different scales. The highly nonlinear energy function, describing the interactions between the target and source images, is minimized using a coarse-to-fine continuation strategy over this hierarchy. The resulting deformable matching method shows low sensitivity to local minima and is able to track large nonlinear deformations with moderate computational load. The performance of the approach is assessed both on simulated 3D transformations and on a real database of 3D brain MR images from different individuals. The method has proven effective at bringing the principal anatomical structures of the brain into correspondence. An application to atlas-based MRI segmentation, by transporting a labeled segmentation map onto patient data, is also presented.
Stochastic subspace identification for operational modal analysis of an arch bridge
NASA Astrophysics Data System (ADS)
Loh, Chin-Hsiung; Chen, Ming-Che; Chao, Shu-Hsien
2012-04-01
In this paper, the application of an output-only system identification technique, known as Stochastic Subspace Identification (SSI), to civil infrastructure is carried out. The ability of covariance-driven stochastic subspace identification (SSI-COV) is demonstrated through analysis of the ambient data of an arch bridge under operational conditions. A newly developed signal processing technique, Singular Spectrum Analysis (SSA), capable of smoothing noisy signals, is adopted to pre-process the recorded data before the SSI. The combination of SSA and SSI-COV provides a useful criterion for determining the system order. With the aim of estimating accurate modal parameters of the structure in off-line analysis, a stabilization diagram is constructed by plotting the identified poles of the system while increasing the size of the data Hankel matrix. The identification task is carried out on a real structure, the Guandu Bridge, to identify the system's natural frequencies and mode shapes. The uncertainty of the identified modal parameters from output-only measurements of the bridge under operational conditions, such as varying temperature and traffic loading, is discussed.
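A stripped-down SSI-COV pass on simulated single-channel data illustrates the pipeline the paper applies at full scale: output covariances are stacked into a block Hankel matrix, its SVD yields an observability matrix, and the shift structure of that matrix gives the state matrix and hence modal frequencies. The oscillator parameters, Hankel size, and fixed model order 2 are illustrative assumptions; the SSA pre-processing and the stabilization diagram are omitted.

```python
import numpy as np

# Minimal SSI-COV sketch: covariance Hankel matrix -> SVD -> observability ->
# state matrix via shift invariance -> modal frequency. Single channel, one mode.
rng = np.random.default_rng(12)
fs, f0, zeta = 100.0, 3.0, 0.02
w = 2 * np.pi * f0
tvec = np.arange(20000) / fs
h = np.exp(-zeta * w * tvec[:500]) * np.sin(w * np.sqrt(1 - zeta**2) * tvec[:500])
y = np.convolve(rng.standard_normal(tvec.size), h, 'full')[:tvec.size]  # ambient response

i = 20                                              # block rows in the Hankel matrix
R = np.array([np.dot(y[k:], y[:len(y) - k]) / (len(y) - k) for k in range(2 * i)])
H = np.array([[R[a + b + 1] for b in range(i)] for a in range(i)])      # covariance Hankel

U, s, _ = np.linalg.svd(H)
n_order = 2                                         # fixed model order for the sketch
O = U[:, :n_order] * np.sqrt(s[:n_order])           # observability (up to similarity)
A = np.linalg.pinv(O[:-1]) @ O[1:]                  # shift invariance
lam = np.linalg.eigvals(A)
freqs = np.abs(np.log(lam)) * fs / (2 * np.pi)      # continuous-time pole magnitudes
print("identified frequency (Hz):", np.round(freqs, 2))   # expect ~3.0 for both poles
```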
NASA Astrophysics Data System (ADS)
Reynders, Edwin; Maes, Kristof; Lombaert, Geert; De Roeck, Guido
2016-01-01
Identified modal characteristics are often used as a basis for the calibration and validation of dynamic structural models, for structural control, for structural health monitoring, etc. It is therefore important to know their accuracy. In this article, a method for estimating the (co)variance of modal characteristics identified with the stochastic subspace identification method is validated for two civil engineering structures. The first structure is a damaged prestressed concrete bridge, for which acceleration and dynamic strain data were measured in 36 different setups. The second structure is a mid-rise building, for which acceleration data were measured in 10 different setups. There is good quantitative agreement between the predicted levels of uncertainty and the observed variability of the eigenfrequencies and damping ratios across the different setups. The method can therefore be used with confidence for quantifying the uncertainty of the identified modal characteristics, also when some or all of them are estimated from a single batch of vibration data. Furthermore, the method is seen to yield valuable insight into the variability of the estimation accuracy from mode to mode and from setup to setup: the more informative a setup is regarding an estimated modal characteristic, the smaller the estimated variance.
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2015-02-01
Multi-frequency subspace migration imaging techniques are usually adopted for the non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. It is confirmed that this technique is very fast, effective, robust, and applicable not only to full- but also to limited-view inverse problems if a suitable number of incident fields and corresponding scattered fields are applied and collected. However, in many works, the application of such techniques is heuristic. Motivated by such heuristic applications, this study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrary-shaped, arc-like perfectly conducting cracks located in two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed by a linear combination of the Bessel functions of integer order of the first kind. This is based on the structure of the Multi-Static Response (MSR) matrix collected in the far-field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of the expression of the imaging functionals yields certain properties of subspace migration and explains why multiple frequencies enhance the imaging resolution. In particular, we carefully analyze subspace migration and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode. Various results of numerical simulations performed on far-field data corrupted by large amounts of random noise are consistent with the analytical results derived in this study, and they provide a direction for future studies.
Learning Robust and Discriminative Subspace With Low-Rank Constraints.
Li, Sheng; Fu, Yun
2016-11-01
In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
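The nuclear-norm subproblem inside inexact ALM solvers of this kind has a closed-form solution: singular value thresholding. A small self-contained sketch follows; the shrinkage threshold and matrix sizes are arbitrary illustrative choices.

```python
import numpy as np

# Singular value thresholding (SVT): the proximal operator of the nuclear norm,
# used as the inner low-rank update in inexact ALM algorithms.
def svt(M, tau):
    """Shrink the singular values of M by tau, zeroing those below tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(13)
noisy = (rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))
         + 0.1 * rng.standard_normal((50, 40)))   # rank-8 matrix plus noise
low_rank = svt(noisy, tau=2.0)
print("rank after shrinkage:", np.linalg.matrix_rank(low_rank))   # ~8
```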
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred
2015-04-01
The Big Data era has begun also in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data volumes, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014 Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinitely many, dimensions. It consists of three subspaces: Monte Carlo design, method, and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Here, too, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates, by means of three examples, that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change-related risks.
Removal of nuisance signals from limited and sparse 1H MRSI data using a union-of-subspaces model.
Ma, Chao; Lam, Fan; Johnson, Curtis L; Liang, Zhi-Pei
2016-02-01
To remove nuisance signals (e.g., water and lipid signals) from 1H MRSI data collected from the brain with limited and/or sparse (k, t)-space coverage. A union-of-subspaces model is proposed for removing nuisance signals. The model exploits the partial separability of both the nuisance signals and the metabolite signal, and it decomposes an MRSI dataset into several sets of generalized voxels that share the same spectral distributions. This model enables the estimation of the nuisance signals from an MRSI dataset that has limited and/or sparse (k, t)-space coverage. The proposed method has been evaluated using in vivo MRSI data. For conventional chemical shift imaging data with limited k-space coverage, the proposed method produced "lipid-free" spectra without lipid suppression during data acquisition at an echo time of 130 ms. For sparse (k, t)-space data acquired with conventional pulses for water and lipid suppression, the proposed method was also able to remove the remaining water and lipid signals with negligible residuals. Nuisance signals in 1H MRSI data reside in low-dimensional subspaces. This property can be utilized for the estimation and removal of nuisance signals from 1H MRSI data even when the data have limited and/or sparse coverage of (k, t)-space. The proposed method should prove useful especially for accelerated high-resolution 1H MRSI of the brain.
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and to seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
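A minimal sketch of the underlying construction, assuming the standard Arnoldi process (variable names and the breakdown tolerance are ours): it builds an orthonormal basis Q of the Krylov subspace together with the small projected matrix H from which approximate solutions are drawn.

```python
import numpy as np

def arnoldi(A, r0, m, tol=1e-12):
    """Build an orthonormal basis Q of span{r0, A r0, ..., A^(m-1) r0} and the
    (m+1) x m Hessenberg matrix H satisfying A Q[:, :m] = Q H."""
    n = r0.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:             # happy breakdown: invariant subspace found
            return Q[:, : j + 1], H[: j + 1, : j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```

Methods such as GMRES then solve a small m-dimensional least-squares problem in H instead of the original N-dimensional system.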
Wolf, Antje; Kirschner, Karl N
2013-02-01
With improvements in computer speed and algorithm efficiency, MD simulations are sampling ever larger numbers of molecular and biomolecular conformations. Being able to qualitatively and quantitatively sift these conformations into meaningful groups is a difficult and important task, especially when considering the structure-activity paradigm. Here we present a study that combines two popular techniques, principal component (PC) analysis and clustering, for revealing major conformational changes that occur in molecular dynamics (MD) simulations. Specifically, we explored how clustering different PC subspaces affects the resulting clusters relative to clustering the complete trajectory data. As a case example, we used the trajectory data from an explicitly solvated simulation of a bacterial L11·23S ribosomal subdomain, which is a target of thiopeptide antibiotics. Clustering was performed, using K-means and average-linkage algorithms, on data involving the first two to the first five PC subspace dimensions. For the average-linkage algorithm we found that data-point membership, cluster shape, and cluster size depended on the selected PC subspace data. In contrast, K-means provided very consistent results regardless of the selected subspace. Since we present results on a single model system, generalization concerning the clustering of different PC subspaces of other molecular systems is currently premature. However, our hope is that this study illustrates a) the complexities in selecting the appropriate clustering algorithm, b) the complexities in interpreting and validating their results, and c) that combining PC analysis with subsequent clustering can yield valuable dynamic and conformational information.
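A hedged sketch of this workflow using scikit-learn stand-ins (the trajectory array, cluster count, and subspace dimensions are hypothetical, not the authors' setup):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering

# Hypothetical aligned trajectory: 1000 frames x 30 atoms x 3 coords, flattened.
coords = np.random.rand(1000, 90)

for k in range(2, 6):  # cluster PC subspaces of dimension 2..5, as in the study
    Y = PCA(n_components=k).fit_transform(coords)
    km_labels = KMeans(n_clusters=4, n_init=10).fit_predict(Y)
    al_labels = AgglomerativeClustering(n_clusters=4,
                                        linkage='average').fit_predict(Y)
    # Comparing km_labels / al_labels across k shows how the choice of PC
    # subspace affects membership, which is the effect examined above.
```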
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
Terao, Takamichi
2010-08-01
We propose a numerical method to calculate interior eigenvalues and corresponding eigenvectors of nonsymmetric matrices. Based on a subspace projection onto an expanded Ritz subspace, the method obtains eigenvalues and eigenvectors with sufficiently high precision. It overcomes the difficulties of the traditional nonsymmetric Lanczos algorithm and improves the accuracy of the computed interior eigenvalues and eigenvectors. Using this algorithm, we investigate three-dimensional metamaterial composites consisting of positive- and negative-refractive-index materials, and we demonstrate that the finite-difference frequency-domain algorithm is applicable to the analysis of these metamaterial composites.
Unsupervised spike sorting based on discriminative subspace learning.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-01-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies that rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering; it uses the histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering, learning a discriminative one-dimensional subspace for clustering at each level of the hierarchy until an almost unimodal distribution is achieved in that subspace. The algorithms are tested on synthetic and in vivo data and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods achieve substantially higher accuracy in a lower-dimensional feature space and are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in subspaces obtained by principal component analysis or the wavelet transform.
NASA Astrophysics Data System (ADS)
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
Xu, Yilei; Roy-Chowdhury, Amit K
2007-05-01
In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.
ERP denoising in multichannel EEG data using contrasts between signal and noise subspaces.
Ivannikov, Andriy; Kalyakin, Igor; Hämäläinen, Jarmo; Leppänen, Paavo H T; Ristaniemi, Tapani; Lyytinen, Heikki; Kärkkäinen, Tommi
2009-06-15
In this paper, a new method intended for ERP denoising in multichannel EEG data is discussed. The denoising is done by separating the ERP and noise subspaces in the multidimensional EEG data by a linear transformation, followed by dimension reduction in which noise components are ignored during the inverse transformation. The separation matrix is found under the assumption that the ERP sources are deterministic across all repetitions of the same type of stimulus within the experiment, while the noise sources do not obey this determinacy property. A detailed derivation of the technique is given, together with an analysis of the results of its application to a real high-density EEG data set. The interpretation of the results and the performance of the proposed method under conditions in which the basic assumptions are violated (e.g., when the problem is underdetermined) are also discussed. Moreover, we study how the number of channels and trials used by the method influences the effectiveness of the ERP/noise subspace separation. In addition, we explore the impact of different data resampling strategies on the performance of the algorithm. The results can help in determining the optimal parameters of the equipment and methods used to elicit and reliably estimate ERPs.
van Aggelen, Helen; Verstichel, Brecht; Bultinck, Patrick; Van Neck, Dimitri; Ayers, Paul W; Cooper, David L
2011-02-07
Variational second-order density matrix theory under "two-positivity" constraints tends to dissociate molecules into unphysical fractionally charged products with too-low energies. We aim to construct a qualitatively correct potential energy surface for F₃⁻ by applying subspace energy constraints on mono- and diatomic subspaces of the molecular basis space. Monoatomic subspace constraints do not guarantee correct dissociation: the constraints are thus geometry dependent. Furthermore, the number of subspace constraints needed for correct dissociation does not grow linearly with the number of atoms. The subspace constraints do impose correct chemical properties in the dissociation limit and size-consistency, but the structure of the resulting second-order density matrix method does not exactly correspond to a system of noninteracting units.
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
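For experimentation, SciPy provides a Jacobian-free realization of this combination of inexact Newton iterations with Krylov inner solves and a line-search globalization; a minimal sketch with a made-up residual function (the function and tolerance are ours):

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(x):
    # Hypothetical nonlinear system F(x) = 0, componentwise monotone.
    return x + np.tanh(x) ** 2 - 1.0

# Inexact Newton steps; each Newton direction is approximated in a small
# Krylov subspace by LGMRES, with a built-in (Armijo) line search.
x = newton_krylov(residual, np.zeros(50), method='lgmres', f_tol=1e-10)
```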
Improving M-SBL for Joint Sparse Recovery Using a Subspace Penalty
NASA Astrophysics Data System (ADS)
Ye, Jong Chul; Kim, Jong Min; Bresler, Yoram
2015-12-01
The multiple measurement vector problem (MMV) is a generalization of the compressed sensing problem that addresses the recovery of a set of jointly sparse signal vectors. One of the important contributions of this paper is to reveal that two seemingly unrelated state-of-the-art MMV joint sparse recovery algorithms - M-SBL (multiple sparse Bayesian learning) and subspace-based hybrid greedy algorithms - have a very important link. More specifically, we show that replacing the $\log\det(\cdot)$ term in M-SBL by a rank proxy that exploits the spark reduction property discovered in subspace-based joint sparse recovery algorithms provides significant improvements. In particular, if we use the Schatten-$p$ quasi-norm as the corresponding rank proxy, the global minimiser of the proposed algorithm becomes identical to the true solution as $p \rightarrow 0$. Furthermore, under the same regularity conditions, we show that convergence to a local minimiser is guaranteed using an alternating minimization algorithm that has closed-form expressions for each of the minimization steps, which are convex. Numerical simulations under a variety of scenarios in terms of SNR and condition number of the signal amplitude matrix demonstrate that the proposed algorithm consistently outperforms M-SBL and other state-of-the-art algorithms.
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results demonstrate that essential information can be obtained by carrying out backward uncertainty propagation analysis.
Nguyen, Phuong H
2007-05-15
Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower-dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from, e.g., molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly because the principal components are not independent of one another, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail.
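A hedged sketch of the two-stage PCA-then-ICA transform using scikit-learn stand-ins (data, dimensions, and names are hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

X = np.random.rand(5000, 30)                      # hypothetical frames x features
pcs = PCA(n_components=4).fit_transform(X)        # conformational PC subspace
ics = FastICA(n_components=4).fit_transform(pcs)  # rotate to independent components

# Per-component 1-D histograms of `ics` define states for each independent
# component; states of the subspace are the observed combinations of these
# per-component states, which is what makes the analysis simpler.
```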
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only the connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions so as to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and that a complex convex problem must be solved. In this paper, we present a novel method that eliminates the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance; i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. The results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least-squares regression, sparse subspace clustering, and locally linear representation.
Low-Cost 3-D Flow Estimation of Blood With Clutter.
Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali
2017-05-01
Volumetric flow rate estimation is an important ultrasound medical imaging modality that is used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods, but it suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal, but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme that combines the low-complexity sum-of-absolute-differences and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and replacing singular value decomposition with the less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter at beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and a standard deviation of 3.1% relative to the actual flow rate.
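A minimal sketch of the low-cost filtering idea (generic subspace iteration on the slow-time covariance; the rank, names, and pipeline details are ours, not the authors' implementation):

```python
import numpy as np

def clutter_filter(X, k, iters=20, seed=0):
    """Estimate the k-dimensional dominant (clutter) subspace of slow-time data
    X (channels x samples) by subspace iteration instead of a full SVD, then
    regress that subspace out of the data."""
    rng = np.random.default_rng(seed)
    C = X @ X.conj().T                     # slow-time covariance (unnormalized)
    Q, _ = np.linalg.qr(rng.standard_normal((C.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(C @ Q)         # power / subspace iteration step
    return X - Q @ (Q.conj().T @ X)        # clutter-suppressed data
```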
General subspace learning with corrupted training data via graph embedding.
Bao, Bing-Kun; Liu, Guangcan; Hong, Richang; Yan, Shuicheng; Xu, Changsheng
2013-11-01
We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.
Nonadiabatic holonomic quantum computation in decoherence-free subspaces.
Xu, G F; Zhang, J; Tong, D M; Sjöqvist, Erik; Kwek, L C
2012-10-26
Quantum computation that combines the coherence stabilization virtues of decoherence-free subspaces and the fault tolerance of geometric holonomic control is of great practical importance. Some schemes of adiabatic holonomic quantum computation in decoherence-free subspaces have been proposed in the past few years. However, nonadiabatic holonomic quantum computation in decoherence-free subspaces, which avoids a long run-time requirement but with all the robust advantages, remains an open problem. Here, we demonstrate how to realize nonadiabatic holonomic quantum computation in decoherence-free subspaces. By using only three neighboring physical qubits undergoing collective dephasing to encode one logical qubit, we realize a universal set of quantum gates.
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
2016-01-01
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling, such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
Hypercyclic subspaces for Fréchet space operators
NASA Astrophysics Data System (ADS)
Petersson, Henrik
2006-07-01
A continuous linear operator T is hypercyclic if there is an x such that the orbit {T^n x} is dense, and such a vector x is said to be hypercyclic for T. Recent progress shows that it is possible to characterize Banach space operators that have a hypercyclic subspace, i.e., an infinite-dimensional closed subspace consisting, except for zero, of hypercyclic vectors. The following is known to hold: a Banach space operator T has a hypercyclic subspace if there is a sequence (n_i) and an infinite-dimensional closed subspace E such that T is hereditarily hypercyclic for (n_i) and T^(n_i) -> 0 pointwise on E. In this note we extend this result to the setting of Fréchet spaces that admit a continuous norm, and we study some applications for important function spaces. As an application we also prove that any infinite-dimensional separable Fréchet space with a continuous norm admits an operator with a hypercyclic subspace.
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key ingredient of the IDR algorithms is the construction of embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization against some fixed subspace. Other, independent approaches to studying and optimizing the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures together with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR methods in Sonneveld subspaces admit an original interpretation as modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
Application of Subspace Detection to the 6 November 2011 M5.6 Prague, Oklahoma Aftershock Sequence
NASA Astrophysics Data System (ADS)
McMahon, N. D.; Benz, H.; Johnson, C. E.; Aster, R. C.; McNamara, D. E.
2015-12-01
Subspace detection is a powerful tool for the identification of small seismic events. Subspace detectors improve upon single-event matched filtering techniques by using multiple orthogonal waveform templates whose linear combinations characterize a range of observed signals from previously identified earthquakes. Subspace detectors running on multiple stations can significantly increase the number of locatable events, lowering the catalog's magnitude of completeness and thus providing extraordinary detail on the kinematics of the aftershock process. The 6 November 2011 M5.6 earthquake near Prague, Oklahoma is the largest earthquake instrumentally recorded in Oklahoma history and the largest earthquake resulting from deep wastewater injection. A M4.8 foreshock on 5 November 2011 and the M5.6 mainshock triggered tens of thousands of detectable aftershocks along a 20 km splay of the Wilzetta Fault Zone known as the Meeker-Prague fault. In response to this unprecedented earthquake, 21 temporary seismic stations were deployed around the seismic activity. We utilized a catalog of 767 previously located aftershocks to construct subspace detectors for the 21 temporary and 10 closest permanent seismic stations. Subspace detection identified more than 500,000 new arrival-time observations, which associated into more than 20,000 locatable earthquakes. The associated earthquakes were relocated using the Bayesloc multiple-event locator, resulting in ~7,000 earthquakes with hypocentral uncertainties of less than 500 m. The relocated seismicity provides unique insight into the spatio-temporal evolution of the aftershock sequence along the Wilzetta Fault Zone and its associated structures. We find that the crystalline basement and overlying sedimentary Arbuckle formation accommodate the majority of aftershocks. While we observe aftershocks along the entire 20 km length of the Meeker-Prague fault, the vast majority of earthquakes were confined to a 9 km wide by 9 km deep surface striking N54°E and dipping 83° to the northwest near the junction of the splay with the main Wilzetta fault structure. Relocated seismicity shows off-fault stress-related interaction to distances of 10 km or more from the mainshock, including clustered seismicity to the northwest and southeast of the mainshock.
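A minimal sketch of the statistic behind such detectors (the standard energy-capture form of a subspace detector; the basis construction and its dimension are our illustrative choices, not the authors' configuration):

```python
import numpy as np

def subspace_statistic(U, x):
    """Fraction of the energy of a data window x captured by the detector
    subspace with orthonormal basis U; values near 1 flag candidate events."""
    c = U.T @ x
    return float(c @ c) / float(x @ x)

def build_basis(templates, d):
    """Orthogonal basis from aligned, normalized template waveforms: stack the
    templates as columns, take the SVD, keep the leading d singular vectors."""
    T = np.column_stack([t / np.linalg.norm(t) for t in templates])
    U, _, _ = np.linalg.svd(T, full_matrices=False)
    return U[:, :d]
```

In practice the statistic is compared against an empirically calibrated threshold as the window slides along the continuous record.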
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data shows that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
Fast and accurate spectral estimation for online detection of partial broken bar in induction motors
NASA Astrophysics Data System (ADS)
Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti
2018-01-01
In this paper, an online, real-time system is presented for detecting a partially broken rotor bar (BRB) in inverter-fed squirrel-cage induction motors under light-load conditions. With minor modifications, the system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of a BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared with available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components is improved by removing the high-amplitude fundamental frequency using an extended-Kalman-filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation was carried out on an Intel i7-based embedded target ported through Simulink Real-Time. Threshold evaluation and fault detectability under different load conditions and fault severities are assessed using the empirical cumulative distribution function.
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla
2013-12-01
To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
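A hedged sketch of the geometric-convergence flavor of such an estimator (our own minimal version, assuming roughly geometric decay of the solution changes; the fallback branch stands in for the more robust low-cost estimate mentioned above):

```python
import numpy as np

def remaining_error(dx_prev, dx_last):
    """Estimate the remaining solution error from two successive solution
    changes, assuming ||x* - x_k|| ~ r/(1 - r) * ||dx_k|| with contraction
    ratio r < 1 inferred from the change history."""
    r = np.linalg.norm(dx_last) / np.linalg.norm(dx_prev)
    if not 0.0 < r < 1.0:
        return None   # noisy or stagnating changes: revert to a robust fallback
    return r / (1.0 - r) * np.linalg.norm(dx_last)
```

Iteration stops once the estimate drops below the requested solution-error tolerance, rather than an arbitrary residual threshold.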
Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals.
Zhao, Ziyue; Liu, Congfeng
2014-01-01
In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method.
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability via a subset selection scheme. The first component is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second component, we propose a novel structural risk minimization algorithm, called adaptive margin slack minimization, that iteratively improves the classification accuracy through adaptive data selection. We motivate each part separately and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
Improving deep convolutional neural networks with mixed maxout units.
Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and that "feature-mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit, called a mixout unit. Specifically, we calculate the exponential probabilities of the feature mappings obtained by applying different convolutional transformations over the same input and then compute the expected values according to these probabilities. Moreover, we introduce a Bernoulli distribution to balance the maximum values with the expected values of the feature-mapping subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
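A toy NumPy reading of that description (our interpretation for a single pooling group, not the authors' implementation; `p_max` is a hypothetical gate probability):

```python
import numpy as np

def mixout(z, p_max=0.5, rng=np.random.default_rng(0)):
    """Toy mixout pooling over k competing feature-map responses z: blend the
    maximum with the softmax-weighted expectation via a Bernoulli gate."""
    e = np.exp(z - z.max())
    probs = e / e.sum()                  # exponential probabilities of the maps
    expected = float((probs * z).sum())  # expected value under those probabilities
    return float(z.max()) if rng.random() < p_max else expected
```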
Fast, Exact Bootstrap Principal Component Analysis for p > 1 million
Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim
2015-01-01
Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods.
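A minimal sketch of the low-dimensional trick (generic, with hypothetical sizes; a real analysis would also center the data before the SVD):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, B = 50, 100_000, 1000            # hypothetical: subjects, measurements, resamples
X = rng.standard_normal((n, p))

U, d, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(d) Vt
A = U * d                              # n x n subject coordinates within span(Vt)

for b in range(B):
    Ab = A[rng.integers(0, n, size=n)]              # resample subjects in n dims only
    _, db, Wt = np.linalg.svd(Ab, full_matrices=False)
    # db are the bootstrap singular values; the p-dimensional bootstrap
    # components are Vt.T @ Wt.T, recoverable on demand, so no p-dimensional
    # vectors need to be computed or stored per bootstrap draw.
```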
Scalable Robust Principal Component Analysis Using Grassmann Averages.
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J
2016-11-01
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
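A hedged sketch of the averaging idea for the leading component (our reading of the sign-alignment fixed-point iteration; convergence checks and the trimming step of TGA are omitted):

```python
import numpy as np

def grassmann_average(X, iters=50, seed=0):
    """Average of the 1-D subspaces spanned by zero-mean observations (rows of
    X): repeatedly align each observation's sign with the current estimate,
    average, and renormalize."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(iters):
        s = np.sign(X @ q)
        s[s == 0] = 1.0                   # arbitrary sign for orthogonal rows
        m = (s[:, None] * X).mean(axis=0) # sign-aligned average observation
        q = m / np.linalg.norm(m)
    return q                              # estimate of the leading direction
```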
Improved neutron-gamma discrimination for a 3He neutron detector using subspace learning methods
Wang, C. L.; Funk, L. L.; Riedel, R. A.; ...
2017-02-10
3He-gas-based neutron linear position-sensitive detectors (LPSDs) have been applied in many neutron scattering instruments. Traditional Pulse-Height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) has yielded neutron-gamma efficiency ratios on the order of 10^5-10^6. The NGD ratios of 3He detectors need to be improved for even better scientific results from neutron scattering. Digital Signal Processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher linear discriminant analysis (FLDA) and three multivariate analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10^2-10^3 times compared with the traditional PHA method. Our results indicate that the NGD capabilities of 3He tube detectors can be significantly improved with subspace-learning-based methods, which may result in reduced data-collection time and better data quality for further data reduction.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Lyapunov matrix equations, and nonlinear systems of equations is discussed.
Beyond union of subspaces: Subspace pursuit on Grassmann manifold for data representation
Shen, Xinyue; Krim, Hamid; Gu, Yuantao
2016-03-01
Discovering the underlying structure of a high-dimensional signal or big data has always been a challenging topic, and it has become harder to tackle when the observations are exposed to arbitrary sparse perturbations. In this paper, built on the model of a union of subspaces (UoS) with sparse outliers and inspired by a basis pursuit strategy, we exploit the fundamental structure of a Grassmann manifold and propose a new technique for pursuing the subspaces systematically by solving a non-convex optimization problem using the alternating direction method of multipliers. As noted, this problem is further complicated by non-convex constraints on the Grassmann manifold, as well as by the bilinearity in the penalty caused by the subspace bases and coefficients. Nevertheless, numerical experiments verify that the proposed algorithm, which provides elegant solutions to the sub-problems in each step, is able to de-couple the subspaces and pursue each of them under time-efficient parallel computation.
NASA Astrophysics Data System (ADS)
Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping
2016-09-01
Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, owing to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high-spatial-resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture this local complex structure information. We therefore present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use a local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationships between pixels having similar local characteristics. Meanwhile, by integrating the LSS and LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
Conditional Subspace Clustering of Skill Mastery: Identifying Skills that Separate Students
ERIC Educational Resources Information Center
Nugent, Rebecca; Ayers, Elizabeth; Dean, Nema
2009-01-01
In educational research, a fundamental goal is identifying which skills students have mastered, which skills they have not, and which skills they are in the process of mastering. As the number of examinees, items, and skills increases, the estimation of even simple cognitive diagnosis models becomes difficult. We adopt a faster, simpler approach:…
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.
Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J
2018-02-15
Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm that prevents accurate estimation of the true number of brain-signal sources. The correction is made by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time courses, and the initial estimate of the signal-space dimension. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that, with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications.
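For orientation, a minimal sketch of the classical MUSIC scan that these methods refine (plain MUSIC, not TRAP-MUSIC; the covariance, leadfields, and subspace dimension are hypothetical inputs):

```python
import numpy as np

def music_scan(C, leadfields, dim):
    """Score each candidate source by how completely its (unit-norm) leadfield
    lies in the signal subspace spanned by the top-`dim` eigenvectors of the
    data covariance C (channels x channels); scores near 1 suggest a source."""
    w, V = np.linalg.eigh(C)
    Us = V[:, np.argsort(w)[::-1][:dim]]        # signal-subspace basis
    L = leadfields / np.linalg.norm(leadfields, axis=0)  # channels x candidates
    return np.sum((Us.T @ L) ** 2, axis=0)
```

Recursive variants repeat this scan after projecting out each found source; the truncation discussed above additionally shrinks the signal-subspace dimension at each recursion.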
Modulated Hebb-Oja learning rule--a method for principal subspace analysis.
Jankovic, Marko V; Ogawa, Hidemitsu
2006-03-01
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace. The analysis focuses on the principal component subspace. Compared with other well-known methods for computing the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has a feature that can be seen as desirable from a biological point of view: the synaptic-efficacy learning rule does not need explicit information about the values of the other efficacies to modify an individual efficacy. The simplicity of the "neural circuits" that perform the global computations, and the fact that their number does not depend on the number of input and output neurons, can also be seen as good features of the proposed method.
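For reference, one step of the baseline Oja subspace rule mentioned above, in a standard textbook form (not the MHO modification):

```python
import numpy as np

def oja_subspace_step(W, x, eta=0.01):
    """One update of Oja's subspace learning rule: W <- W + eta (x y^T - W y y^T)
    with y = W^T x; the columns of W converge to a basis of the principal
    subspace of the input covariance (without ordering the components)."""
    y = W.T @ x
    return W + eta * (np.outer(x, y) - W @ np.outer(y, y))
```

Note that this update uses the full output vector y for every efficacy, which is exactly the kind of global information the MHO modification seeks to avoid.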
Weinberg, Seth H.; Smith, Gregory D.
2012-01-01
Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10−17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers.
An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.
Liu, Jing; Huang, Kaiyu; Zhang, Guoxian
2017-04-20
We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts: the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set has no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation support set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinalities of the two sets; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
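The innovation-support step lends itself to a compact sketch: project the measurement onto the orthogonal complement of the estimated common-support columns, then run plain OMP on the projected quantities. This is a generic reading of that step, not the authors' code; names and the assumption of (near-)unit-norm columns are ours.

```python
import numpy as np

def innovation_support_omp(y, A, common_support, k_innov):
    """Project the measurement away from the common-support columns,
    then run plain OMP to pick the innovation support.

    y : (m,) measurement vector;  A : (m, n) deterministic sensing matrix
    (columns assumed approximately unit-norm for the argmax step)
    """
    Ac = A[:, common_support]
    P = np.eye(len(y)) - Ac @ np.linalg.pinv(Ac)   # projector onto complement
    yp, Ap = P @ y, P @ A                          # projected vector and matrix
    support, r = [], yp.copy()
    for _ in range(k_innov):
        j = int(np.argmax(np.abs(Ap.T @ r)))       # most correlated column
        support.append(j)
        As = Ap[:, support]
        x_ls, *_ = np.linalg.lstsq(As, yp, rcond=None)
        r = yp - As @ x_ls                         # refresh the residual
    return support
```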
NASA Astrophysics Data System (ADS)
Yaremchuk, Max; Martin, Paul; Beattie, Christopher
2017-09-01
Development and maintenance of the linearized and adjoint code for advanced circulation models is a challenging issue, requiring a significant proportion of total effort in operational data assimilation (DA). The ensemble-based DA techniques provide a derivative-free alternative, which appears to be competitive with variational methods in many practical applications. This article proposes a hybrid scheme for generating the search subspaces in the adjoint-free 4-dimensional DA method (a4dVar) that does not use a predefined ensemble. The method resembles 4dVar in that the optimal solution is strongly constrained by model dynamics and search directions are supplied iteratively using information from the current and previous model trajectories generated in the process of optimization. In contrast to 4dVar, which produces a single search direction from exact gradient information, a4dVar employs an ensemble of directions to form a subspace in order to proceed. In the earlier versions of a4dVar, search subspaces were built using the leading EOFs of either the model trajectory or the projections of the model-data misfits onto the range of the background error covariance (BEC) matrix at the current iteration. In the present study, we blend both approaches and explore a hybrid scheme of ensemble generation in order to improve the performance and flexibility of the algorithm. In addition, we introduce balance constraints into the BEC structure and periodically augment the search ensemble with BEC eigenvectors to avoid repeating minimization over already explored subspaces. Performance of the proposed hybrid a4dVar (ha4dVar) method is compared with that of standard 4dVar in a realistic regional configuration assimilating real data into the Navy Coastal Ocean Model (NCOM). It is shown that the ha4dVar converges faster than a4dVar and can be potentially competitive with 4dVar both in terms of the required computational time and the forecast skill.
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon, and clay soil properties were developed for the whole library and for the subspaces. The root mean square error of prediction, computed using a one-third-holdout validation set, was used to evaluate the predictive performance of the subspace and global models. The effect of pretreating spectra with different methods was tested for 1st- and 2nd-derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that global models outperformed the subspace models. We therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from the archetypal analysis local models were 50% poorer than those of the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
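Of the four subspace-finding methods, cosine-angle spectral matching is the simplest to sketch; the snippet below selects the library spectra closest in angle to a query spectrum, with the neighborhood size as an assumed tuning parameter.

```python
import numpy as np

def cosine_angle_subspace(query, library, n_neighbors=50):
    """Method (a): pick the library spectra with the smallest spectral angle
    to the query; these form the local calibration subspace.

    query : (p,) spectrum;  library : (n, p) spectral library
    """
    q = query / np.linalg.norm(query)
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    cos_sim = L @ q                              # cosine of the spectral angle
    return np.argsort(-cos_sim)[:n_neighbors]    # most similar spectra first
```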
Automated computation of autonomous spectral submanifolds for nonlinear modal analysis
NASA Astrophysics Data System (ADS)
Ponsioen, Sten; Pedergnana, Tiemo; Haller, George
2018-04-01
We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.
ERIC Educational Resources Information Center
Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.
2011-01-01
This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
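A generic wavelet pipeline in the spirit of the patent's first step, using PyWavelets to split an image into per-level detail "subspace images"; the wavelet family, level count, and reconstruction choice are our assumptions, not the patent's specification.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_subspace_images(image, wavelet="db4", levels=4):
    """Split an image into one 'subspace image' per resolution level by
    zeroing all other wavelet coefficients before reconstruction."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    subspaces = []
    for lev in range(1, len(coeffs)):
        kept = [np.zeros_like(coeffs[0])]          # drop the approximation
        for i, details in enumerate(coeffs[1:], start=1):
            if i == lev:
                kept.append(details)               # keep this level's details
            else:
                kept.append(tuple(np.zeros_like(d) for d in details))
        subspaces.append(pywt.waverec2(kept, wavelet))
    return subspaces
```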
Active Subspaces for Wind Plant Surrogate Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Ryan N; Quick, Julian; Dykes, Katherine L
Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
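The standard construction of an active subspace from sampled gradients is easy to sketch; here the gradient samples would be of plant power with respect to the axial induction factors, and the quadratic surrogate is fit in the single active variable (all names are ours).

```python
import numpy as np

def active_subspace(grads, k=1):
    """Estimate an active subspace from gradient samples of the quantity of
    interest (here: plant power w.r.t. turbine axial induction factors).

    grads : (n_samples, n_params) array of gradient samples
    """
    C = grads.T @ grads / grads.shape[0]   # gradient outer-product matrix
    w, V = np.linalg.eigh(C)               # eigenvalues in ascending order
    return w[::-1], V[:, -k:]              # spectrum and dominant direction(s)

# Usage sketch: reduce inputs to the single active variable and fit the
# quadratic surrogate mentioned in the abstract.
# eigvals, W1 = active_subspace(grads, k=1)
# t = X @ W1                                   # active-variable coordinates
# a, b, c = np.polyfit(t.ravel(), power, 2)    # quadratic surrogate
```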
An algorithm for separation of mixed sparse and Gaussian sources.
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as mixtures of unknown composition.
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection tends to merge those classes which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
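For the homoscedastic Gaussian setting the paper assumes, the pairwise KL divergence has a closed form, and criterion 1 evaluates its geometric mean for a candidate projection. A minimal sketch under that assumption (apply it to class means and the shared covariance already projected into the candidate subspace):

```python
import numpy as np
from itertools import combinations

def geometric_mean_kl(means, cov):
    """Geometric mean of pairwise KL divergences for Gaussian classes sharing
    one covariance: KL(i||j) = 0.5 * (mu_i - mu_j)^T Sigma^{-1} (mu_i - mu_j).
    Inputs are assumed already projected into a candidate subspace."""
    Sinv = np.linalg.inv(cov)
    kls = [0.5 * (mi - mj) @ Sinv @ (mi - mj)
           for mi, mj in combinations(means, 2)]
    return float(np.exp(np.mean(np.log(kls))))
```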
Normal forms for reduced stochastic climate models
Majda, Andrew J.; Franzke, Christian; Crommelin, Daan
2009-01-01
The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève, and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP
NASA Technical Reports Server (NTRS)
Gupta, V. K.; Zillmer, S. D.; Allison, R. E.
1986-01-01
Better models, more accurate and faster algorithms, and large-scale computing improve cost effectiveness and enable more representative dynamic analyses. The band Lanczos eigensolution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis computer program to effectively solve for structural vibration modes, including those of large complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were re-orthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method, for sweeping rigid-body modes and previously generated modes and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost effectiveness and accuracy of the method as implemented in Rockwell NASTRAN. The CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate and converged faster than the single-vector Lanczos method. The band Lanczos method was comparable to the subspace iteration method, which is a block version of the inverse power method; however, the subspace matrix tended to be fully populated in the case of subspace iteration, and not as sparse as a band matrix.
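Modern sparse-eigensolver libraries expose the same Lanczos machinery; below is a sketch of solving a generalized vibration eigenproblem K x = λ M x with SciPy's ARPACK-based (Lanczos-type) eigsh, on random stand-in matrices rather than a NASTRAN model.

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import eigsh

# Stand-in stiffness and mass matrices (not a NASTRAN model):
n = 2000
A = sprandom(n, n, density=1e-3, random_state=0)
K = A + A.T + 10.0 * identity(n)        # symmetric, diagonally dominant
M = identity(n, format="csc")

# eigsh wraps an ARPACK Lanczos-type iteration; shift-invert around
# sigma=0 returns the lowest modes of K x = lambda M x.
vals, vecs = eigsh(K, k=10, M=M, sigma=0, which="LM")
freqs = np.sqrt(np.abs(vals)) / (2.0 * np.pi)   # natural frequencies in Hz
```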
Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images
NASA Astrophysics Data System (ADS)
Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang
2016-10-01
Considering the inevitable obstacles faced by the pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.
A Model Comparison for Characterizing Protein Motions from Structure
NASA Astrophysics Data System (ADS)
David, Charles; Jacobs, Donald
2011-10-01
A comparative study is made using three computational models that characterize native state dynamics starting from known protein structures taken from four distinct SCOP classifications. A geometrical simulation is performed, and the results are compared to the elastic network model and molecular dynamics. The essential dynamics is quantified by a direct analysis of a mode subspace constructed from ANM and a principal component analysis on both the FRODA and MD trajectories, using the root mean square inner product and principal angles. Relative subspace sizes and overlaps are visualized using the projection of displacement vectors on the model modes. Additionally, a mode subspace is constructed from PCA on an exemplar set of X-ray crystal structures in order to determine similarity with respect to the generated ensembles. Quantitative analysis reveals there is significant overlap across the three model subspaces and the model-independent subspace. These results indicate that structure is the key determinant for native state dynamics.
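Both subspace-comparison metrics named above are short computations given orthonormal mode matrices; a sketch using SciPy's principal-angle routine (shapes and names are our assumptions):

```python
import numpy as np
from scipy.linalg import subspace_angles

def compare_mode_subspaces(U, V):
    """Principal angles and root mean square inner product (RMSIP) between
    two essential-dynamics subspaces with orthonormal columns (n_dof, k)."""
    angles = subspace_angles(U, V)                        # radians, largest first
    rmsip = np.sqrt(np.sum((U.T @ V) ** 2) / U.shape[1])  # 1 = identical spans
    return angles, rmsip
```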
Zeno subspace in quantum-walk dynamics
NASA Astrophysics Data System (ADS)
Chandrashekar, C. M.
2010-11-01
We investigate discrete-time quantum-walk evolution under the influence of periodic measurements in the position subspace. The undisturbed survival probability of the particle in the position subspace, P(0,t), is compared with the survival probability after frequent (n) measurements at interval τ = t/n, [P(0,τ)]^n. We show that [P(0,τ)]^n > P(0,t) leads to the quantum Zeno effect in the position subspace when a parameter θ in the quantum coin operations and the frequency of measurements exceed their critical values, θ > θ_c and n > n_c. This Zeno effect in the subspace preserves the dynamics in the coin Hilbert space of the walk and has the potential to play a significant role in quantum tasks such as preserving the quantum state of the particle at any particular position, and in understanding Zeno dynamics in a multidimensional system that is highly transient in nature.
Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard
2015-05-26
The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios.
Mining subspace clusters from DNA microarray data using large itemset techniques.
Chang, Ye-In; Chen, Jiun-Rung; Tsai, Yueh-Chi
2009-05-01
Mining subspace clusters from DNA microarrays could help researchers identify those genes which commonly contribute to a disease, where a subspace cluster indicates a subset of genes whose expression levels are similar under a subset of conditions. Since in a DNA microarray the number of genes is far larger than the number of conditions, previously proposed algorithms which compute the maximum dimension sets (MDSs) for any two genes take a long time to mine subspace clusters. In this article, we propose the Large Itemset-Based Clustering (LISC) algorithm for mining subspace clusters. Instead of constructing MDSs for any two genes, we construct only MDSs for any two conditions. Then, we transform the task of finding the maximal possible gene sets into the problem of mining large itemsets from the condition-pair MDSs. Since we are only interested in those subspace clusters with gene sets as large as possible, it is desirable to pay attention to those gene sets which have reasonably large support values in the condition-pair MDSs. Our simulation results show that the proposed algorithm needs shorter processing time than the previously proposed algorithms which need to construct gene-pair MDSs.
Active Subspaces of Airfoil Shape Parameterizations
NASA Astrophysics Data System (ADS)
Grey, Zachary J.; Constantine, Paul G.
2018-05-01
Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
Adiabatic evolution of decoherence-free subspaces and its shortcuts
NASA Astrophysics Data System (ADS)
Wu, S. L.; Huang, X. L.; Li, H.; Yi, X. X.
2017-10-01
The adiabatic theorem and shortcuts to adiabaticity for time-dependent open quantum systems are explored in this paper. Starting from the definition of a dynamically stable decoherence-free subspace, we show that, under a compact adiabatic condition, the quantum state remains in the time-dependent decoherence-free subspace with extremely high purity, even though the dynamics of the open quantum system may not be adiabatic. The adiabatic condition mentioned here in the adiabatic theorem for open systems is very similar to that for closed quantum systems, except that the operators required to change slowly are the Lindblad operators. We also show that the adiabatic evolution of decoherence-free subspaces depends on the existence of instantaneous decoherence-free subspaces, which requires that the Hamiltonian of the open quantum system be engineered according to the incoherent control protocol. In addition, shortcuts to adiabaticity for adiabatic decoherence-free subspaces are presented based on the transitionless quantum driving method. Finally, we provide an example consisting of a two-level system coupled to a broadband squeezed vacuum field to illustrate our theory. Our approach employs Markovian master equations, and the theory applies to finite-dimensional open quantum systems.
Improving deep convolutional neural networks with mixed maxout units
Liu, Fu-xian; Li, Long-yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
Yu, Hualong; Ni, Jun
2014-01-01
Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data is simultaneously high-dimensional. Skewed data often appear in the biomedical field. In this study, we address this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of Accuracy, F-measure, G-mean and AUC evaluation criteria; thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
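A generic random-subspace bagging ensemble with SVM base learners can be assembled from scikit-learn primitives (version ≥ 1.2 assumed for the estimator keyword); note this is ordinary random-subspace bagging for illustration, not the paper's FSS balancing strategy or its asymmetric resampling.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

# Plain random-subspace bagging with SVM base learners; FSS-style balancing
# of accuracy vs. diversity is not part of this sketch.
clf = BaggingClassifier(
    estimator=SVC(kernel="rbf"),   # SVM as the base classifier
    n_estimators=30,
    max_samples=0.5,               # subsample rows (cf. asymmetric bagging)
    max_features=0.3,              # each learner sees a random feature subspace
    bootstrap=True,
    bootstrap_features=False,
    random_state=0,
)
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```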
Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization
NASA Astrophysics Data System (ADS)
Zhang, Tao; Tang, Zhenmin; Liu, Qing
2017-05-01
Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves the low-rank property via a new formulation with a weighted Schatten-p norm and an Lq norm (WSPQ). Specifically, the nuclear norm is generalized to the Schatten-p norm and different weights are assigned to the singular values, so that the rank function can be approximated more accurately. In addition, the Lq norm is further incorporated into WSPQ to model different noises and improve robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Nadeau-Beaulieu, Michel
In this thesis, three mathematical models are built from flight test data for different aircraft design applications: a ground dynamics model for the Bell 427 helicopter, a prediction model for the rotor and engine parameters for the same helicopter type, and a simulation model for the aeroelastic deflections of the F/A-18. In the ground dynamics application, the model structure is derived from physics, where the normal force between the helicopter and the ground is modelled as a vertical spring and the frictional force is modelled with static and dynamic friction coefficients. The ground dynamics model coefficients are optimized to ensure that the model matches the landing data within the FAA (Federal Aviation Administration) tolerance bands for a level D flight simulator. In the rotor and engine application, the main and tail rotor torques, the engine torque, and the main rotor speed are estimated using a state-space model. The model inputs are nonlinear terms derived from the pilot control inputs and the helicopter states. The model parameters are identified using the subspace method and are further optimized with the Levenberg-Marquardt minimization algorithm. The model built with the subspace method provides an excellent estimate of the outputs within the FAA tolerance bands. The F/A-18 aeroelastic state-space model is built from flight test data. The research concerning this model is divided into two parts. First, the deflection of a given structural surface on the aircraft following a differential aileron control input is represented by a Multiple-Input Single-Output linear model whose inputs are the aileron positions and the structural surface deflections. Second, a single state-space model is used to represent the deflection of the aircraft wings and trailing-edge flaps following any control input. In this case the model is made nonlinear by multiplying model inputs into higher-order terms and using these terms as the inputs of the state-space equations. In both cases, the identification method is the subspace method. Most fit coefficients between the estimated and the measured signals are above 73%, and most correlation coefficients are higher than 90%.
Pigments identification of paintings using subspace distance unmixing algorithm
NASA Astrophysics Data System (ADS)
Li, Bin; Lyu, Shuqiang; Zhang, Dafeng; Dong, Qinghao
2018-04-01
In the digital protection of cultural relics, identification of the pigment mixtures on the surface of a painting has been a research focus for many years. In this paper, a hyperspectral unmixing algorithm, subspace distance unmixing, is introduced to solve the problem of recognizing pigment mixtures in paintings. First, mixtures of different pigments are prepared and their reflectance spectra are measured with a spectrometer. The factors affecting the unmixing accuracy of the pigment mixtures are then discussed. The unmixing results of two cases, with and without the rice paper and its underlay as endmembers, are compared. The experimental results show that the algorithm is able to unmix the pigments effectively and that the unmixing accuracy can be improved by taking into account the spectra of the rice paper and the underlying material.
A new method to real-normalize measured complex modes
NASA Technical Reports Server (NTRS)
Wei, Max L.; Allemang, Randall J.; Zhang, Qiang; Brown, David L.
1987-01-01
A time domain subspace iteration technique is presented to compute a set of normal modes from measured complex modes. Using the proposed method, a large number of physical coordinates are reduced to a smaller number of modal or principal coordinates. Subspace free-decay time responses are computed using properly scaled complex modal vectors. The companion matrix for the general case of nonproportional damping is then derived in the selected vector subspace. Subspace normal modes are obtained through the eigenvalue solution of the M_N⁻¹K_N matrix and transformed back to the physical coordinates to obtain a set of normal modes. A numerical example is presented to demonstrate the outlined theory.
Manifold learning-based subspace distance for machinery damage assessment
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang
2016-03-01
Damage assessment is very meaningful for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on vibration signals. To calculate the index, a vibration signal is collected first, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is utilized to decompose the feature matrix into a subspace, that is, a manifold subspace. The manifold learning algorithm seeks to keep the local relationships of the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as the damage index. The Grassmann distance, which reflects the manifold structure, is a suitable metric for measuring the distance between subspaces in the manifold. The defined damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
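Given orthonormal bases for two manifold subspaces, the Grassmann-distance damage index reduces to the norm of the principal-angle vector; a minimal sketch (the basis construction from the manifold learning step is assumed done elsewhere):

```python
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_distance(U, V):
    """Damage index: Grassmann (geodesic) distance between subspaces spanned
    by the orthonormal columns of U and V, i.e. the 2-norm of the vector of
    principal angles."""
    return float(np.sqrt(np.sum(subspace_angles(U, V) ** 2)))
```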
NASA Astrophysics Data System (ADS)
Suzuki, Akito
2008-04-01
We study a model of the quantized electromagnetic field interacting with an external static source ρ in the Feynman (Lorentz) gauge and construct the quantized radiation field A_μ (μ = 0,1,2,3) as an operator-valued distribution acting on the Fock space F with an indefinite metric. By using the Gupta subsidiary condition ∂^μ A_μ^(+)(x) Ψ = 0, one can select the physical subspace V_phys. According to the Gupta-Bleuler formalism, V_phys is a non-negative subspace, so that elements of V_phys, called physical states, are probabilistically interpretable. Indeed, assuming that the external source ρ is infrared regular, i.e., ρ̂/|k|^(3/2) ∈ L²(ℝ³), we can characterize the physical subspace V_phys and show that V_phys is non-negative. In addition, we find that the Hamiltonian of the model reduces to the Hamiltonian of the transverse photons with the Coulomb interaction. We prove, however, that the physical subspace is trivial, i.e., V_phys = {0}, if and only if the external source ρ is infrared singular, i.e., ρ̂/|k|^(3/2) ∉ L²(ℝ³). We also discuss a representation different from the above in which the physical subspace is not trivial under the infrared singular condition.
Morishita, Tetsuya; Yonezawa, Yasushige; Ito, Atsushi M
2017-07-11
Efficient and reliable estimation of the mean force (MF), the derivatives of the free energy with respect to a set of collective variables (CVs), has been a challenging problem because free energy differences are often computed by integrating the MF. Among various methods for computing free energy differences, logarithmic mean-force dynamics (LogMFD) [Morishita et al., Phys. Rev. E 2012, 85, 066702] invokes the conservation law in classical mechanics to integrate the MF, which allows us to estimate the free energy profile along the CVs on-the-fly. Here, we present a method called parallel dynamics, which improves the estimation of the MF by employing multiple replicas of the system and is straightforwardly incorporated in LogMFD or a related method. In the parallel dynamics, the MF is evaluated by a nonequilibrium path-ensemble using the multiple replicas based on the Crooks-Jarzynski nonequilibrium work relation. Thanks to the Crooks relation, realizing full-equilibrium states is no longer mandatory for estimating the MF. Additionally, sampling in the hidden subspace orthogonal to the CV space is highly improved with appropriate weights for each metastable state (if any), which is hardly achievable by typical free energy computational methods. We illustrate how to implement parallel dynamics by combining it with LogMFD, which we call logarithmic parallel dynamics (LogPD). Biosystems of alanine dipeptide and adenylate kinase in explicit water are employed as benchmark systems to which LogPD is applied to demonstrate the effect of multiple replicas on the accuracy and efficiency in estimating the free energy profiles using parallel dynamics.
2017-09-27
Excluding Noise from Short Krylov Subspace Approximations to the Truncated Singular Value Decomposition (ARL-TR-8161, US Army Research Laboratory, Aberdeen Proving Ground, MD).
Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery
2016-09-27
The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world applications. Keywords: signal recovery, sparse learning, subspace modeling. (US Army Research Office, Research Triangle Park, NC.)
Zeba, Augustin Nawidimbasba; Yaméogo, Marceline Téné; Tougouma, Somnoma Jean-Baptiste; Kassié, Daouda; Fournet, Florence
2017-01-01
Background: Unplanned urbanization plays a key role in chronic disease growth. This population-based cross-sectional study assessed the occurrence of cardiometabolic risk factors in Bobo-Dioulasso and their association with urbanization conditions. Methods: Through spatial sampling, four Bobo-Dioulasso sub-spaces were selected for a population survey to measure the adult health status. Yéguéré, Dogona, Tounouma and Secteur 25 had very different urbanization conditions (position within the city; time of creation and healthcare structure access). The sample size was estimated at 1000 households (250 for each sub-space) in which one adult (35 to 59-year-old) was randomly selected. Finally, 860 adults were surveyed. Anthropometric, socioeconomic and clinical data were collected. Arterial blood pressure was measured and blood samples were collected to assess glycemia. Results: Weight, body mass index and waist circumference (mean values) and serum glycemia (83.4 mg/dL ± 4.62 mmol/L) were significantly higher in Tounouma, Dogona, and Secteur 25 than in Yéguéré; the poorest and most rural-like sub-space (p = 0.001). Overall, 43.2%, 40.5%, 5.3% and 60.9% of participants had overweight, hypertension, hyperglycemia and one or more cardiometabolic risk markers, respectively. Conclusions: Bobo-Dioulasso is unprepared to face this public health issue and urgent responses are needed to reduce the health risks associated with unplanned urbanization. PMID:28375173
NASA Technical Reports Server (NTRS)
Erickson, Gary E.; Deloach, Richard
2008-01-01
A collection of statistical and mathematical techniques referred to as response surface methodology was used to estimate the longitudinal stage separation aerodynamic characteristics of a generic, bimese, winged multi-stage launch vehicle configuration using data obtained on small-scale models at supersonic speeds in the NASA Langley Research Center Unitary Plan Wind Tunnel. The simulated Mach 3 staging was dominated by multiple shock wave interactions between the orbiter and booster vehicles throughout the relative spatial locations of interest. This motivated a partitioning of the overall inference space into several contiguous regions within which the separation aerodynamics were presumed to be well-behaved and estimable using cuboidal and spherical central composite designs capable of fitting full second-order response functions. The primary goal was to approximate the underlying overall aerodynamic response surfaces of the booster vehicle in belly-to-belly proximity to the orbiter vehicle using relatively simple, lower-order polynomial functions that were piecewise-continuous across the full independent variable ranges of interest. The quality of fit and prediction capabilities of the empirical models were assessed in detail, and the issue of subspace boundary discontinuities was addressed. The potential benefits of augmenting the central composite designs to full third order using computer-generated D-optimality criteria were also evaluated. The usefulness of central composite designs, the subspace sizing, and the practicality of fitting low-order response functions over a partitioned inference space dominated by highly nonlinear and possibly discontinuous shock-induced aerodynamics are discussed.
Ambient Vibration Testing for Story Stiffness Estimation of a Heritage Timber Building
Min, Kyung-Won; Kim, Junhee; Park, Sung-Ah; Park, Chan-Soo
2013-01-01
This paper investigates the dynamic characteristics of a historic wooden structure by ambient vibration testing, presenting a novel story stiffness estimation methodology for the purpose of vibration-based structural health monitoring. For the ambient vibration testing, measured structural responses are analyzed by two output-only system identification methods (i.e., frequency domain decomposition and stochastic subspace identification) to estimate modal parameters. The proposed story stiffness estimation methodology is based on an eigenvalue problem derived from a vibratory rigid body model. Using the identified natural frequencies, the eigenvalue problem is efficiently solved and uniquely yields the story stiffness. It is noteworthy that application of the proposed methodology is not necessarily confined to the wooden structure exemplified in the paper. PMID:24227999
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurate for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
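A schematic reading of the proposed splitting, assuming snapshots of heads and a matrix B whose rows encode the boundary DOFs: build an orthonormal basis for the boundary subspace, deflate it from the snapshots, and take POD modes of the remainder. This is our interpretation for illustration, not the authors' implementation.

```python
import numpy as np

def pod_with_bc_split(snapshots, B, r):
    """Combined basis: an exact boundary-condition subspace plus POD modes of
    the snapshots deflated against it.

    snapshots : (n_nodes, n_times) head-field snapshots
    B         : (n_bc, n_nodes) rows encoding the boundary DOFs/conditions
    r         : number of interior POD modes to keep
    """
    Qb, _ = np.linalg.qr(B.T)                     # orthonormal boundary basis
    S_perp = snapshots - Qb @ (Qb.T @ snapshots)  # deflate the BC subspace
    U, s, _ = np.linalg.svd(S_perp, full_matrices=False)
    return np.hstack([Qb, U[:, :r]])              # combined projection basis
```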
A comparative intelligibility study of single-microphone noise reduction algorithms.
Hu, Yi; Loizou, Philipos C
2007-09-01
An evaluation of the intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise, including babble, car, street and train, at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical model-based, and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
Subspace Methods for Massive and Messy Data
2017-07-12
The views, opinions and/or findings contained in this report are those of the author(s). (Award W911NF-14-1-0634, University of Michigan - Ann Arbor; US Army Research Office, Research Triangle Park, NC.)
Geometry aware Stationary Subspace Analysis
2016-11-22
A common approach to handling non-stationarity is to remove or minimize it before attempting to analyze the data. In the context of brain computer interface (BCI) data analysis, two such noteworthy methods are stationary subspace analysis (SSA) (von Bünau et al., 2009a) and sCSP, whose goal is to project the data onto a subspace in which the various data classes are more separable.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: ℂ → ℂ^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N × N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory suggests at the same time a new mode of usage for these Krylov subspace methods that were observed to possess computational advantages over their common mode of usage.
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods.
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g., neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends them toward coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, used, and compared to the performance of the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT, COBRA-TF, ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes, and predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further reduce the uncertainty associated with these sources.
In this dissertation, a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
Entanglement dynamics of coupled qubits and a semi-decoherence free subspace
NASA Astrophysics Data System (ADS)
Campagnano, Gabriele; Hamma, Alioscia; Weiss, Ulrich
2010-01-01
We study the entanglement dynamics and relaxation properties of a system of two interacting qubits in the cases of (I) two independent bosonic baths and (II) one common bath. We find that in case (II) the existence of a decoherence-free subspace (DFS) makes the entanglement dynamics very rich. We show that when the system is initially in a state with a component in the DFS, the relaxation time is surprisingly long, showing the existence of semi-decoherence-free subspaces.
NASA Astrophysics Data System (ADS)
Zhu, L.; Li, Z.; Li, C.; Wang, B.; Chen, Z.; McClellan, J. H.; Peng, Z.
2017-12-01
The spatial-temporal evolution of aftershocks is important for illuminating earthquake physics and for rapid response to devastating earthquakes. To improve aftershock catalogs of the 2008 MW7.9 Wenchuan earthquake in Sichuan, China, Alibaba cloud and the China Earthquake Administration jointly launched a seismological contest in May 2017 [Fang et al., 2017]. This abstract describes how we handled this problem in the competition. We first used Short-Term Average/Long-Term Average (STA/LTA) and Kurtosis functions to obtain over 55,000 candidate phase picks (P or S). Based on Signal to Noise Ratio (SNR), about 40,000 phases (P or S) were selected. So far, these 40,000 phases have a hit rate of 40% among the manual picks. The causes include that 1) there exist false picks (neither P nor S), and 2) some P and S arrivals are mis-labeled. To improve our results, we correlate the 40,000 phases over continuous waveforms to obtain the phases missed during the first pass. This results in 120,000 events. After constructing an affinity matrix based on the cross-correlations of the newly detected phases, subspace clustering methods [Vidal 2011] are applied to group those phases into separate subspaces. Initial results show good agreement between empirical and clustered labels of P phases. Half of the empirical S phases are clustered into the P phase cluster. This may be a combined effect of 1) mislabeling isolated P phases as S phases and 2) clustering errors due to a small, incomplete sample pool. Phases that were falsely detected in the initial results can also be teased out. To better characterize P and S phases, our next step is to apply subspace clustering methods directly to the waveforms, instead of using the cross-correlation coefficients of detected phases. After that, supervised learning, e.g., a convolutional neural network, can be employed to improve pick accuracy. Updated results will be presented at the meeting.
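For reference, the STA/LTA characteristic function used for the first-pass picking is straightforward to compute. Below is a hedged numpy sketch; the window lengths, threshold, and function name are illustrative assumptions, not the authors' values.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
    """Classic STA/LTA characteristic function on a 1-D seismic trace.

    trace : waveform samples; fs : sampling rate in Hz.
    Returns the ratio of short-term to long-term average signal energy.
    """
    e = trace.astype(float) ** 2                 # instantaneous energy
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[ns:] - csum[:-ns]) / ns          # moving averages via cumulative sums
    lta = (csum[nl:] - csum[:-nl]) / nl
    m = min(len(sta), len(lta))
    return sta[-m:] / (lta[-m:] + 1e-12)         # align at the end, avoid divide-by-zero

# a candidate pick is declared where the ratio crosses a threshold, e.g. > 4.0
```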
Krylov subspace methods on supercomputers
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
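As a concrete baseline for the methods surveyed, a textbook conjugate gradient iteration for a symmetric positive definite system looks as follows; this is a generic sketch, and the survey's own implementations are vectorized and preconditioned variants of this loop.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, maxiter=1000):
    """Textbook CG for a symmetric positive definite A (anything supporting A @ x)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p      # A-conjugate update of the direction
        rs = rs_new
    return x
```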
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach, based on fitness measures that constitute an a priori gauging of classification efficacy for each subspace, is investigated. Methods for generating fitness measures, generating input subspaces, and using them in the multiclassifier fusion architecture are presented. In particular, a two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. The individual-subspace classifiers are Support Vector Machine (SVM) based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
Zeng, Xing; Chen, Cheng; Wang, Yuanyuan
2012-12-01
In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with a Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter, so that the output power of the new beamformer becomes closer to the actual signal power at the imaging point than with the ESBMV beamformer alone. Different from the ordinary Wiener postfilter, the output signal and noise powers needed in calculating the Wiener postfilter are estimated from the orthogonal signal and noise subspaces constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and a cyst phantom using both simulated and experimental data, and compare it with the delay-and-sum (DAS), minimum variance (MV) and ESBMV beamformers. We use the full width at half maximum (FWHM) and the peak side lobe level (PSL) to quantify imaging resolution, and the contrast ratio (CR) to quantify imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformers, while the PSL is 127.2 dB, 115 dB and 60 dB lower. Moreover, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformers, respectively. In addition, the effect of sound speed errors is investigated by artificially overestimating the speed used in calculating the propagation delay, and the results show that the new beamformer provides better robustness against sound speed errors. Therefore, the proposed beamformer offers better performance than the DAS, MV and ESBMV beamformers, showing its potential in medical ultrasound imaging.
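The numpy sketch below shows one plausible reading of the construction: MV weights projected onto the signal subspace of the sample covariance, then scaled by a Wiener postfilter whose signal and noise powers come from the signal and noise subspaces. The eigenvalue threshold delta and all names are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def esbmv_wiener(R, a, delta=0.5):
    """Sketch of an eigenspace-based MV beamformer with a Wiener postfilter.

    R : sample covariance matrix (M x M), a : steering vector (M,)
    delta : eigenvalue threshold relative to the largest eigenvalue
    """
    Ri_a = np.linalg.solve(R, a)
    w_mv = Ri_a / (a.conj() @ Ri_a)            # minimum variance weights
    lam, E = np.linalg.eigh(R)                 # eigenvalues in ascending order
    sig = lam > delta * lam[-1]                # signal subspace selection
    Es, En = E[:, sig], E[:, ~sig]
    w_esb = Es @ (Es.conj().T @ w_mv)          # project MV weights on the signal subspace
    # Wiener postfilter from subspace power estimates (one plausible variant)
    Ps = np.real(w_esb.conj() @ (Es * lam[sig]) @ (Es.conj().T @ w_esb))
    Pn = np.real(w_esb.conj() @ (En * lam[~sig]) @ (En.conj().T @ w_esb))
    return (Ps / (Ps + Pn + 1e-30)) * w_esb
```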
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druskin, V.; Lee, Ping; Knizhnerman, L.
There is now a growing interest in the area of using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. In the event that the cost of computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, spanned by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
NASA Astrophysics Data System (ADS)
Roy, Satadru
Traditional approaches to design and optimize a new system often use a system-centric objective and do not take into consideration how the operator will use this new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use it along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that can simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which is extremely difficult to solve; the presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that previous decomposition approaches may not have exploited, addresses mixed-integer/discrete design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. The approach is demonstrated on an 11-route airline network problem consisting of 94 decision variables, including 33 integer and 61 continuous variables. This application problem is representative of an interacting group of systems and provides key challenges to the optimization framework, as reflected by the presence of a moderate number of integer and continuous design variables and expensive analysis tools. The results indicate that simultaneously solving the subspaces can lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore the performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems including categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
2016-04-13
One of the most widely used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE; furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
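For orientation, the non-Bayesian (plain SVD) version of the KLE that the paper builds on can be sketched in a few lines of numpy; the Bayesian machinery (matrix Bingham posterior, Gibbs sampling) replaces the point estimate below with a distribution over bases.

```python
import numpy as np

def kle_from_samples(Y, r):
    """Empirical Karhunen-Loeve expansion of sample paths.

    Y : (n_samples, n_points) realizations of the process
    r : number of retained modes
    Returns the mean, the KL modes (n_points x r), and per-sample coefficients.
    """
    mu = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    modes = Vt[:r].T                   # orthonormal basis functions
    coeffs = U[:, :r] * s[:r]          # projection of each sample onto the basis
    return mu, modes, coeffs

# truncated reconstruction: Y ~ mu + coeffs @ modes.T, optimal in mean square
```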
Wang, Guiming; Hobbs, N Thompson; Galbraith, Hector; Giesen, Kenneth M
2002-09-01
Global climate change may impact wildlife populations by affecting local weather patterns, which, in turn, can impact a variety of ecological processes. However, it is not clear that local variations in ecological processes can be explained by large-scale patterns of climate. The North Atlantic Oscillation (NAO) is a large-scale climate phenomenon that has been shown to influence the population dynamics of some animals. Although effects of the NAO on vertebrate population dynamics have been studied, it remains uncertain whether it broadly predicts the impact of weather on species. We examined the ability of local weather data and the NAO to explain the annual variation in population dynamics of white-tailed ptarmigan (Lagopus leucurus) in Rocky Mountain National Park, USA. We performed canonical correlation analysis on the demographic subspace of ptarmigan and the local-climate subspace defined by the empirical orthogonal function (EOF), using data from 1975 to 1999. We found that the two subspaces were significantly correlated on the first canonical variable. The Pearson correlation coefficient of the first EOF values of the demographic and local-climate subspaces was significant. The population density and the first EOF of the local-climate subspace influenced the ptarmigan population with 1-year lags in the Gompertz model. However, the NAO index was related neither to the first two EOFs of the local-climate subspace nor to the first EOF of the demographic subspace of ptarmigan. Moreover, the NAO index was not a significant term in the Gompertz model for the ptarmigan population. Therefore, local climate had a stronger signature on the demography of ptarmigan than did a large-scale index, i.e., the NAO index. We conclude that local responses of wildlife populations to changing climate may not be adequately explained by models that project large-scale climatic patterns.
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liu, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be solved incrementally by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform a theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods, including batch LRR, and significantly outperforms state-of-the-art online methods.
Detecting coupled collective motions in protein by independent subspace analysis
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Joti, Yasumasa; Kitao, Akio
2010-11-01
Protein dynamics evolves in a high-dimensional space comprising anharmonic, strongly correlated motional modes. Such correlation often plays an important role in analyzing protein function. In order to identify significantly correlated collective motions, we employ independent subspace analysis based on the subspace joint approximate diagonalization of eigenmatrices algorithm for the analysis of molecular dynamics (MD) simulation trajectories. From a 100 ns MD simulation of T4 lysozyme, we extract several independent subspaces, in each of which collective modes are significantly correlated, and identify the other modes as independent. This method successfully detects the modes along which long-tailed non-Gaussian probability distributions are obtained. Based on the time cross-correlation analysis, we identified a series of events among domain motions and more localized motions in the protein, indicating a connection between functionally relevant phenomena that have been independently revealed by experiments.
Independence and totalness of subspaces in phase space methods
NASA Astrophysics Data System (ADS)
Vourdas, A.
2018-04-01
The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.
Measuring glomerular number from kidney MRI images
NASA Astrophysics Data System (ADS)
Thiagarajan, Jayaraman J.; Natesan Ramamurthy, Karthikeyan; Kanberoglu, Berkay; Frakes, David; Bennett, Kevin; Spanias, Andreas
2016-03-01
Measuring the glomerular number in the entire, intact kidney using non-destructive techniques is of immense importance in studying several renal and systemic diseases. Commonly used approaches either require destruction of the entire kidney or perform extrapolation from measurements obtained from a few isolated sections. A recent magnetic resonance imaging (MRI) method, based on the injection of a contrast agent (cationic ferritin), has been used to effectively identify glomerular regions in the kidney. In this work, we propose a robust, accurate, and low-complexity method for estimating the number of glomeruli from such kidney MRI images. The proposed technique has a training phase and a low-complexity testing phase. In the training phase, organ segmentation is performed on a few expert-marked training images, and glomerular and non-glomerular image patches are extracted. Using non-local sparse coding to compute similarity and dissimilarity graphs between the patches, the subspace in which the glomerular regions can be discriminated from the rest are estimated. For novel test images, the image patches extracted after pre-processing are embedded using the discriminative subspace projections. The testing phase is of low computational complexity since it involves only matrix multiplications, clustering, and simple morphological operations. Preliminary results with MRI data obtained from five kidneys of rats show that the proposed non-invasive, low-complexity approach performs comparably to conventional approaches such as acid maceration and stereology.
Target detection using the background model from the topological anomaly detection algorithm
NASA Astrophysics Data System (ADS)
Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.
2013-05-01
The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene and computes an anomalousness ranking for all of the pixels in the image with respect to the background in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups or connected components, which can be representative of spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD in target detection is explored in this paper. These connected components are characterized in three different approaches, where the mean signature and endmembers for each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and the Adaptive Subspace Detector (ASD). Likewise, the covariance matrix of those connected components is estimated and used in the Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE) detectors. The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind test target detection project are shown.
Source counting in MEG neuroimaging
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.
2009-02-01
Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including inverse problem methods, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and Independent Component Analysis (ICA). A key problem with the inverse problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources is ultimately made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented using information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not rely on any model or hypothesis and is therefore an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
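A standard instance of such an information theoretic partition is the Wax-Kailath MDL criterion on the sorted eigenvalues; the sketch below is a generic illustration of the idea, not the authors' exact code.

```python
import numpy as np

def count_sources_mdl(R, n_snapshots):
    """Estimate the number of sources from covariance eigenvalues via an MDL
    criterion (Wax-Kailath style; a sketch of the eigenstructure approach)."""
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending eigenvalues
    p, N = len(lam), n_snapshots
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                           # putative noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))      # geometric mean
        ari = np.mean(tail)                      # arithmetic mean
        mdl[k] = -N * (p - k) * np.log(geo / ari) + 0.5 * k * (2 * p - k) * np.log(N)
    return int(np.argmin(mdl))                   # order of the signal subspace
```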
An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising
Guo, Muran; Chen, Tao; Wang, Ben
2017-01-01
Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886
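For context, the subspace step that such coarray pipelines feed into is ordinary MUSIC on a (smoothed or interpolated) covariance matrix; a minimal numpy sketch for a half-wavelength uniform linear array follows (grid size and names are illustrative).

```python
import numpy as np

def music_spectrum(R, n_sources, n_grid=360):
    """MUSIC pseudospectrum for a uniform linear array with half-wavelength spacing."""
    M = R.shape[0]
    lam, E = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = E[:, : M - n_sources]                   # noise subspace eigenvectors
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    P = np.empty(n_grid)
    for i, th in enumerate(thetas):
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(th))   # steering vector
        v = En.conj().T @ a
        P[i] = 1.0 / np.real(v.conj() @ v)       # peaks indicate source directions
    return thetas, P
```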
Network-based de-noising improves prediction from microarray data.
Kato, Tsuyoshi; Murata, Yukio; Miura, Koh; Asai, Kiyoshi; Horton, Paul B; Tsuda, Koji; Fujibuchi, Wataru
2006-03-20
Prediction of human cell response to anti-cancer drugs (compounds) from microarray data is a challenging problem, due to the noise properties of microarrays as well as the high variance of living cell responses to drugs. Hence there is a strong need for more practical and robust methods than standard methods for real-value prediction. We devised an extended version of the off-subspace noise-reduction (de-noising) method to incorporate heterogeneous network data, such as sequence similarity or protein-protein interactions, into a single framework. Using that method, we first de-noise the gene expression data for the training and test sets, as well as the drug-response data for the training set. Then we predict the unknown responses of each drug from the de-noised input data. To ascertain whether de-noising improves prediction, we carry out 12-fold cross-validation, using Pearson's correlation coefficient between the true and predicted response values as the measure of prediction performance. De-noising improves the prediction performance for 65% of drugs. Furthermore, we found that this noise reduction method is robust and effective even when a large amount of artificial noise is added to the input data. We found that our extended off-subspace noise-reduction method combining heterogeneous biological data is successful and quite useful for improving prediction of human cell cancer drug responses from microarray data.
Balasubramanian, Madhusudhanan; Žabić, Stanislav; Bowd, Christopher; Thompson, Hilary W.; Wolenski, Peter; Iyengar, S. Sitharama; Karki, Bijaya B.; Zangwill, Linda M.
2009-01-01
Glaucoma is the second leading cause of blindness worldwide. Often the optic nerve head (ONH) glaucomatous damage and ONH changes occur prior to visual field loss and are observable in vivo. Thus, digital image analysis is a promising choice for detecting the onset and/or progression of glaucoma. In this work, we present a new framework for detecting glaucomatous changes in the ONH of an eye using the method of proper orthogonal decomposition (POD). A baseline topograph subspace was constructed for each eye to describe the structure of the ONH of the eye at a reference/baseline condition using POD. Any glaucomatous changes in the ONH of the eye present during a follow-up exam were estimated by comparing the follow-up ONH topography with its baseline topograph subspace representation. Image correspondence measures of L1 and L2 norms, correlation, and image Euclidean distance (IMED) were used to quantify the ONH changes. An ONH topographic library built from the Louisiana State University Experimental Glaucoma study was used to evaluate the performance of the proposed method. The area under the receiver operating characteristic curve (AUC) was used to compare the diagnostic performance of the POD-induced parameters with the parameters of the Topographic Change Analysis (TCA) method. The IMED and L2 norm parameters in the POD framework provided the highest AUCs of 0.94 at 10° field of imaging and 0.91 at 15° field of imaging, compared to the TCA parameters with AUCs of 0.86 and 0.88, respectively. The proposed POD framework captures the instrument measurement variability and inherent structure variability and shows promise for improving our ability to detect glaucomatous change over time in glaucoma management. PMID:19369163
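The core of the POD framework, comparing a follow-up topography with its projection onto a baseline subspace, reduces to a few lines of linear algebra. The sketch below uses the L2 residual only and leaves out the study's preprocessing and the other correspondence measures (L1, correlation, IMED); the names and the optional truncation rank are assumptions.

```python
import numpy as np

def pod_change_score(baseline, followup, r=None):
    """Score a follow-up topography against a baseline POD subspace.

    baseline : (n_pixels, n_exams) matrix of baseline topographies
    followup : (n_pixels,) follow-up topography
    Returns the L2 residual outside the baseline subspace (larger = more change).
    """
    U, s, _ = np.linalg.svd(baseline, full_matrices=False)
    if r is not None:
        U = U[:, :r]                   # optionally truncate the POD basis
    proj = U @ (U.T @ followup)        # best baseline-subspace representation
    return np.linalg.norm(followup - proj)
```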
Remote sensing of phytoplankton chlorophyll-a concentration by use of ridge function fields.
Pelletier, Bruno; Frouin, Robert
2006-02-01
A methodology is presented for retrieving phytoplankton chlorophyll-a concentration from space. The data to be inverted, namely, vectors of top-of-atmosphere reflectance in the solar spectrum, are treated as explanatory variables conditioned by angular geometry. This approach leads to a continuum of inverse problems, i.e., a collection of similar inverse problems continuously indexed by the angular variables. The resolution of the continuum of inverse problems is studied from the least-squares viewpoint and yields a solution expressed as a function field over the set of permitted values for the angular variables, i.e., a map defined on that set and valued in a subspace of a function space. The function fields of interest, for reasons of approximation theory, are those valued in nested sequences of subspaces, such as ridge function approximation spaces, the union of which is dense. Ridge function fields constructed on synthetic yet realistic data for case I waters handle well situations of both weakly and strongly absorbing aerosols, and they are robust to noise, showing improvement in accuracy compared with classic inversion techniques. The methodology is applied to actual imagery from the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS); noise in the data is taken into account. The chlorophyll-a concentration obtained with the function field methodology differs from that obtained by use of the standard SeaWiFS algorithm by 15.7% on average. The results empirically validate the underlying hypothesis that the inversion is solved in a least-squares sense. They also show that large levels of noise can be managed if the noise distribution is known or estimated.
NASA Astrophysics Data System (ADS)
Shokravi, H.; Bakhary, N. H.
2017-11-01
Subspace System Identification (SSI) is considered one of the most reliable tools for identification of system parameters. The performance of an SSI scheme is considerably affected by the structure of the associated identification algorithm. The weight matrix is a variable in SSI that is used to reduce the dimensionality of the state-space equation. Generally, one of the weight matrices of Principal Component (PC), Unweighted Principal Component (UPC) or Canonical Variate Analysis (CVA) is used in the structure of an SSI algorithm. An increasing number of studies in the field of structural health monitoring use SSI for damage identification. However, studies that evaluate the performance of the weight matrices, particularly with respect to accuracy, noise resistance, and time complexity, are very limited. In this study, the accuracy, noise-robustness, and time-efficiency of the weight matrices are compared using different qualitative and quantitative metrics. Three evaluation metrics of pole analysis, fit values and elapsed time are used in the assessment process. A numerical model of a mass-spring-dashpot system and operational data are used in this paper. It is observed that the principal components obtained using the PC algorithm are more robust against noise uncertainty and give more stable results for the pole distribution. Furthermore, higher estimation accuracy is achieved using the UPC algorithm. CVA had the worst performance in the pole analysis and time efficiency analysis. The superior performance of the UPC algorithm in elapsed time is attributed to its use of unit weight matrices. The obtained results demonstrate that the process of reducing dimensionality in CVA and PC does not enhance time efficiency, but yields improved modal identification in PC.
New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1998-01-01
Subspace and Lanczos iterations have been developed, well documented, and widely accepted as efficient methods for obtaining the p lowest eigenpair solutions of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancement. Numerical performance, in terms of the accuracy and efficiency of the proposed sparse strategies for the Subspace and Lanczos algorithms, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on IBM RS/6000-590 and Sun SPARC 20 workstations.
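For reference, the basic (dense, unpreconditioned) subspace iteration that such sparse implementations accelerate can be sketched as follows for the generalized problem K x = lambda M x; the block size, iteration count and the use of scipy's dense eigh in the Rayleigh-Ritz step are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(K, M, p, n_iter=50):
    """Basic subspace iteration for the p lowest eigenpairs of K x = lam M x
    (K, M symmetric, K positive definite); sparse factorizations would replace
    the dense solve in practice."""
    n = K.shape[0]
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, p + min(p, 8)))   # oversampled starting block
    for _ in range(n_iter):
        Y = np.linalg.solve(K, M @ X)             # inverse iteration step
        Kr, Mr = Y.T @ K @ Y, Y.T @ M @ Y         # Rayleigh-Ritz projection
        w, Q = eigh(Kr, Mr)                       # small generalized eigenproblem
        X = Y @ Q                                 # Ritz vectors, M-orthonormal
    return w[:p], X[:, :p]
```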
Gravitational instantons, self-duality, and geometric flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourliot, F.; Estes, J.; Petropoulos, P. M.
2010-05-15
We discuss four-dimensional 'spatially homogeneous' gravitational instantons. These are self-dual solutions of the Euclidean vacuum Einstein equations. They are endowed with a product structure R × M_3, leading to a foliation into three-dimensional subspaces evolving in Euclidean time. For a large class of homogeneous subspaces, the dynamics coincides with a geometric flow on the three-dimensional slice, driven by the Ricci tensor plus an so(3) gauge connection. The flowing metric is related to the vielbein of the subspace, while the gauge field is inherited from the anti-self-dual component of the four-dimensional Levi-Civita connection.
On the Convergence of an Implicitly Restarted Arnoldi Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, Richard B.
We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
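In practice this algorithm is available through ARPACK; for instance, scipy's sparse eigensolver wraps the implicitly restarted Arnoldi method, as in the short example below (the matrix and parameters are arbitrary illustrations).

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# 1-D Laplacian; ARPACK's eigs implements the implicitly restarted Arnoldi method
n = 2000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
vals, vecs = eigs(A, k=6, which="LM")   # 6 eigenpairs of largest magnitude
print(np.sort(vals.real))
```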
Portfolios and the market geometry
NASA Astrophysics Data System (ADS)
Eleutério, Samuel; Araújo, Tanya; Vilela Mendes, R.
2014-09-01
A geometric analysis of return time series, performed in the past, implied that most of the systematic information in the market is contained in a space of small dimension. Here we have explored subspaces of this space to find out the relative performance of portfolios formed from companies that have the largest projections in each one of the subspaces. As expected, it was found that the best performance portfolios are associated with some of the small eigenvalue subspaces and not to the dominant dimensions. This is found to occur in a systematic fashion over an extended period (1990-2008).
Reboredo, Fernando A; Kim, Jeongnim
2014-02-21
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Integrated head package cable carrier for a nuclear power plant
Meuschke, Robert E.; Trombola, Daniel M.
1995-01-01
A cabling arrangement is provided for a nuclear reactor located within a containment. Structure inside the containment is characterized by a wall having a near side surrounding the reactor vessel defining a cavity, an operating deck outside the cavity, a sub-space below the deck and on a far side of the wall spaced from the near side, and an operating area above the deck. The arrangement includes a movable frame supporting a plurality of cables extending through the frame, each connectable at a first end to a head package on the reactor vessel and each having a second end located in the sub-space. The frame is movable, with the cables, between a first position during normal operation of the reactor when the cables are connected to the head package, located outside the sub-space proximate the head package, and a second position during refueling when the cables are disconnected from the head package, located in the sub-space. In a preferred embodiment, the frame straddles the top of the wall in a substantially horizontal orientation in the first position, pivots about an end distal from the head package to a substantially vertically oriented intermediate position, and is guided, while remaining about vertically oriented, along a track in the sub-space to the second position.
Low-Rank Tensor Subspace Learning for RGB-D Action Recognition.
Jia, Chengcheng; Fu, Yun
2016-07-09
Since RGB-D action data inherently come with extra depth information compared with RGB data, many recent works represent RGB-D data as a third-order tensor containing spatio-temporal structure and seek a subspace for action recognition. However, there are two main challenges with these methods. First, the dimension of the subspace is usually fixed manually. Second, preserving local information by finding intra-class and inter-class neighbors on a manifold is highly time-consuming. In this paper, we learn a tensor subspace, whose dimension is learned automatically by low-rank learning, for RGB-D action recognition. In particular, the tensor samples are factorized by Tucker Decomposition to obtain three Projection Matrices (PMs), where each PM is obtained via nuclear norm minimization in closed form, and the resulting tensor ranks are used as the tensor subspace dimensions. Additionally, we extract discriminant and local information from a manifold using a graph constraint. This graph preserves local knowledge inherently, which is faster than the previous way of calculating both the intra-class and inter-class neighbors of each sample. We evaluate the proposed method on four widely used RGB-D action datasets, including the MSRDailyActivity3D, MSRActionPairs, MSRActionPairs skeleton and UTKinect-Action3D datasets, and the experimental results show the higher accuracy and efficiency of the proposed method.
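As background, the Tucker factorization that underlies the projection matrices can be computed in its simplest (HOSVD) form with plain numpy; this generic sketch fixes the ranks by hand rather than learning them by nuclear norm minimization as the paper does.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: factor matrices plus core tensor."""
    Us = []
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, k), full_matrices=False)
        Us.append(U[:, :r])            # leading left singular vectors per mode
    G = T
    for k, U in enumerate(Us):         # contract T with U_k^T along every mode
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, k)), 0, k)
    return G, Us

# example: compress a random 20x30x40 tensor to multilinear ranks (5, 5, 5)
T = np.random.default_rng(0).standard_normal((20, 30, 40))
G, Us = hosvd(T, (5, 5, 5))
```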
Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds
NASA Astrophysics Data System (ADS)
Wesemann, Stefan; Marzetta, Thomas L.
2017-12-01
For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm and illustrate its robustness against strong noise by means of simulations.
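The rank-1 power iteration mentioned above is compact enough to sketch directly; the code below alternates matrix-vector products with H and its conjugate transpose to converge to the dominant singular triplet (the iteration count and names are illustrative assumptions).

```python
import numpy as np

def rank1_svd_power(H, n_iter=30):
    """Dominant singular triplet of H by power iteration on H^H H.

    Each sweep costs two matrix-vector products, i.e. quadratic time,
    versus the cubic cost of a full SVD."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(H.shape[1]) + 1j * rng.standard_normal(H.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = H @ v
        u /= np.linalg.norm(u)         # left singular vector estimate
        v = H.conj().T @ u
        sigma = np.linalg.norm(v)      # singular value estimate
        v /= sigma                     # right singular vector estimate
    return u, sigma, v                 # H ~ sigma * outer(u, v.conj())
```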
Control of the constrained planar simple inverted pendulum
NASA Technical Reports Server (NTRS)
Bavarian, B.; Wyman, B. F.; Hemami, H.
1983-01-01
Control of a constrained planar inverted pendulum by eigenstructure assignment is considered. Linear feedback is used to stabilize and decouple the system in such a way that specified subspaces of the state space are invariant for the closed-loop system. The effectiveness of the feedback law is tested by digital computer simulation. Pre-compensation by an inverse plant is used to improve performance.
2015-11-10
of the ensemble method to the estimation of sensitivities was demonstrated in meteorological applications (Ancell and Hakim, 2007; Torn and Hakim, 2008) and ... to predetermined low-dimensional subspaces spanned either by the reduced-order approximations of the model Green's functions (Stammer and Wunsch, 2005; Qui et al., 2007; Hoteit, 2008). In fact, the 4dEnVar technique pursues a similar, but more general approach, parameterizing the search
On the superconvergence of Galerkin methods for hyperbolic IBVP
NASA Technical Reports Server (NTRS)
Gottlieb, David; Gustafsson, Bertil; Olsson, Pelle; Strand, BO
1993-01-01
Finite element Galerkin methods for periodic first order hyperbolic equations exhibit superconvergence on uniform grids at the nodes, i.e., there is an error estimate O(h^(2r)) instead of the expected approximation order O(h^r). It will be shown that no matter how the approximating subspace S^h is chosen, the superconvergence property is lost if there are characteristics leaving the domain. The implications of this result for constructing compact implicit difference schemes are also discussed.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high dimensional genome-wide association (GWA) case-control data of complex diseases, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. Exhaustively searching for an optimal mtry is often required in order to include useful and relevant SNPs and get rid of the vast number of non-informative SNPs. However, this is too time-consuming and not favorable in GWA for high-dimensional data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate the decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. The advantage of this stratified sampling procedure is that it ensures each subspace contains enough useful SNPs, while avoiding the very high computational cost of exhaustively searching for an optimal mtry and maintaining the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate better random forests with higher accuracy and lower error bounds than Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
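A minimal sketch of the stratified feature subspace selection might look as follows, assuming a precomputed per-SNP informativeness score and equal-width bins; the scores, group counts and helper name are illustrative, and the trees themselves would be grown by any standard CART routine on the returned columns.

```python
import numpy as np

def stratified_subspace(informativeness, n_groups, n_per_group, rng):
    """Pick a feature subspace for one tree by equal-width stratification
    of a per-SNP informativeness score (e.g. a chi-square statistic)."""
    lo, hi = informativeness.min(), informativeness.max()
    edges = np.linspace(lo, hi, n_groups + 1)
    groups = np.digitize(informativeness, edges[1:-1])   # group index 0..n_groups-1
    chosen = []
    for g in range(n_groups):
        idx = np.flatnonzero(groups == g)
        if idx.size:                                     # skip empty strata
            k = min(n_per_group, idx.size)
            chosen.append(rng.choice(idx, size=k, replace=False))
    return np.concatenate(chosen)       # SNP indices forming the subspace

# each decision tree in the forest is then grown on its own stratified subspace
rng = np.random.default_rng(0)
scores = rng.gamma(2.0, size=10000)     # stand-in informativeness scores
subspace = stratified_subspace(scores, n_groups=10, n_per_group=5, rng=rng)
```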
Experimental state control by fast non-Abelian holonomic gates with a superconducting qutrit
NASA Astrophysics Data System (ADS)
Danilin, S.; Vepsäläinen, A.; Paraoanu, G. S.
2018-05-01
Quantum state manipulation with gates based on geometric phases acquired during cyclic operations promises inherent fault-tolerance and resilience to local fluctuations in the control parameters. Here we create a general non-Abelian and non-adiabatic holonomic gate acting in the (|0⟩, |2⟩) subspace of a three-level (qutrit) transmon device fabricated in a fully coplanar design. Experimentally, this is realized by simultaneously coupling the first two transitions by microwave pulses with amplitudes and phases defined such that the condition of parallel transport is fulfilled. We demonstrate the creation of arbitrary superpositions in this subspace by changing the amplitudes of the pulses and the relative phase between them. We use two-photon pulses acting in the holonomic subspace to reveal the coherence of the state created by the geometric gate pulses and to prepare different superposition states. We also test the action of holonomic NOT and Hadamard gates on superpositions in the (|0⟩, |2⟩) subspace.
Application of higher order SVD to vibration-based system identification and damage detection
NASA Astrophysics Data System (ADS)
Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang
2012-04-01
Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace identification and stochastic subspace identification methods (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study, three different algorithms for signal processing and system identification are considered: SSA, SSI-COV and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithms are used to process the shaking table test data of a 6-story steel frame. Features contained in the vibration data are extracted by the proposed methods, and damage detection is then investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
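Of the three algorithms, SSA is the most self-contained to illustrate: embed the series in a Hankel trajectory matrix, take the SVD, group modes, and diagonal-average back to time series. The sketch below is a generic SSA implementation, not the paper's code; the window length and grouping are user choices.

```python
import numpy as np

def ssa_decompose(x, L, groups):
    """Singular spectrum analysis: embed, SVD, group, and diagonal-average.

    x : 1-D series, L : window length, groups : list of index lists over SVD modes.
    Returns one reconstructed component per group."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory (Hankel) matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for g in groups:
        Xg = (U[:, g] * s[g]) @ Vt[g]                     # rank-|g| piece of X
        y = np.zeros(N)                                   # diagonal averaging back
        cnt = np.zeros(N)                                 # to a time series
        for j in range(K):
            y[j:j + L] += Xg[:, j]
            cnt[j:j + L] += 1
        comps.append(y / cnt)
    return comps
```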
NASA Astrophysics Data System (ADS)
Macieszczak, Katarzyna; Zhou, YanLi; Hofferberth, Sebastian; Garrahan, Juan P.; Li, Weibin; Lesanovsky, Igor
2017-10-01
We investigate the dynamics of a generic interacting many-body system under conditions of electromagnetically induced transparency (EIT). This problem is of current relevance due to its connection to nonlinear optical media realized by Rydberg atoms. In an interacting system the structure of the dynamics and the approach to the stationary state becomes far more complex than in the case of conventional EIT. In particular, we discuss the emergence of a metastable decoherence-free subspace, whose dimension for a single Rydberg excitation grows linearly in the number of atoms. On approach to stationarity this leads to a slow dynamics, which renders the typical assumption of fast relaxation invalid. We derive analytically the effective nonequilibrium dynamics in the decoherence-free subspace, which features coherent and dissipative two-body interactions. We discuss the use of this scenario for the preparation of collective entangled dark states and the realization of general unitary dynamics within the spin-wave subspace.
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrödinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems, namely solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-06-01
The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
Random ensemble learning for EEG classification.
Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid
2018-01-01
Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed, wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection, and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, a multilayer perceptron (MLP) neural network and an extended k-nearest neighbors (k-NN) method, called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG data set of eight patients with temporal and extratemporal epilepsy was implemented in a distributed computing framework as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. Application of the solution to cases under investigation with ECoG has also been carried out to demonstrate its utility.
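A minimal sketch of the random subspace ensemble step, assuming binary labels and scikit-learn's SVC; the I-ICA feature extraction and the MLP/ENN members of the full system are omitted:

```python
import numpy as np
from sklearn.svm import SVC

def train_random_subspaces(X, y, n_models=25, k=None, rng=None):
    """Train one SVM per random feature subspace (the voting scheme above)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    k = k or max(1, d // 3)                     # subspace size (assumed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(d, size=k, replace=False)
        models.append((idx, SVC(kernel="rbf").fit(X[:, idx], y)))
    return models

def majority_vote(models, X):
    """Combine per-subspace SVM decisions by majority voting (0/1 labels)."""
    votes = np.stack([clf.predict(X[:, idx]) for idx, clf in models])
    return (votes.mean(axis=0) > 0.5).astype(int)
```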
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertain parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate O(n^{-1/2}), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^{-1}), hence much faster than MC. The second algorithm, which does not require an exact SDR subspace, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
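The control-variate idea behind the second algorithm can be sketched in a few lines; here g stands for the reduced model built on the SDR subspace, and its mean is assumed to be cheap to compute from the response surface:

```python
import numpy as np

def cv_estimate(f_vals, g_vals, g_mean):
    """Monte Carlo estimate of E[f] with a reduced model g as control variate.

    f_vals, g_vals: evaluations of the full and reduced models at the same
    samples; g_mean: E[g], assumed cheap to obtain from the response surface.
    """
    beta = np.cov(f_vals, g_vals)[0, 1] / np.var(g_vals, ddof=1)
    return f_vals.mean() - beta * (g_vals.mean() - g_mean)
```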
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
High-resolution dynamic 31P-MRSI using a low-rank tensor model.
Ma, Chao; Clifford, Bryan; Liu, Yuchi; Gu, Yuning; Lam, Fan; Yu, Xin; Liang, Zhi-Pei
2017-08-01
To develop a rapid 31P-MRSI method with high spatiospectral resolution using low-rank tensor-based data acquisition and image reconstruction. The multidimensional image function of 31P-MRSI is represented by a low-rank tensor to capture the spatial-spectral-temporal correlations of the data. A hybrid data acquisition scheme is used for sparse sampling, which consists of a set of "training" data with limited k-space coverage to capture the subspace structure of the image function, and a set of sparsely sampled "imaging" data for high-resolution image reconstruction. An explicit subspace pursuit approach is used for image reconstruction, which estimates the bases of the subspace from the "training" data and then reconstructs a high-resolution image function from the "imaging" data. We have validated the feasibility of the proposed method using phantom and in vivo studies on a 3T whole-body scanner and a 9.4T preclinical scanner. The proposed method produced high-resolution static 31P-MRSI images (i.e., 6.9 × 6.9 × 10 mm^3 nominal resolution in a 15-min acquisition at 3T) and high-resolution, high-frame-rate dynamic 31P-MRSI images (i.e., 1.5 × 1.5 × 1.6 mm^3 nominal resolution, 30 s/frame at 9.4T). Dynamic spatiospectral variations of 31P-MRSI signals can be efficiently represented by a low-rank tensor. Exploiting this mathematical structure for data acquisition and image reconstruction can lead to fast 31P-MRSI with high resolution, frame rate, and SNR. Magn Reson Med 78:419-428, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
On Convergence of Extended Dynamic Mode Decomposition to the Koopman Operator
NASA Astrophysics Data System (ADS)
Korda, Milan; Mezić, Igor
2018-04-01
Extended dynamic mode decomposition (EDMD) (Williams et al. in J Nonlinear Sci 25(6):1307-1346, 2015) is an algorithm that approximates the action of the Koopman operator on an N-dimensional subspace of the space of observables by sampling at M points in the state space. Assuming that the samples are drawn either independently or ergodically from some measure μ, it was shown in Klus et al. (J Comput Dyn 3(1):51-79, 2016) that, in the limit as M → ∞, the EDMD operator K_{N,M} converges to K_N, where K_N is the L_2(μ)-orthogonal projection of the action of the Koopman operator on the finite-dimensional subspace of observables. We show that, as N → ∞, the operator K_N converges in the strong operator topology to the Koopman operator. This in particular implies convergence of the predictions of future values of a given observable over any finite time horizon, a fact important for practical applications such as forecasting, estimation and control. In addition, we show that accumulation points of the spectra of K_N correspond to the eigenvalues of the Koopman operator, with the associated eigenfunctions converging weakly to an eigenfunction of the Koopman operator, provided that the weak limit of the eigenfunctions is nonzero. As a by-product, we propose an analytic version of the EDMD algorithm which, under some assumptions, allows one to construct K_N directly, without the use of sampling. Finally, under additional assumptions, we analyze convergence of K_{N,N} (i.e., M = N), proving convergence, along a subsequence, to weak eigenfunctions (or eigendistributions) related to the eigenmeasures of the Perron-Frobenius operator. No assumptions on the observables belonging to a finite-dimensional invariant subspace of the Koopman operator are required throughout.
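A minimal sketch of the sampled EDMD operator K_{N,M}; the dictionary psi is user-supplied, and the least-squares solution realizes the projection described above:

```python
import numpy as np

def edmd_matrix(X, Y, psi):
    """Estimate the EDMD operator K_{N,M} from M snapshot pairs (x_m, y_m).

    psi(x) returns the N dictionary observables at state x; the least-squares
    solution of Psi_X K = Psi_Y is the sampled approximation to K_N.
    """
    PsiX = np.array([psi(x) for x in X])        # M x N
    PsiY = np.array([psi(y) for y in Y])        # M x N
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K                                    # N x N
```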
On spectral synthesis on zero-dimensional Abelian groups
NASA Astrophysics Data System (ADS)
Platonov, S. S.
2013-09-01
Let G be a zero-dimensional locally compact Abelian group all of whose elements are compact, and let C(G) be the space of all complex-valued continuous functions on G. A closed linear subspace H ⊆ C(G) is said to be an invariant subspace if it is invariant with respect to the translations τ_y : f(x) ↦ f(x+y), y ∈ G. In the paper, it is proved that any invariant subspace H admits spectral synthesis, that is, H coincides with the closed linear span of the characters of G belonging to H. Bibliography: 25 titles.
Essential uncontrollability of discrete linear, time-invariant, dynamical systems
NASA Technical Reports Server (NTRS)
Cliff, E. M.
1975-01-01
The concept of a 'best approximating m-dimensional subspace' for a given set of vectors in an n-dimensional space is introduced. Such a subspace is easily described in terms of the eigenvectors of an associated Gram matrix. This technique is used to approximate the achievable set for a discrete linear time-invariant dynamical system. This approximation characterizes the part of the state space that may be reached using modest levels of control. If the achievable set can be closely approximated by a proper subspace of the whole space, then the system is 'essentially uncontrollable'. The notion finds application in studies of failure-tolerant systems, and in decoupling.
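One standard reading of the construction, as a sketch (taking the leading eigenvectors of V V^T for a matrix V whose columns are the given vectors; the exact Gram-matrix convention in the paper may differ):

```python
import numpy as np

def best_approximating_subspace(V, m):
    """Best m-dimensional subspace for the columns of V, spanned by the
    eigenvectors of the associated Gram-type matrix with largest eigenvalues."""
    w, Q = np.linalg.eigh(V @ V.T)              # symmetric, eigenvalues ascending
    return Q[:, -m:]                            # orthonormal basis of the subspace
```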
Reduced order modeling and active flow control of an inlet duct
NASA Astrophysics Data System (ADS)
Ge, Xiaoqing
Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust ducts, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization-based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable for modeling laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equation onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length to exit diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and the POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between the CFD response and the identified model response onto a set of POD basis vectors. This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimate for the feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of the active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator and control design can also be applied to wind tunnel experiments.
A Gaussian-based rank approximation for subspace clustering
NASA Astrophysics Data System (ADS)
Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping
2018-04-01
Low-rank representation (LRR) has been shown to be successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear norm-based minimization problems. Beyond the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation because it simply adds all singular values of a matrix together, so that large singular values may dominate the weight. This results in a far-from-satisfactory rank approximation and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has desirable properties that make it a better rank approximation than the nuclear norm. A low-rank model is then proposed based on the new rank approximation, with application to motion segmentation. Experimental results have shown significant improvements and verified the effectiveness of our method.
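The exact functional form used in the paper is not reproduced here; the sketch below shows one plausible Gaussian-shaped rank surrogate with the saturation property the abstract describes (the precise form and the scale gamma are assumptions):

```python
import numpy as np

def gaussian_rank(A, gamma=1.0):
    """A Gaussian-shaped rank surrogate (hypothetical form for illustration).

    Each singular value contributes 1 - exp(-s^2 / (2 gamma^2)), so large
    singular values saturate at 1 instead of dominating the sum the way
    they do in the nuclear norm.
    """
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(1.0 - np.exp(-s**2 / (2.0 * gamma**2)))
```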
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times during the simulation, it can be very inefficient. This work proposes a new adaptive strategy that ensures low computational cost and small error in dealing with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).
A Problem-Centered Approach to Canonical Matrix Forms
ERIC Educational Resources Information Center
Sylvestre, Jeremy
2014-01-01
This article outlines a problem-centered approach to the topic of canonical matrix forms in a second linear algebra course. In this approach, abstract theory, including such topics as eigenvalues, generalized eigenspaces, invariant subspaces, independent subspaces, nilpotency, and cyclic spaces, is developed in response to the patterns discovered…
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
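As a reference point, the Wiener filter baseline against which the models above are compared can be sketched as regularized least squares on lag-embedded spike counts (the lag count and ridge term are illustrative choices, not the paper's settings):

```python
import numpy as np

def wiener_decoder(R, Z, n_lags=10, reg=1e-3):
    """Baseline Wiener filter for BMI decoding: regularized least squares from
    lag-embedded spike counts R (T x neurons) to kinematics Z (T x dims)."""
    T, n = R.shape
    # design matrix of current and past spike counts, plus a bias column
    X = np.hstack([np.vstack([np.zeros((lag, n)), R[:T - lag]])
                   for lag in range(n_lags)] + [np.ones((T, 1))])
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Z)
    return W, X @ W                             # weights, in-sample prediction
```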
Machine learning algorithms for the creation of clinical healthcare enterprise systems
NASA Astrophysics Data System (ADS)
Mandal, Indrajit
2017-10-01
Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above a 95% confidence interval). The study then extends to an experimental analysis of the clinical recommender system with respect to noisy data environments. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical data sets to reinforce the research findings.
Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.
Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun
2018-05-08
Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array into the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and fuse the subspace data from each measurement position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.
Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang
2018-03-27
Parameter estimation for sequential vehicle movement events faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for sequential vehicle movement events based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for their high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness
NASA Technical Reports Server (NTRS)
Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.
2009-01-01
Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict the future motions of those people for better path planning. Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Ostrove, Corey I.
2017-01-01
This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
BI-sparsity pursuit for robust subspace recovery
Bian, Xiao; Krim, Hamid
2015-09-01
The success of sparse models in computer vision and machine learning in many real-world applications may be attributed, in large part, to the fact that many high-dimensional data are distributed in a union of low-dimensional subspaces. The underlying structure may, however, be adversely affected by sparse errors, thus inducing additional complexity in recovering it. In this paper, we propose a bi-sparse model as a framework to investigate and analyze this problem, and provide, as a result, a novel algorithm to recover the union of subspaces in the presence of sparse corruptions. We additionally demonstrate the effectiveness of our method by experiments on real-world vision data.
Active Subspace Methods for Data-Intensive Inverse Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi
2017-04-27
The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
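A minimal sketch of the active subspace construction used in this line of work (leading eigenvectors of the sampled gradient covariance), assuming gradient samples of the quantity of interest are available:

```python
import numpy as np

def active_subspace(grads, k):
    """Leading k directions of the gradient covariance C = E[grad grad^T],
    estimated by Monte Carlo from sampled gradients of the QoI."""
    C = grads.T @ grads / grads.shape[0]
    w, V = np.linalg.eigh(C)                    # eigenvalues ascending
    return V[:, ::-1][:, :k], w[::-1]           # active directions, spectrum
```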
Basis adaptation in homogeneous chaos spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Ghanem, Roger
2014-02-01
We present a new method for the characterization of subspaces associated with low-dimensional quantities of interest (QoI). The probability density function of these QoI is found to be concentrated around one-dimensional subspaces, for which we develop projection operators. Our approach builds on the properties of Gaussian Hilbert spaces and associated tensor product spaces.
A real negative selection algorithm with evolutionary preference for anomaly detection
NASA Astrophysics Data System (ADS)
Yang, Tao; Chen, Wen; Li, Tao
2017-04-01
Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm termination threshold and generate detectors randomly. With increasing dimensions, the data samples may reside in a low-dimensional subspace, so that the traditional detectors cannot effectively distinguish these samples. Furthermore, in high-dimensional feature space, c0 cannot exactly reflect the coverage rate of the detector set over the nonself space, and this can lead the algorithm to terminate unexpectedly when the number of detectors is insufficient. These shortcomings make traditional RNSAs perform poorly in high-dimensional feature space. Based upon the "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", "low-dimensional target subspace" and "known nonself feature" as evolutionary preferences to guide the generation of detectors, thus ensuring that the detectors cover the nonself space more effectively. In addition, RNSAP uses redundancy instead of c0 as the termination threshold; in this way, RNSAP can generate adequate detectors with a proper convergence rate. The theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP can achieve a higher detection rate with fewer detectors and lower computing cost.
Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.
Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai
2016-02-01
The study of biology and medicine in noisy environments is an evolving direction in biological data analysis. Among these studies, the analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging direction in personalized medicine. Due to its periodic character, the ECG signal can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance. Then, by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing transformation matrices, these time points form a row echelon-like system. After that, the sources at each layer can be solved for explicitly by corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method has better performance on the sparse ECG signal recovery problem.
Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model
NASA Astrophysics Data System (ADS)
Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.
2018-03-01
Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify the directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a polydomain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y
2014-05-01
This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for a boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of the boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operation region and develop the structure of the fuzzy model. Then, by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach.
Caicedo, Alexander; Varon, Carolina; Hunyadi, Borbala; Papademetriou, Maria; Tachtsidis, Ilias; Van Huffel, Sabine
2016-01-01
Clinical data comprise a large number of synchronously collected biomedical signals that are measured at different locations. Deciphering the interrelationships of these signals can yield important information about their dependence, providing useful clinical diagnostic data. For instance, by computing the coupling between near-infrared spectroscopy (NIRS) signals and systemic variables, the status of the hemodynamic regulation mechanisms can be assessed. In this paper we introduce an algorithm for the decomposition of NIRS signals into additive components. The algorithm, SIgnal DEcomposition based on Oblique Subspace Projections (SIDE-ObSP), assumes that the measured NIRS signal is a linear combination of the systemic measurements, following the linear regression model y = Ax + ϵ. SIDE-ObSP decomposes the output such that each component in the decomposition represents the sole linear influence of one corresponding regressor variable. This decomposition scheme aims at providing a better understanding of the relation between NIRS and systemic variables, and at providing a framework for the clinical interpretation of regression algorithms, thereby facilitating their introduction into clinical practice. SIDE-ObSP combines oblique subspace projections (ObSP) with the structure of a mean average system in order to define adequate signal subspaces. To guarantee smoothness in the estimated regression parameters, as observed in normal physiological processes, we impose Tikhonov regularization using a matrix differential operator. We evaluate the performance of SIDE-ObSP by using a synthetic dataset, and present two case studies in the field of cerebral hemodynamics monitoring using NIRS. In addition, we compare the performance of this method with other system identification techniques. The first case study used data from 20 neonates during the first 3 days of life; here, SIDE-ObSP decoupled the influence of changes in arterial oxygen saturation from the NIRS measurements, facilitating the use of NIRS as a surrogate measure for cerebral blood flow (CBF). The second case study used data from a 3-year-old infant under extracorporeal membrane oxygenation (ECMO); here, SIDE-ObSP decomposed cerebral/peripheral tissue oxygenation as a sum of the partial contributions from different systemic variables, facilitating the comparison between the effects of each systemic variable on the cerebral/peripheral hemodynamics.
Colliandre, Lionel; Le Guilloux, Vincent; Bourg, Stephane; Morin-Allory, Luc
2012-02-27
High Throughput Screening (HTS) is a standard technique widely used to find hit compounds in drug discovery projects. The high costs associated with such experiments have highlighted the need to carefully design screening libraries in order to avoid wasting resources. Molecular diversity is an established concept that has been used to this end for many years. In this article, a new approach to quantifying the molecular diversity of screening libraries is presented. The approach is based on the Delimited Reference Chemical Subspace (DRCS) methodology, a new method that can be used to delimit the densest subspace spanned by a reference library in a reduced 2D continuous space. A total of 22 diversity indices were implemented or adapted to this methodology, which is used here to remove outliers and obtain a relevant cell-based partition of the subspace. The behavior of these indices was assessed and compared in various extreme situations and with respect to a set of theoretical rules that a diversity function should satisfy when libraries of different sizes have to be compared. Some gold standard indices are found to be inappropriate in such a context, while none of the tested indices behave perfectly in all cases. Five DRCS-based indices accounting for different aspects of diversity were finally selected, and a simple framework is proposed to use them effectively. Various libraries have been profiled with respect to more specific subspaces, which further illustrates the interest of the method.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes of correlations in the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests"; on this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (or why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
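For reference, one update of the unmodified SLA (Oja's subspace rule) that the paper modifies; the TOHM two-time-scale rotation toward individual eigenvectors is the paper's contribution and is not shown:

```python
import numpy as np

def sla_step(W, x, eta=0.01):
    """One update of the Subspace Learning Algorithm (Oja's subspace rule).

    The columns of W (d x k) converge to an orthonormal basis of the
    k-dimensional principal subspace, though not to individual eigenvectors.
    """
    y = W.T @ x                                  # k neuron outputs
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))
    return W
```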
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
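A SINDy-style sketch of the ℓ1-regularized regression step for selecting observable functions (the library, the alpha value, and the use of scikit-learn's Lasso are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_observables(X, dX, library, alpha=1e-3):
    """l1-regularized regression to pick out active terms of the dynamics.

    library(x) returns candidate nonlinear observables at state x; nonzero
    coefficients flag functions worth keeping in the Koopman observable subspace.
    """
    Theta = np.array([library(x) for x in X])      # samples x candidates
    coefs = [Lasso(alpha=alpha, fit_intercept=False, max_iter=50000)
             .fit(Theta, dX[:, j]).coef_ for j in range(dX.shape[1])]
    return np.array(coefs)                          # one sparse row per state
```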
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; Guo, Ting; Luo, Xue; Qu, Jinxiu; Zhang, Chenxuan; Cheng, Wei; Li, Bing
2014-06-01
Viscoelastic sandwich structures (VSS) are widely used in mechanical equipment; their state assessment is necessary to detect structural states and to keep equipment running with high reliability. This paper proposes a novel manifold-manifold distance-based assessment (M2DBA) method for assessing the looseness state in VSSs. In the M2DBA method, a manifold-manifold distance is viewed as a health index. To design the index, response signals from the structure are firstly acquired by condition monitoring technology and a Hankel matrix is constructed by using the response signals to describe state patterns of the VSS. Thereafter, a subspace analysis method, that is, principal component analysis (PCA), is performed to extract the condition subspace hidden in the Hankel matrix. From the subspace, pattern changes in dynamic structural properties are characterized. Further, a Grassmann manifold (GM) is formed by organizing a set of subspaces. The manifold is mapped to a reproducing kernel Hilbert space (RKHS), where support vector data description (SVDD) is used to model the manifold as a hypersphere. Finally, a health index is defined as the cosine of the angle between the hypersphere centers corresponding to the structural baseline state and the looseness state. The defined health index contains similarity information existing in the two structural states, so structural looseness states can be effectively identified. Moreover, the health index is derived by analysis of the global properties of subspace sets, which is different from traditional subspace analysis methods. The effectiveness of the health index for state assessment is validated by test data collected from a VSS subjected to different degrees of looseness. The results show that the health index is a very effective metric for detecting the occurrence and extension of structural looseness. Comparison results indicate that the defined index outperforms some existing state-of-the-art ones.
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
Semigroup theory and numerical approximation for equations in linear viscoelasticity
NASA Technical Reports Server (NTRS)
Fabiano, R. H.; Ito, K.
1990-01-01
A class of abstract integrodifferential equations used to model linear viscoelastic beams is investigated analytically, applying a Hilbert-space approach. The basic equation is rewritten as a Cauchy problem, and its well-posedness is demonstrated. Finite-dimensional subspaces of the state space and an estimate of the state operator are obtained; approximation schemes for the equations are constructed; and the convergence is proved using the Trotter-Kato theorem of linear semigroup theory. The actual convergence behavior of different approximations is demonstrated in numerical computations, and the results are presented in tables.
An efficient linear-scaling CCSD(T) method based on local natural orbitals.
Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály
2013-09-07
An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach, a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed, including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
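A sketch of a single damped step solved in a Krylov subspace with SciPy's LSQR; unlike the paper's method, this rebuilds the subspace for each damping parameter rather than recycling it:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_step(J, r, lam):
    """One Levenberg-Marquardt step computed in a Krylov subspace.

    LSQR with damping solves min ||J d + r||^2 + lam ||d||^2 iteratively,
    without ever forming J^T J explicitly.
    """
    return lsqr(J, -r, damp=np.sqrt(lam))[0]
```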
Geometric MCMC for infinite-dimensional inverse problems
NASA Astrophysics Data System (ADS)
Beskos, Alexandros; Girolami, Mark; Lan, Shiwei; Farrell, Patrick E.; Stuart, Andrew M.
2017-04-01
Bayesian inverse problems often involve sampling posterior distributions on infinite-dimensional function spaces. Traditional Markov chain Monte Carlo (MCMC) algorithms are characterized by deteriorating mixing times upon mesh-refinement, when the finite-dimensional approximations become more accurate. Such methods are typically forced to reduce step-sizes as the discretization gets finer, and thus are expensive as a function of dimension. Recently, a new class of MCMC methods with mesh-independent convergence times has emerged. However, few of them take into account the geometry of the posterior informed by the data. At the same time, recently developed geometric MCMC algorithms have been found to be powerful in exploring complicated distributions that deviate significantly from elliptic Gaussian laws, but are in general computationally intractable for models defined in infinite dimensions. In this work, we combine geometric methods on a finite-dimensional subspace with mesh-independent infinite-dimensional approaches. Our objective is to speed up MCMC mixing times, without significantly increasing the computational cost per step (for instance, in comparison with the vanilla preconditioned Crank-Nicolson (pCN) method). This is achieved by using ideas from geometric MCMC to probe the complex structure of an intrinsic finite-dimensional subspace where most data information concentrates, while retaining robust mixing times as the dimension grows by using pCN-like methods in the complementary subspace. The resulting algorithms are demonstrated in the context of three challenging inverse problems arising in subsurface flow, heat conduction and incompressible flow control. The algorithms exhibit up to two orders of magnitude improvement in sampling efficiency when compared with the pCN method.
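For concreteness, here is a bare-bones version of the preconditioned Crank-Nicolson sampler that the paper uses as its dimension-robust baseline; `phi` (negative log-likelihood) and `sample_prior` (a draw from the Gaussian prior N(0, C)) are placeholder callables.

```python
import numpy as np

def pcn_sampler(phi, sample_prior, n_steps, beta=0.2, x0=None):
    """Preconditioned Crank-Nicolson MCMC for a posterior propto exp(-phi(x)) N(0, C).
    The proposal preserves the prior, so the acceptance ratio involves phi alone,
    which is what makes the mixing rate robust under mesh refinement."""
    x = sample_prior() if x0 is None else x0
    phix = phi(x)
    chain = []
    for _ in range(n_steps):
        xi = sample_prior()
        xp = np.sqrt(1.0 - beta**2) * x + beta * xi      # pCN proposal
        phixp = phi(xp)
        if np.random.rand() < np.exp(min(0.0, phix - phixp)):
            x, phix = xp, phixp                          # accept
        chain.append(x.copy())
    return np.array(chain)
```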
Application of Bred Vectors To Data Assimilation
NASA Astrophysics Data System (ADS)
Corazza, M.; Kalnay, E.; Patil, Dj
We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al, 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50 × k matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of v(i). We define the bred vector dimension as BVDIM = {Sum[s(i)]}^2 / Sum[s(i)^2]. For example, if 4 out of the 5 vectors lie along v(1) and one lies along v(2), the BV-dimension would be BVDIM[sqrt(4), 1, 0, 0, 0] = (2+1)^2/(4+1) = 1.8, less than 2 because one direction is more dominant than the other in representing the original data. The results (Patil et al, 2001) show that there are large regions where the bred vectors span a subspace of substantially lower dimension than that of the full space. These low dimensionality regions are dominant in the baroclinic extratropics, typically have a lifetime of 3-7 days, have a well-defined horizontal and vertical structure that spans most of the atmosphere, and tend to move eastward. New results with a large number of ensemble members confirm these results and indicate that the low dimensionality regions are quite robust, and depend only on the verification time (i.e., the underlying flow). Corazza et al (2001) have performed experiments with a data assimilation system based on a quasi-geostrophic model and simulated observations (Morss, 1999; Hamill et al, 2000). A 3D-variational data assimilation scheme for a quasi-geostrophic channel model is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. It is found that after 3-5 days the bred vectors develop well organized structures which are very similar for the two different norms considered in this paper (potential vorticity norm and streamfunction norm).
The results show that the bred vectors do indeed represent well the characteristics of the data assimilation forecast errors, and that the subspace of bred vectors contains most of the forecast error, except in areas where the forecast errors are small. For example, the angle between the 6hr forecast error and the subspace spanned by 10 bred vectors is less than 10° over 90% of the domain, indicating a pattern correlation of more than 98.5% between the forecast error and its projection onto the bred vector subspace. The presence of low-dimensional regions in the perturbations of the basic flow has important implications for data assimilation. At any given time, there is a difference between the true atmospheric state and the model forecast. Assuming that model errors are not the dominant source of errors, in a region of low BV-dimensionality the difference between the true state and the forecast should lie substantially in the low dimensional unstable subspace of the few bred vectors that contribute most strongly to the low BV-dimension. This information should yield a substantial improvement in the forecast: the data assimilation algorithm should correct the model state by moving it closer to the observations along the unstable subspace, since this is where the true state most likely lies. Preliminary experiments have been conducted with the quasi-geostrophic data assimilation system testing whether it is possible to add "errors of the day" based on bred vectors to the standard (constant) 3D-Var background error covariance in order to capture these important errors. The results are extremely encouraging, indicating a significant reduction (about 40%) in the analysis errors at a very low computational cost. References: Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. A. Yorke, 2001: Use of the breeding technique to estimate the structure of the analysis "errors of the day". Submitted to Nonlinear Processes in Geophysics. Hamill, T. M., Snyder, C., and Morss, R. E., 2000: A Comparison of Probabilistic Forecasts from Bred, Singular-Vector and Perturbed Observation Ensembles, Mon. Wea. Rev., 128, 1835-1851. Kalnay, E., and Z. Toth, 1994: Removing growing errors in the analysis cycle. Preprints of the Tenth Conference on Numerical Weather Prediction, Amer. Meteor. Soc., 1994, 212-215. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. PhD thesis, Massachusetts Institute of Technology, 225pp. Patil, D. J. S., B. R. Hunt, E. Kalnay, J. A. Yorke, and E. Ott, 2001: Local Low Dimensionality of Atmospheric Dynamics. Phys. Rev. Lett., 86, 5878.
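The BV-dimension statistic defined above is compact enough to state in code; the sketch below (ours, not the authors' code) evaluates it with an SVD and reproduces the worked [sqrt(4), 1, 0, 0, 0] -> 1.8 example.

```python
import numpy as np

def bv_dimension(local_vectors):
    """BV-dimension of k local bred vectors stacked as the columns of a 50 x k
    matrix M (columns normalized to unit length, as in the text):
    BVDIM = (sum_i s_i)^2 / sum_i s_i^2, with s_i the singular values of M."""
    M = local_vectors / np.linalg.norm(local_vectors, axis=0)
    s = np.linalg.svd(M, compute_uv=False)
    return s.sum()**2 / (s**2).sum()

# Worked example from the text: singular values [sqrt(4), 1, 0, 0, 0] give 1.8.
s = np.array([2.0, 1.0, 0.0, 0.0, 0.0])
print(s.sum()**2 / (s**2).sum())  # 1.8
```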
Density scaling for multiplets
NASA Astrophysics Data System (ADS)
Nagy, Á.
2011-02-01
Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.
Primary decomposition of zero-dimensional ideals over finite fields
NASA Astrophysics Data System (ADS)
Gao, Shuhong; Wan, Daqing; Wang, Mingsheng
2009-03-01
A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.
Switching LPV Control for High Performance Tactical Aircraft
NASA Technical Reports Server (NTRS)
Lu, Bei; Wu, Fen; Kim, SungWan
2004-01-01
This paper examines a switching Linear Parameter-Varying (LPV) control approach to determine if it is practical to use for flight control designs within a wide angle of attack region. The approach is based on multiple parameter-dependent Lyapunov functions. The full parameter space is partitioned into overlapping subspaces and a family of LPV controllers is designed, each suitable for a specific parameter subspace. The hysteresis switching logic is used to accomplish the transition among different parameter subspaces. The proposed switching LPV control scheme is applied to an F-16 aircraft model with different actuator dynamics in low and high angle of attack regions. The nonlinear simulation results show that the aircraft performs well when switching among different angle of attack regions.
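Stripped of the controller synthesis itself, the hysteresis switching logic can be conveyed in a few lines; the region bounds and the angle-of-attack example below are hypothetical.

```python
def hysteresis_switch(theta, current, regions):
    """Keep the current LPV controller while the scheduling parameter theta stays
    inside its (overlapping) parameter region; switch only upon leaving it.
    The overlap is what prevents chattering at a region boundary."""
    lo, hi = regions[current]
    if lo <= theta <= hi:
        return current
    # First region containing theta; stay put if none does (saturation).
    return next((i for i, (a, b) in enumerate(regions) if a <= theta <= b), current)

# Example: two angle-of-attack regions (degrees) overlapping on [15, 25].
regions = [(0.0, 25.0), (15.0, 60.0)]
print(hysteresis_switch(20.0, 0, regions))  # 0: still inside region 0, no switch
print(hysteresis_switch(30.0, 0, regions))  # 1: left region 0, switch to region 1
```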
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
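One common innovation-based scheme consistent with this description (the report's exact sequential equations may differ): for a consistent filter the innovations v satisfy E[v v^T] = H P⁻ H^T + R, so averaging v v^T over a window yields a correction to the assumed measurement noise. A sketch with hypothetical names:

```python
import numpy as np

def estimate_R_from_residuals(innovations, H, P_pred):
    """Innovation-based measurement-noise estimate: a mismodeled R makes the
    residual sequence non-white, and the sample covariance of the innovations
    minus the predicted part H P^- H^T estimates the true R."""
    C_v = np.mean([np.outer(v, v) for v in innovations], axis=0)
    return C_v - H @ P_pred @ H.T
```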
NASA Astrophysics Data System (ADS)
Li, Dan; Xu, Feng; Jiang, Jing Fei; Zhang, Jian Qiu
2017-12-01
In this paper, a biquaternion beamspace, constructed by projecting the original data of an electromagnetic vector-sensor array into a subspace of a lower dimension via a quaternion transformation matrix, is first proposed. To estimate the direction and polarization angles of sources, biquaternion beamspace multiple signal classification (BB-MUSIC) estimators are then formulated. The analytical results show that the biquaternion beamspaces offer us some additional degrees of freedom to simultaneously achieve three goals. One is to save the memory spaces for storing the data covariance matrix and reduce the computation efforts of the eigen-decomposition. Another is to decouple the estimations of the sources' polarization parameters from those of their direction angles. The other is to blindly whiten the coherent noise of the six constituent antennas in each vector-sensor. It is also shown that the existing biquaternion multiple signal classification (BQ-MUSIC) estimator is a specific case of our BB-MUSIC ones. The simulation results verify the correctness and effectiveness of the analytical ones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jeongnim; Reboredo, Fernando A.
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with Quantum Monte Carlo.
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough, such that the problem is not ill-posed anymore. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
NASA Astrophysics Data System (ADS)
Pavluchenko, Sergey A.; Toporensky, Alexey
2018-05-01
In this paper we address two important issues which could affect reaching the exponential and Kasner asymptotes in Einstein-Gauss-Bonnet cosmologies: spatial curvature and anisotropy in both three- and extra-dimensional subspaces. In the first part of the paper we consider the cosmological evolution of spaces that are the product of two isotropic and spatially curved subspaces. It is demonstrated that the dynamics in D=2 (the number of extra dimensions) and D ≥ 3 is different. It was already known that for the Λ-term case there is a regime with "stabilization" of extra dimensions, where the expansion rate of the three-dimensional subspace as well as the scale factor (the "size") associated with extra dimensions reaches a constant value. This regime is achieved if the curvature of the extra dimensions is negative. We demonstrate that it takes place only if the number of extra dimensions is D ≥ 3. In the second part of the paper we study the influence of the initial anisotropy. Our study reveals that the transition from the Gauss-Bonnet Kasner regime to anisotropic exponential expansion (with three dimensions expanding and the extra dimensions contracting) is stable with respect to breaking the symmetry within both three- and extra-dimensional subspaces. However, the details of the dynamics in D=2 and D ≥ 3 are different. Combining the two described effects allows us to construct a scenario in D ≥ 3, where isotropization of outer and inner subspaces is reached dynamically from rather general anisotropic initial conditions.
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the estimated data covariance matrix, and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results. PMID:26694385
Polarimetric subspace target detector for SAR data based on the Huynen dihedral model
NASA Astrophysics Data System (ADS)
Larson, Victor J.; Novak, Leslie M.
1995-06-01
Two new polarimetric subspace target detectors are developed based on a dihedral signal model for bright peaks within a spatially extended target signature. The first is a coherent dihedral target detector based on the exact Huynen model for a dihedral. The second is a noncoherent dihedral target detector based on the Huynen model with an extra unknown phase term. Expressions for these polarimetric subspace target detectors are developed for both additive Gaussian clutter and more general additive spherically invariant random vector clutter, including the K-distribution. For the case of Gaussian clutter with unknown clutter parameters, constant false alarm rate implementations of these polarimetric subspace target detectors are developed. The performance of these dihedral detectors is demonstrated with real millimeter-wave fully polarimetric SAR data. The coherent dihedral detector, which is developed with a more accurate description of a dihedral, offers no performance advantage over the noncoherent dihedral detector, which is computationally more attractive. The dihedral detectors do a better job of separating a set of tactical military targets from natural clutter compared to a detector that assumes no knowledge about the polarimetric structure of the target signal.
On spectral synthesis on element-wise compact Abelian groups
NASA Astrophysics Data System (ADS)
Platonov, S. S.
2015-08-01
Let G be an arbitrary locally compact Abelian group and let C(G) be the space of all continuous complex-valued functions on G. A closed linear subspace $\mathscr{H} \subseteq C(G)$ is referred to as an invariant subspace if it is invariant with respect to the shifts $\tau_y \colon f(x) \mapsto f(xy)$, $y \in G$. By definition, an invariant subspace $\mathscr{H} \subseteq C(G)$ admits strict spectral synthesis if $\mathscr{H}$ coincides with the closure in C(G) of the linear span of all characters of G belonging to $\mathscr{H}$. We say that strict spectral synthesis holds in the space C(G) on G if every invariant subspace $\mathscr{H} \subseteq C(G)$ admits strict spectral synthesis. An element x of a topological group G is said to be compact if x is contained in some compact subgroup of G. A group G is said to be element-wise compact if all elements of G are compact. The main result of the paper is the proof of the fact that strict spectral synthesis holds in C(G) for a locally compact Abelian group G if and only if G is element-wise compact. Bibliography: 14 titles.
Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network
NASA Astrophysics Data System (ADS)
Funabashi, Masatoshi
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with unsupervised learning rules that reinforce and weaken the neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the Chaotic Neural Network model (CNN) according to the structure of invariant subspaces. Irregular transition between two attractor ruins with positive maximum Lyapunov exponent was triggered by the blowout bifurcation of the attractor spaces, and was associated with a riddled basins structure. We then modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effect on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins, and produced novel attractors in the minimal higher-dimensional subspace. It also augmented the neuronal synchrony and established uniform modularity in the chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins, and brought a wide range of periodicity in the emerged attractors, possibly including strange attractors. Both learning rules selectively destroyed and preserved specific invariant subspaces, depending on the neuron synchrony of the subspace where the orbits are situated. The computational rationale of the autonomous learning is discussed from a connectionist perspective.
Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis
Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.; ...
2017-10-16
This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database that comprises videos with abandoned objects in a cluttered industrial scenario.
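The "low-rank projection plus sparse residue" step can be imitated with a classic robust-PCA-style alternation; the fixed thresholds below are crude illustrative choices, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise shrinkage toward zero; promotes sparsity in the residue."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_plus_sparse(D, n_iter=50, lam=None):
    """Toy split D ~ L + S: alternate a singular-value threshold (low-rank part L)
    with an entrywise soft threshold (sparse residue S)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    tau = 0.05 * np.linalg.norm(D, 2)   # crude fixed threshold, for illustration
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        S = soft_threshold(D - L, lam * tau)
    return L, S
```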
NASA Astrophysics Data System (ADS)
Hasan, Mohammed A.
1997-11-01
In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from the acoustic backscattered data and using the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas in signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets. Thus, accurate estimation of these time delays would help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data, and thus it is computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigen-structure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived using these subspaces. The effectiveness of the method is demonstrated on the problem of detection of multiple specular components in the acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time delay estimation for identifying multi-specular components and subsequent adaptive filtering processes are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates for the delays associated with the specular components. Simulation results on the simulated and real shallow water data are provided which show the promise of this new scheme for target detection in a heavily cluttered environment.
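Several entries in this collection rest on the eigen-structure of (lagged) covariance matrices; a compact MUSIC pseudospectrum sketch conveys the mechanics, with sinusoidal steering vectors and all parameters illustrative.

```python
import numpy as np

def music_spectrum(R, n_sources, freqs):
    """MUSIC on a covariance estimate R: split the eigenvectors into signal and
    noise subspaces, then scan candidate frequencies for steering vectors that
    are nearly orthogonal to the noise subspace (peaks of the pseudospectrum)."""
    n = R.shape[0]
    w, V = np.linalg.eigh(R)              # eigenvalues ascending
    En = V[:, :n - n_sources]             # noise subspace: smallest eigenvalues
    k = np.arange(n)
    spec = []
    for f in freqs:                       # f in cycles/sample
        a = np.exp(2j * np.pi * f * k) / np.sqrt(n)
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)
```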
Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao
2016-11-25
Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.
A sub-space greedy search method for efficient Bayesian Network inference.
Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing
2011-09-01
Bayesian network (BN) has been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is the computational cost, because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian network inference. Particularly, this method limits the greedy search space by only selecting gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieved results comparable with the standard greedy search method while saving ∼50% of the computational time. We believe that the sub-space search method can be widely used for efficient BN inference in systems biology.
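A sketch of the screening idea as we read it (names hypothetical): compute partial correlations from the inverse sample covariance and keep only the top-k pairs as candidate edges for the greedy search. This assumes more samples than genes so that the covariance matrix is invertible.

```python
import numpy as np

def top_pairs_by_partial_correlation(X, k):
    """Screen variable pairs before the greedy BN search. With precision matrix
    Omega = inv(cov(X)), the partial correlation of i and j given all the other
    variables is -Omega_ij / sqrt(Omega_ii * Omega_jj)."""
    Omega = np.linalg.inv(np.cov(X, rowvar=False))   # X: samples x variables
    d = np.sqrt(np.diag(Omega))
    pcor = -Omega / np.outer(d, d)
    np.fill_diagonal(pcor, 0.0)
    iu = np.triu_indices_from(pcor, k=1)             # each unordered pair once
    order = np.argsort(-np.abs(pcor[iu]))
    return [(int(iu[0][t]), int(iu[1][t])) for t in order[:k]]
```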
NASA Astrophysics Data System (ADS)
Li, Xiao-Tian; Yang, Xiao-Bao; Zhao, Yu-Jun
2017-04-01
We have developed an extended distance matrix approach to study the molecular geometric configuration through spectral decomposition. It is shown that the positions of all atoms in the eigen-space can be specified precisely by their eigen-coordinates, while the refined atomic eigen-subspace projection array adopted in our approach is demonstrated to be a competent invariant in structure comparison. Furthermore, a visual eigen-subspace projection function (EPF) is derived to characterize the surrounding configuration of an atom naturally. A complete set of atomic EPFs constitutes an intrinsic representation of molecular conformation, based on which the interatomic EPF distance and intermolecular EPF distance can be reasonably defined. Exemplified on a few cases, the intermolecular EPF distance proves both rational and efficient in structure recognition and comparison.
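The flavor of eigen-coordinates can be seen in the classical multidimensional-scaling construction below (the paper's extended distance matrix approach is a refinement of this); D is a plain interatomic distance matrix.

```python
import numpy as np

def eigen_coordinates(D):
    """Recover coordinates from a distance matrix via spectral decomposition:
    double-center the squared distances to get a Gram matrix, then each atom's
    row of V * sqrt(w) gives its coordinates in the eigen-space."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D**2) @ J                  # Gram matrix from squared distances
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1]                  # sort eigenvalues descending
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0.0))     # rows = atomic eigen-coordinates
```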
NASA Astrophysics Data System (ADS)
Lewandowski, Jerzy; Lin, Chun-Yen
2017-03-01
We explicitly solved the anomaly-free quantum constraints proposed by Tomlin and Varadarajan for the weak Euclidean model of canonical loop quantum gravity, in a large subspace of the model's kinematic Hilbert space, which is the space of the charge network states. In doing so, we first identified the subspace on which each of the constraints acts convergently, and then by explicitly evaluating such actions we found the complete set of the solutions in the identified subspace. We showed that the space of solutions consists of two classes of states, with the first class having a property that involves the condition known from the Minkowski theorem on polyhedra, and the second class satisfying a weaker form of the spatial diffeomorphism invariance.
Experimental creation of quantum Zeno subspaces by repeated multi-spin projections in diamond
NASA Astrophysics Data System (ADS)
Kalb, N.; Cramer, J.; Twitchen, D. J.; Markham, M.; Hanson, R.; Taminiau, T. H.
2016-10-01
Repeated observations inhibit the coherent evolution of quantum states through the quantum Zeno effect. In multi-qubit systems this effect provides opportunities to control complex quantum states. Here, we experimentally demonstrate that repeatedly projecting joint observables of multiple spins creates quantum Zeno subspaces and simultaneously suppresses the dephasing caused by a quasi-static environment. We encode up to two logical qubits in these subspaces and show that the enhancement of the dephasing time with increasing number of projections follows a scaling law that is independent of the number of spins involved. These results provide experimental insight into the interplay between frequent multi-spin measurements and slowly varying noise and pave the way for tailoring the dynamics of multi-qubit systems through repeated projections.
Projection methods for the numerical solution of Markov chain models
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
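A minimal instance of the projection principle for the Markov chain problem: build a Krylov basis with Arnoldi, solve the small projected eigenproblem for the eigenvalue nearest 1, and lift the eigenvector back. Dense numpy, column-stochastic P assumed, illustrative only.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Orthonormal V spanning span{v, Av, ..., A^(m-1)v} and the Hessenberg H = V^T A V."""
    n = v0.size
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # invariant subspace found early
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

def stationary_by_projection(P, m=30):
    """Approximate the stationary pi of a column-stochastic P (P pi = pi) from the
    projected m x m eigenproblem H y = y, lifted back as pi ~ V y."""
    V, H = arnoldi(P, np.ones(P.shape[0]), m)
    w, Y = np.linalg.eig(H)
    y = np.real(Y[:, np.argmin(np.abs(w - 1.0))])
    pi = np.abs(V @ y)
    return pi / pi.sum()
```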
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea of this proposed work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map the low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.
Quantum repeaters based on trapped ions with decoherence-free subspace encoding
NASA Astrophysics Data System (ADS)
Zwerger, M.; Lanyon, B. P.; Northup, T. E.; Muschik, C. A.; Dür, W.; Sangouard, N.
2017-12-01
Quantum repeaters provide an efficient solution to distribute Bell pairs over arbitrarily long distances. While scalable architectures are demanding regarding the number of qubits that need to be controlled, here we present a quantum repeater scheme aiming to extend the range of present day quantum communications that could be implemented in the near future with trapped ions in cavities. We focus on an architecture where ion-photon entangled states are created locally and subsequently processed with linear optics to create elementary links of ion-ion entangled states. These links are then used to distribute entangled pairs over long distances using successive entanglement swapping operations performed using deterministic ion-ion gates. We show how this architecture can be implemented while encoding the qubits in a decoherence-free subspace to protect them against collective dephasing. This results in a protocol that can be used to violate a Bell inequality over distances of about 800 km assuming state-of-the-art parameters. We discuss how this could be improved to several thousand kilometres in future setups.
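The protective property of the decoherence-free subspace used here is easy to verify numerically: under collective dephasing exp(-i*phi*(Z1 + Z2)/2), any state in span{|01>, |10>} acquires only a global phase. A small check (ours):

```python
import numpy as np

# Collective dephasing on two qubits: U = exp(-i*phi*(Z1 + Z2)/2), diagonal in
# the computational basis |00>, |01>, |10>, |11>.
Z = np.array([1.0, -1.0])
ztot = np.add.outer(Z, Z).ravel()        # eigenvalues of Z1 + Z2: [2, 0, 0, -2]
U = lambda phi: np.diag(np.exp(-1j * phi * ztot / 2))

# Logical qubit encoded in the zero-eigenvalue (decoherence-free) subspace.
logical = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)
print(abs(logical.conj() @ U(0.7) @ logical))  # 1.0: the encoded state is untouched
```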
Gönen, Mehmet
2014-01-01
Coupled training of dimensionality reduction and classification has been proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862
Constraint-Free Theories of Gravitation
NASA Technical Reports Server (NTRS)
Estabrook, Frank B.; Robinson, R. Steve; Wahlquist, Hugo D.
1998-01-01
Lovelock actions (more precisely, extended Gauss-Bonnet forms), when varied as Cartan forms on subspaces of higher dimensional flat Riemannian manifolds, generate well-set, causal exterior differential systems. In particular, the Einstein-Hilbert action 4-form, varied on a 4-dimensional subspace of E_10, yields a well-set generalized theory of gravity having no constraints. Ricci-flat solutions are selected by initial conditions on a bounding 3-space.
NASA Astrophysics Data System (ADS)
Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin
2016-12-01
The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture which can realize wideband receiving with a small amount of equipment. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low-probability-of-intercept signal with wide bandwidth. The NYFR is an effective architecture to intercept the LFM/BPSK signal, and the LFM/BPSK signal intercepted by the NYFR acquires an additional local oscillator modulation. A parameter estimation algorithm for the NYFR output signal is proposed. Using the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the output self-characteristic, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, matching code and subspace methods are employed to estimate the phase change points and code length. Compared with the existing methods, the proposed algorithm has better performance. It also has no need to construct a multi-channel structure, which means the computational complexity for the NZ index estimation is small. The simulation results demonstrate the efficacy of the proposed algorithm.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
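A schematic of the hybrid estimator as described, under our reading: each ensemble member contributes an analytic conditional Gaussian in the high-dimensional subspace times a Gaussian kernel centered on its sample in the low-dimensional complement. The bandwidth and all inputs are placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def hybrid_density(x_hi, x_lo, cond_means, cond_covs, lo_samples, bw=0.3):
    """Joint PDF estimate p(x_hi, x_lo) ~ (1/N) sum_i N(x_hi; mu_i, Sigma_i) *
    K_bw(x_lo - x_lo_i): a parametric conditional Gaussian per ensemble member
    in the high-dimensional subspace, times a kernel on that member's sample in
    the low-dimensional complement. x_hi, x_lo are 1-D arrays."""
    d_lo = len(x_lo)
    total = 0.0
    for mu, Sig, s in zip(cond_means, cond_covs, lo_samples):
        g_hi = multivariate_normal.pdf(x_hi, mean=mu, cov=Sig)
        g_lo = multivariate_normal.pdf(x_lo, mean=s, cov=bw**2 * np.eye(d_lo))
        total += g_hi * g_lo
    return total / len(cond_means)
```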
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E
2018-06-12
We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
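The inner Chebyshev filtering step is short enough to spell out; below is the textbook three-term recurrence (production codes add scaling for numerical stability), with [a, b] bracketing the part of the spectrum to damp so that the occupied end is amplified.

```python
import numpy as np

def chebyshev_filter(H, X, m, a, b):
    """Apply the degree-m Chebyshev polynomial T_m((H - c*I)/e) to the block of
    vectors X, where c = (a+b)/2 and e = (b-a)/2. Eigencomponents inside [a, b]
    stay bounded while those below a grow, enriching X in the occupied subspace."""
    e = (b - a) / 2.0
    c = (b + a) / 2.0
    Y = (H @ X - c * X) / e          # degree-1 term
    X_prev = X
    for _ in range(2, m + 1):        # three-term Chebyshev recurrence
        Y_new = 2.0 * (H @ Y - c * Y) / e - X_prev
        X_prev, Y = Y, Y_new
    return Y
```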
NASA Astrophysics Data System (ADS)
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Zi, Yanyang; Yan, Ruqiang
2017-09-01
The gearbox of a wind turbine (WT) has the dominant failure rate and the highest downtime loss among all WT subsystems. Thus, gearbox health assessment for maintenance cost reduction is of paramount importance. The concurrence of multiple faults in gearbox components is a common phenomenon owing to the fault induction mechanism. This problem should be considered before planning to replace the components of the WT gearbox. Therefore, the key fault patterns should be reliably identified from noisy observation data for the development of an effective maintenance strategy. However, most of the existing studies focusing on multiple fault diagnosis suffer from inappropriate division of the fault information in order to satisfy various rigorous decomposition principles or statistical assumptions, such as the smooth envelope principle of ensemble empirical mode decomposition and the mutual independence assumption of independent component analysis. Thus, this paper presents a joint subspace learning-based multiple fault detection (JSL-MFD) technique to construct different subspaces adaptively for different fault patterns. Its main advantage is its capability to learn multiple fault subspaces directly from the observation signal itself. It can also sparsely concentrate the feature information into a few dominant subspace coefficients. Furthermore, it can eliminate noise by simply performing coefficient shrinkage operations. Consequently, multiple fault patterns are reliably identified by utilizing the maximum fault information criterion. The superiority of JSL-MFD in multiple fault separation and detection is comprehensively investigated and verified by the analysis of a data set from a 750 kW WT gearbox. Results show that JSL-MFD is superior to a state-of-the-art technique in detecting hidden fault patterns and enhancing detection accuracy.
2012-01-01
Background Despite computational challenges, elucidating conformations that a protein system assumes under physiologic conditions for the purpose of biological activity is a central problem in computational structural biology. While these conformations are associated with low energies in the energy surface that underlies the protein conformational space, few existing conformational search algorithms focus on explicitly sampling low-energy local minima in the protein energy surface. Methods This work proposes a novel probabilistic search framework, PLOW, that explicitly samples low-energy local minima in the protein energy surface. The framework combines algorithmic ingredients from evolutionary computation and computational structural biology to effectively explore the subspace of local minima. A greedy local search maps a conformation sampled in conformational space to a nearby local minimum. A perturbation move jumps out of a local minimum to obtain a new starting conformation for the greedy local search. The process repeats in an iterative fashion, resulting in a trajectory-based exploration of the subspace of local minima. Results and conclusions The analysis of PLOW's performance shows that, by navigating only the subspace of local minima, PLOW is able to sample conformations near a protein's native structure, either more effectively than or as well as state-of-the-art methods that focus on reproducing the native structure for a protein system. Analysis of the actual subspace of local minima shows that PLOW samples this subspace more effectively than a naive sampling approach. Additional theoretical analysis reveals that the perturbation function employed by PLOW is key to its ability to sample a diverse set of low-energy conformations. This analysis also suggests directions for further research and novel applications for the proposed framework. PMID:22759582
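Structurally, the framework is akin to iterated local search (basin hopping); a skeleton with placeholder callables `energy`, `local_min` (the greedy map to a nearby minimum) and `perturb` (the jump out of it):

```python
def plow_like_search(energy, local_min, perturb, x0, n_iter=100):
    """Trajectory-based exploration of the subspace of local minima: map the
    current point to a nearby minimum, perturb to escape it, repeat, and keep
    every minimum visited along the way."""
    x = local_min(x0)
    minima = [x]
    for _ in range(n_iter):
        x = local_min(perturb(x))   # one perturbation + greedy descent step
        minima.append(x)
    return min(minima, key=energy), minima
```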
Application of Earthquake Subspace Detectors at Kilauea and Mauna Loa Volcanoes, Hawai`i
NASA Astrophysics Data System (ADS)
Okubo, P.; Benz, H.; Yeck, W.
2016-12-01
Recent studies have demonstrated the capabilities of earthquake subspace detectors for detailed cataloging and tracking of seismicity in a number of regions and settings. We are exploring the application of subspace detectors at the United States Geological Survey's Hawaiian Volcano Observatory (HVO) to analyze seismicity at Kilauea and Mauna Loa volcanoes. Elevated levels of microseismicity and occasional swarms of earthquakes associated with active volcanism here present cataloging challenges due to the sheer numbers of earthquakes and an intrinsically low signal-to-noise environment featuring oceanic microseism and volcanic tremor in the ambient seismic background. With high-quality continuous recording of seismic data at HVO, we apply subspace detectors (Harris and Dodge, 2011, Bull. Seismol. Soc. Am., doi: 10.1785/0120100103) during intervals of noteworthy seismicity. Waveform templates are drawn from Magnitude 2 and larger earthquakes within clusters of earthquakes cataloged in the HVO seismic database. At Kilauea, we focus on seismic swarms in the summit caldera region where, despite continuing eruptions from vents in the summit region and in the east rift zone, geodetic measurements reflect a relatively inflated volcanic state. We also focus on seismicity beneath and adjacent to Mauna Loa's summit caldera that appears to be associated with geodetic expressions of gradual volcanic inflation, and where precursory seismicity clustered prior to both Mauna Loa's most recent eruptions in 1975 and 1984. We recover several times more earthquakes with the subspace detectors - down to roughly 2 magnitude units below the templates, based on relative amplitudes - compared to the numbers of cataloged earthquakes. The increased numbers of detected earthquakes in these clusters, and the ability to associate and locate them, allow us to infer details of the spatial and temporal distributions and possible variations in stresses within these key regions of the volcanoes.
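For readers unfamiliar with subspace detectors of the Harris and Dodge type, the core computation is a sliding projection of the continuous record onto an orthonormal basis built from template waveforms, with the detection statistic being the fraction of window energy captured by that basis. A minimal sketch under those assumptions (the basis rank `d` and array shapes are illustrative):

```python
import numpy as np

def build_subspace(templates, d):
    """Orthonormal waveform basis from aligned template events (one per row),
    keeping the d leading left singular vectors."""
    U, _, _ = np.linalg.svd(np.asarray(templates, dtype=float).T,
                            full_matrices=False)
    return U[:, :d]                              # (n_samples, d)

def detection_statistic(record, U):
    """Sliding fraction of window energy captured by the template subspace;
    values near 1 flag waveforms resembling the templates."""
    n = U.shape[0]
    stats = np.zeros(len(record) - n + 1)
    for t in range(len(stats)):
        w = record[t:t + n]
        e = w @ w
        if e > 0.0:
            proj = U.T @ w
            stats[t] = (proj @ proj) / e
    return stats
```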
Direct lifts of coupled cell networks
NASA Astrophysics Data System (ADS)
Dias, A. P. S.; Moreira, C. S.
2018-04-01
In networks of dynamical systems, there are spaces defined in terms of equalities of cell coordinates which are flow-invariant under any dynamical system that has a form consistent with the given underlying network structure—the network synchrony subspaces. Given a network and one of its synchrony subspaces, any system with a form consistent with the network, restricted to the synchrony subspace, defines a new system which is consistent with a smaller network, called the quotient network of the original network by the synchrony subspace. Moreover, any system associated with the quotient can be interpreted as the restriction to the synchrony subspace of a system associated with the original network. We call the larger network a lift of the smaller network, and a lift can be interpreted as a result of the cellular splitting of the smaller network. In this paper, we address the question of the uniqueness in this lifting process in terms of the networks’ topologies. A lift G of a given network Q is said to be direct when there are no intermediate lifts of Q between them. We provide necessary and sufficient conditions for a lift of a general network to be direct. Our results characterize direct lifts using the subnetworks of all splitting cells of Q and of all split cells of G. We show that G is a direct lift of Q if and only if either the split subnetwork is a direct lift or consists of two copies of the splitting subnetwork. These results are then applied to the class of regular uniform networks and to the special classes of ring networks and acyclic networks. We also illustrate that one of the applications of our results is to the lifting bifurcation problem.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.
2018-06-01
Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.
Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng
2018-05-14
In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, though fewer than the upper bound indicated by the minimum redundancy array (MRA). We further apply this cascade array to multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, which is based on a reduced-dimensional weighted subspace fitting technique. The algorithm automatically pairs the estimated angles and is computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.
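The extra DOFs come from the difference co-array, i.e. the set of unique pairwise lags between sensor positions. A small sketch of counting those DOFs and the contiguous segment usable by subspace methods; the positions below are a hypothetical cascade-style geometry, not the one proposed in the paper:

```python
import numpy as np

def difference_coarray(positions):
    """Unique lags of a linear array with integer sensor positions
    (in units of the fundamental spacing)."""
    p = np.asarray(positions)
    return np.unique((p[:, None] - p[None, :]).ravel())

def dofs(positions):
    """DOFs = number of unique lags; also the length of the longest
    contiguous ULA segment of the co-array centered on lag zero."""
    lags = difference_coarray(positions)
    k = 0
    while (k + 1) in lags:        # lags are symmetric about zero
        k += 1
    return len(lags), 2 * k + 1

# Hypothetical geometry: a dense ULA head joined to a sparse tail.
print(dofs([0, 1, 2, 3, 7, 11, 15]))
```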
Structured sparse linear graph embedding.
Wang, Haixian
2012-03-01
Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the learning of the projection bases as a regression-type optimization problem, and then applies structured sparsity regularization to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structural information about the data. The SSLGE technique provides a unified framework for discovering structured sparse subspaces. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.
A periodic spatio-spectral filter for event-related potentials.
Ghaderi, Foad; Kim, Su Kyoung; Kirchner, Elsa Andrea
2016-12-01
With respect to single-trial detection of event-related potentials (ERPs), spatial and spectral filters are two of the most commonly used pre-processing techniques for signal enhancement. Spatial filters reduce the dimensionality of the data while suppressing the noise contribution, and spectral filters attenuate frequency components that most likely belong to the noise subspace. However, the frequency spectrum of ERPs overlaps with that of the ongoing electroencephalogram (EEG) and of different types of artifacts, so proper selection of the spectral filter cutoffs is not a trivial task. In this work, we developed a supervised method to estimate the spatial and finite impulse response (FIR) spectral filters simultaneously. We evaluated the performance of the method on offline single-trial classification of ERPs in datasets recorded during an oddball paradigm. The proposed spatio-spectral filter improved the overall single-trial classification performance by almost 9% on average compared with the case in which no spatial filters were used. We also analyzed the effects of different spectral filter lengths and of the number of channels retained after spatial filtering. Copyright © 2016. Published by Elsevier Ltd.
Faceting for direction-dependent spectral deconvolution
NASA Astrophysics Data System (ADS)
Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.
2018-04-01
The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidths. Synthesizing the deepest images enabled by the high dynamic range of these instruments requires taking into account the direction-dependent Jones matrices while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image-plane faceting that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme and discuss the various effects that need to be taken into account to solve the deconvolution problem (image-plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms, based on hybrid matching pursuit and sub-space optimisation respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.
NASA Technical Reports Server (NTRS)
Bykhovskiy, E. B.; Smirnov, N. V.
1983-01-01
The Hilbert space L2(Ω) of vector functions is studied. A decomposition of L2(Ω) into orthogonal subspaces is discussed, and the properties of the operators projecting onto these subspaces are examined from the standpoint of preserving the differential properties of the vectors being projected.
Highly Entangled, Non-random Subspaces of Tensor Products from Quantum Groups
NASA Astrophysics Data System (ADS)
Brannan, Michael; Collins, Benoît
2018-03-01
In this paper we describe a class of highly entangled subspaces of a tensor product of finite-dimensional Hilbert spaces arising from the representation theory of free orthogonal quantum groups. We determine their largest singular values and obtain lower bounds for the minimum output entropy of the corresponding quantum channels. An application to the construction of d-positive maps on matrix algebras is also presented.
Quantum error suppression with commuting Hamiltonians: two local is too local.
Marvian, Iman; Lidar, Daniel A
2014-12-31
We consider error suppression schemes in which quantum information is encoded into the ground subspace of a Hamiltonian comprising a sum of commuting terms. Since such Hamiltonians are gapped, they are considered natural candidates for protection of quantum information and topological or adiabatic quantum computation. However, we prove that they cannot be used to this end in the two-local case. By making the favorable assumption that the gap is infinite, we show that single-site perturbations can generate a degeneracy splitting in the ground subspace of this type of Hamiltonian which is of the same order as the magnitude of the perturbation, and is independent of the number of interacting sites and their Hilbert space dimensions, just as in the absence of the protecting Hamiltonian. This splitting results in decoherence of the ground subspace, and we demonstrate that for natural noise models the coherence time is proportional to the inverse of the degeneracy splitting. Our proof involves a new version of the no-hiding theorem which shows that quantum information cannot be approximately hidden in the correlations between two quantum systems. The main reason that two-local commuting Hamiltonians cannot be used for quantum error suppression is that their ground subspaces have only short-range (two-body) entanglement.
Zhang, Zhao; Yan, Shuicheng; Zhao, Mingbo
2014-05-01
Latent Low-Rank Representation (LatLRR) delivers robust and promising results for subspace recovery and feature extraction through mining the so-called hidden effects, but the locality of both similar principal and salient features cannot be preserved in the optimizations. To solve this issue and achieve enhanced performance, a boosted version of LatLRR, referred to as Regularized Low-Rank Representation (rLRR), is proposed by explicitly including an appropriate Laplacian regularization that can maximally preserve the similarity among local features. Resembling LatLRR, rLRR decomposes a given data matrix in two directions by seeking a pair of low-rank matrices, but the similarities of principal and salient features are effectively preserved by rLRR. As a result, correlated features are well grouped and the robustness of the representations is also enhanced. Based on the bi-directional low-rank codes output by rLRR, an unsupervised subspace learning framework termed Low-rank Similarity Preserving Projections (LSPP) is also derived for feature learning. The supervised extension of LSPP is also discussed for discriminant subspace learning. The validity of rLRR is examined by robust representation and decomposition of real images. Results demonstrate the superiority of our rLRR and LSPP in comparison to related state-of-the-art algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for the error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has a substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
NASA Astrophysics Data System (ADS)
Avdyushev, Victor A.
2017-12-01
Orbit determination from a small sample of observations over a very short observed orbital arc is a strongly nonlinear inverse problem. In such problems, evaluating the orbital uncertainty due to random observation errors is greatly complicated, since the linear estimations conventionally used are no longer acceptable for describing the uncertainty even as a rough approximation. Nevertheless, if an inverse problem is weakly intrinsically nonlinear, one can resort to the so-called method of disturbed observations (aka observational Monte Carlo). Previously, we showed that the weaker the intrinsic nonlinearity, the more efficient the method, i.e. the more accurately it stochastically simulates the orbital uncertainty, while it is strictly exact only when the problem is intrinsically linear. Moreover, as we ascertained experimentally, its efficiency was found to be higher than that of other stochastic methods widely applied in practice. In the present paper we investigate the intrinsic nonlinearity in complicated inverse problems of Celestial Mechanics, where orbits are determined from poorly informative samples of observations, as typically occurs for recently discovered asteroids. To inquire into the question, we introduce an index of intrinsic nonlinearity. In asteroid problems it shows that the intrinsic nonlinearity can be strong enough to appreciably affect probabilistic estimates, especially for the very short observed orbital arcs over which the asteroids travel for about a hundredth of their orbital periods or less. As is known from regression analysis, the source of intrinsic nonlinearity is the non-flatness of the estimation subspace specified by a dynamical model in the observation space. Our numerical results indicate that when determining asteroid orbits it is actually very slight. However, in the parametric space the effect of intrinsic nonlinearity is exaggerated, mainly by the ill-conditioning of the inverse problem. Even so, as for the method of disturbed observations, we conclude that it should still be entirely acceptable in practice for adequately describing the orbital uncertainty since, from a geometrical point of view, the efficiency of the method depends directly only on the non-flatness of the estimation subspace, and it increases as the non-flatness decreases.
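The method of disturbed observations itself is compact to sketch: fit the model to the data, then repeatedly re-fit against noise-perturbed copies of the fitted observations and read the uncertainty off the resulting cloud of estimates. A toy illustration with a hypothetical two-parameter model standing in for orbit propagation over a short arc:

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    """Toy nonlinear observation model (stand-in for orbit propagation)."""
    return p[0] * np.sin(p[1] * t)

def fit(t, obs, p0):
    return least_squares(lambda p: model(p, t) - obs, p0).x

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.3, 12)                 # a "short observed arc"
p_true, sigma = np.array([1.0, 5.0]), 0.02
obs = model(p_true, t) + sigma * rng.standard_normal(t.size)
p_hat = fit(t, obs, p0=np.array([0.8, 4.0]))

# Disturbed observations: re-fit against noisy replicas of the fitted data;
# the spread of the estimates stochastically simulates the uncertainty.
cloud = np.array([
    fit(t, model(p_hat, t) + sigma * rng.standard_normal(t.size), p_hat)
    for _ in range(500)
])
print(cloud.mean(axis=0), cloud.std(axis=0))
```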
Functional Connectivity’s Degenerate View of Brain Computation
Giron, Alain; Rudrauf, David
2016-01-01
Brain computation relies on effective interactions between ensembles of neurons. In neuroimaging, measures of functional connectivity (FC) aim at statistically quantifying such interactions, often to study normal or pathological cognition. Their capacity to reflect a meaningful variety of patterns as expected from neural computation in relation to cognitive processes remains debated. The relative weights of time-varying local neurophysiological dynamics versus static structural connectivity (SC) in the generation of FC as measured remains unsettled. Empirical evidence features mixed results: from little to significant FC variability and correlation with cognitive functions, within and between participants. We used a unified approach combining multivariate analysis, bootstrap and computational modeling to characterize the potential variety of patterns of FC and SC both qualitatively and quantitatively. Empirical data and simulations from generative models with different dynamical behaviors demonstrated, largely irrespective of FC metrics, that a linear subspace with dimension one or two could explain much of the variability across patterns of FC. On the contrary, the variability across BOLD time-courses could not be reduced to such a small subspace. FC appeared to strongly reflect SC and to be partly governed by a Gaussian process. The main differences between simulated and empirical data related to limitations of DWI-based SC estimation (and SC itself could then be estimated from FC). Above and beyond the limited dynamical range of the BOLD signal itself, measures of FC may offer a degenerate representation of brain interactions, with limited access to the underlying complexity. They feature an invariant common core, reflecting the channel capacity of the network as conditioned by SC, with a limited, though perhaps meaningful residual variability. PMID:27736900
Online estimation of internal stack temperatures in solid oxide fuel cell power generating units
NASA Astrophysics Data System (ADS)
Dolenc, B.; Vrečko, D.; Juričić, Ɖ.; Pohjoranta, A.; Pianese, C.
2016-12-01
Thermal stress is one of the main factors affecting the degradation rate of solid oxide fuel cell (SOFC) stacks. In order to mitigate the possibility of fatal thermal stress, stack temperatures and the corresponding thermal gradients need to be continuously controlled during operation. Due to the fact that in future commercial applications the use of temperature sensors embedded within the stack is impractical, the use of estimators appears to be a viable option. In this paper we present an efficient and consistent approach to data-driven design of the estimator for maximum and minimum stack temperatures intended (i) to be of high precision, (ii) to be simple to implement on conventional platforms like programmable logic controllers, and (iii) to maintain reliability in spite of degradation processes. By careful application of subspace identification, supported by physical arguments, we derive a simple estimator structure capable of producing estimates with 3% error irrespective of the evolving stack degradation. The degradation drift is handled without any explicit modelling. The approach is experimentally validated on a 10 kW SOFC system.
The AFLOW Standard for High-throughput Materials Science Calculations
2015-01-01
[Fragmentary record; author affiliation residue removed.] ... inversion in the iterative subspace (RMM-DIIS) [10]. Of the two, DBS is known to be the slower and more stable option. Additionally, the subspace ... RMM-DIIS steps as needed to fulfill the dEelec condition. Later determinations of system forces are performed by a similar sequence, but only a single ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yodgorov, G R; Ismail, F; Muminov, Z I
2014-12-31
We consider a certain model operator acting in a subspace of a fermionic Fock space. We obtain an analogue of Faddeev's equation. We describe the location of the essential spectrum of the operator under consideration and show that the essential spectrum consists of the union of at most four segments. Bibliography: 19 titles.
Visual tracking based on the sparse representation of the PCA subspace
NASA Astrophysics Data System (ADS)
Chen, Dian-bing; Zhu, Ming; Wang, Hui-li
2017-09-01
We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace; we then employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term on the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. Then we implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum over the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
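The iterative method for the residual and the coefficients can be read as alternating a closed-form ridge update with a soft-thresholding step. A minimal sketch under that reading; the weights, iteration count, and update order are assumptions, not the paper's exact scheme:

```python
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def collaborative_represent(x, U, lam1=0.1, lam2=0.05, n_iters=30):
    """Alternating minimization of
        ||x - U c - e||^2 + lam2 ||c||^2 + lam1 ||e||_1
    with an orthonormal PCA basis U: ridge step for the coefficients c,
    soft-threshold for the sparse residual e (occlusions, outliers)."""
    e = np.zeros_like(x)
    for _ in range(n_iters):
        c = U.T @ (x - e) / (1.0 + lam2)      # closed-form ridge update
        e = soft(x - U @ c, lam1 / 2.0)       # sparse residual update
    return c, e
```

In a particle-filter framework, one natural choice is to score each candidate patch by its reconstruction error, though the paper's exact likelihood is not specified here.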
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network with part of the retinal circuit is presented as well.
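The MH and MHO rules belong to the family of Hebbian principal-component learners whose textbook representative is Oja's rule. As a point of reference, a sketch of plain Oja's rule (not the authors' exact modulated update):

```python
import numpy as np

def oja_first_pc(X, eta=0.01, n_epochs=20, seed=0):
    """Hebbian learning with Oja's normalization: for suitable eta, w
    converges to the first principal component of zero-mean data X
    (rows are samples)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                        # neuron output (Hebbian term)
            w += eta * y * (x - y * w)       # local update; -y^2 w keeps ||w|| ~ 1
    return w / np.linalg.norm(w)

# Toy check: recover the dominant direction of anisotropic random data.
rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 5)) @ np.diag([3.0, 1.0, 0.5, 0.3, 0.1])
w = oja_first_pc(X - X.mean(axis=0))
```

The locality emphasized in the abstract is visible here: each weight update uses only the input, the output and the weight itself.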
Glove-based approach to online signature verification.
Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A
2008-06-01
Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel on-line signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors that capture the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove as an effective high-bandwidth data entry device for signature verification is presented. This SVD-based signature verification technique is tested, and its performance is shown to recognize forged signatures with a false acceptance rate of less than 1.2%.
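Comparing subspaces by their angles is a standard SVD computation: orthonormalize each data matrix, then take the singular values of the product of the two bases. A minimal sketch under the paper's setup, with hypothetical glove dimensions:

```python
import numpy as np

def principal_subspace(A, r):
    """r leading left singular vectors of a data matrix (columns = samples)."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

def principal_angles(U1, U2):
    """Principal angles (radians) between the column spaces of two
    orthonormal bases."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Toy usage: small angles suggest a genuine signature, large ones a forgery.
rng = np.random.default_rng(0)
ref = rng.standard_normal((22, 200))       # e.g. 22 glove channels, 200 samples
test = ref + 0.05 * rng.standard_normal(ref.shape)
angles = principal_angles(principal_subspace(ref, 3), principal_subspace(test, 3))
```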
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a composite subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a of BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
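For context, the two-class CSP that ACPC generalizes reduces to a generalized eigendecomposition of trial-averaged covariance matrices. A standard sketch of that building block; the trace normalization and filter-pair count are common conventions, not specifics of this paper:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Two-class CSP. Trials have shape (n_trials, n_channels, n_samples).
    Solves C_a w = lambda (C_a + C_b) w; the extreme eigenvectors give the
    spatial filters with the largest variance ratio between the classes."""
    def mean_cov(trials):
        return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)          # generalized eigendecomposition
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T                 # (2*n_pairs, n_channels)

def log_var_features(W, X):
    """Classic CSP features: log-variance of the spatially filtered trial."""
    v = (W @ X).var(axis=1)
    return np.log(v / v.sum())
```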
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has been reported to date that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing the dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual, measured output are used to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI algorithm is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) with line search. For comparison, several benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
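The Hankel construction at the heart of this pipeline is easy to sketch: stack shifted copies of the output sequence and inspect the singular values, whose numerical rank indicates the model order. A toy illustration (the test signal and rank threshold are illustrative; this is not the authors' NN or Tabu Search code):

```python
import numpy as np

def block_hankel(y, rows):
    """Hankel matrix with the given number of block rows, built from an
    output sequence y of shape (n_samples,) or (n_samples, n_outputs)."""
    y = np.asarray(y, dtype=float)
    if y.ndim == 1:
        y = y[:, None]
    cols = y.shape[0] - rows + 1
    return np.vstack([y[i:i + cols].T for i in range(rows)])

# A noise-free damped sinusoid is a second-order LTI response, so the
# Hankel matrix should have numerical rank 2.
t = np.arange(200)
y = np.exp(-0.01 * t) * np.sin(0.3 * t)
s = np.linalg.svd(block_hankel(y, rows=20), compute_uv=False)
order = int(np.sum(s > 1e-8 * s[0]))       # -> 2
```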
Velikina, Julia V.; Samsonov, Alexey A.
2014-01-01
Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboredo, Fernando A.; Kim, Jeongnim
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
About the Subdivision of Indoor Spaces in Indoorgml
NASA Astrophysics Data System (ADS)
Diakité, A. A.; Zlatanova, S.; Li, K.-J.
2017-10-01
Boosted by the dynamic urbanization of cities, indoor environments are becoming more and more complex in order to host people properly. Since most of our time is spent inside buildings, the need for GIS tools to assist daily activities that can become tedious, such as indoor navigation or facility management, has become increasingly urgent. In that perspective, the IndoorGML standard aims to address the gaps left by other standards regarding spatial modelling for indoor navigation. It includes several concepts such as the organization of spaces into cells, along with their network representation and the possibility to represent multiple connected layers. However, being at its first stage, several concepts of the standard could be improved. One of these is cell subspacing, which is not sufficiently discussed in the current version of the standard. In this paper, we explore all the aspects involved in the subdivision process, from the identification of navigable and non-navigable space cells to the generation of a navigation graph. We propose several criteria on which indoor sub-spacing can rely to be performed automatically, and illustrate them on a 3D indoor model.
NASA Astrophysics Data System (ADS)
Borgelt, Christian
In clustering we often face the situation that only a subset of the available attributes is relevant for forming clusters, even though this may not be known beforehand. In such cases it is desirable to have a clustering algorithm that automatically weights attributes or even selects a proper subset. In this paper I study such an approach for fuzzy clustering, based on the idea of transferring an alternative to the fuzzifier (Klawonn and Höppner, What is fuzzy about fuzzy clustering? Understanding and improving the concept of the fuzzifier, In: Proc. 5th Int. Symp. on Intelligent Data Analysis, 254-264, Springer, Berlin, 2003) to attribute-weighting fuzzy clustering (Keller and Klawonn, Int J Uncertain Fuzziness Knowl Based Syst 8:735-746, 2000). In addition, by reformulating Gustafson-Kessel fuzzy clustering, a scheme for weighting and selecting principal axes can be obtained. While in Borgelt (Feature weighting and feature selection in fuzzy clustering, In: Proc. 17th IEEE Int. Conf. on Fuzzy Systems, IEEE Press, Piscataway, NJ, 2008) I already presented such an approach for a global selection of attributes and principal axes, this paper extends it to a cluster-specific selection, thus arriving at a fuzzy subspace clustering algorithm (Parsons, Haque, and Liu, 2004).
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
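A minimal sketch of the invariant-subspace idea: run the Lanczos recurrence from the initial state and stop when the residual vanishes, at which point the Krylov basis spans the smallest invariant subspace containing the dynamics. The marked-vertex Hamiltonian below is an illustrative choice for search on the complete graph, not the paper's exact normalization:

```python
import numpy as np

def krylov_invariant_subspace(H, v0, tol=1e-10, max_dim=None):
    """Grow a Krylov basis from v0 (Lanczos with full reorthogonalization);
    the iteration stops once the residual vanishes, i.e. when the basis
    spans an invariant subspace confining the dynamics."""
    max_dim = max_dim or H.shape[0]
    V = [v0 / np.linalg.norm(v0)]
    for _ in range(max_dim - 1):
        w = H @ V[-1]
        for v in V:                    # orthogonalize against the basis so far
            w -= (v @ w) * v
        beta = np.linalg.norm(w)
        if beta < tol:                 # invariant subspace found
            break
        V.append(w / beta)
    return np.array(V).T               # columns span the invariant subspace

# Spatial search on the complete graph K_N with one marked vertex.
N = 64
A = np.ones((N, N)) - np.eye(N)        # adjacency matrix
H = -A
H[0, 0] -= N                           # illustrative marked-vertex term
v0 = np.ones(N) / np.sqrt(N)           # uniform initial state
V = krylov_invariant_subspace(H, v0)
print(V.shape[1])                      # 2: dynamics confined to a 2D subspace
```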
Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace
NASA Astrophysics Data System (ADS)
Hou, Z.; Chen, Y.; Tan, K.; Du, P.
2018-04-01
Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods are proposed: the Unsupervised Nearest Regularized Subspace-based Anomaly Detector with Outlier Removal (UNRSORAD) and its local summation variant (LSUNRSORAD). Both are based on the concept that each background pixel can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, an approximation of each test pixel is obtained as a linear combination of the surrounding data. Since the existence of outliers in the dual window would affect detection accuracy, the proposed detectors remove outlier pixels that are significantly different from the majority of pixels. To make full use of the local spatial distribution information of the pixels neighboring the pixel under test, we adopt a local summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. Experimental results show that the proposed methods greatly improve detection accuracy compared with other traditional detection methods.
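Stripped of the outlier-removal and local-summation refinements, the underlying nearest-regularized-subspace score is a ridge regression of each test pixel on its outer-window neighbors, with the residual norm as the anomaly score. A simplified sketch under those assumptions (window sizes and the regularization weight are illustrative):

```python
import numpy as np

def dual_window_anomaly_map(cube, inner=3, outer=7, lam=1e-2):
    """cube: (rows, cols, bands). Each pixel is regressed on the spectra in
    its outer window, excluding the inner guard window; the residual norm is
    the anomaly score. Simplified: no outlier removal, no local summation."""
    r, c, b = cube.shape
    ho, hi = outer // 2, inner // 2
    mask = np.ones((outer, outer), dtype=bool)
    mask[ho - hi:ho + hi + 1, ho - hi:ho + hi + 1] = False    # guard window
    scores = np.zeros((r, c))
    for i in range(ho, r - ho):
        for j in range(ho, c - ho):
            block = cube[i - ho:i + ho + 1, j - ho:j + ho + 1].reshape(-1, b)
            X = block[mask.ravel()].T                 # (bands, n_neighbors)
            y = cube[i, j]
            w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
            scores[i, j] = np.linalg.norm(y - X @ w)  # background misfit
    return scores
```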
Quantification and characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wood, Christopher J.; Gambetta, Jay M.
2018-03-01
We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.
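Following the definitions sketched above, both rates can be computed for a toy qutrit gate by feeding in the maximally mixed state of each subspace and measuring the population that crosses over. A small sketch; the Hamiltonian, the spurious coupling strength, and the exact normalization conventions are assumptions:

```python
import numpy as np
from scipy.linalg import expm

# Qutrit: computational subspace {|0>, |1>}, leakage level |2>.
d1, d2 = 2, 1
P1 = np.diag([1.0, 1.0, 0.0])                 # computational projector
P2 = np.eye(3) - P1                           # leakage projector

# Toy gate: an X-type rotation on the qubit plus a weak spurious 1<->2 coupling.
eps = 0.05
H = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, eps],
              [0.0, eps, 0.0]])
U = expm(-1j * H * np.pi / 2)

rho1 = P1 / d1                                # maximally mixed computational state
rho2 = P2 / d2                                # maximally mixed leakage state
leakage = np.real(np.trace(P2 @ U @ rho1 @ U.conj().T))   # population escaping
seepage = np.real(np.trace(P1 @ U @ rho2 @ U.conj().T))   # population returning
print(leakage, seepage)
```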
Robust transmission of non-Gaussian entanglement over optical fibers
NASA Astrophysics Data System (ADS)
Biswas, Asoka; Lidar, Daniel A.
2006-12-01
We show how the entanglement in a wide range of continuous variable non-Gaussian states can be preserved against decoherence for long-range quantum communication through an optical fiber. We apply protection via decoherence-free subspaces and quantum dynamical decoupling to this end. The latter is implemented by inserting phase shifters at regular intervals Δ inside the fiber, where Δ is roughly the ratio of the speed of light in the fiber to the bath high-frequency cutoff. Detailed estimates of relevant parameters are provided using the boson-boson model of system-bath interaction for silica fibers and Δ is found to be on the order of a millimeter.
Ma, Junshui; Bayram, Sevinç; Tao, Peining; Svetnik, Vladimir
2011-03-15
After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce the ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods, and does not rely on any electrooculography (EOG) signals. Based on a concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components from decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results of the simulated datasets improve our understanding of the involved signal decomposition algorithms, and provide us with insights into the inconsistency regarding the performance of different methods in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion. Copyright © 2011 Elsevier B.V. All rights reserved.
Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.
Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal
2011-06-01
This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.
Tachyon condensation due to domain-wall annihilation in Bose-Einstein condensates.
Takeuchi, Hiromitsu; Kasamatsu, Kenichi; Tsubota, Makoto; Nitta, Muneto
2012-12-14
We show theoretically that a domain-wall annihilation in two-component Bose-Einstein condensates causes tachyon condensation accompanied by spontaneous symmetry breaking in a two-dimensional subspace. Three-dimensional vortex formation from domain-wall annihilations is considered a kink formation in subspace. Numerical experiments reveal that the subspatial dynamics obey the dynamic scaling law of phase-ordering kinetics. This model is experimentally feasible and provides insights into how the extra dimensions influence subspatial phase transition in higher-dimensional space.
NASA Astrophysics Data System (ADS)
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.
An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST
NASA Astrophysics Data System (ADS)
Hang, Xu; Jun, Zhao
2018-05-01
The adaptive angle-Doppler compensation method extracts the requisite information adaptively from the data itself, thus avoiding the performance degradation caused by inertial system errors. However, the method requires estimation and eigendecomposition of the sample covariance matrix, which has a high computational complexity and limits its real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the position of the spectral center of the maximum eigenvector of each range cell, so the computational burden of matrix estimation and eigen-decomposition is avoided; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar, and that its performance is similar to that of eigen-decomposition algorithms, while the computational load is clearly reduced and the method is easy to implement.
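For reference, the standard PAST recursion that this method builds on fits in a few lines: an RLS-style update of the subspace estimate driven by the projection-approximation error. The forgetting factor and random initialization below are illustrative choices:

```python
import numpy as np

def past_track(snapshots, r, beta=0.97, seed=0):
    """Track the r-dimensional dominant subspace of a stream of snapshot
    vectors (rows of `snapshots`) with the PAST recursion; returns the final
    subspace estimate W of shape (n_dims, r)."""
    rng = np.random.default_rng(seed)
    n = snapshots.shape[1]
    W = np.linalg.qr(rng.standard_normal((n, r)))[0]
    P = np.eye(r)                      # inverse correlation of the projections
    for x in snapshots:
        y = W.T @ x                    # project onto the current estimate
        h = P @ y
        g = h / (beta + y @ h)         # RLS-style gain
        P = (P - np.outer(g, h)) / beta
        e = x - W @ y                  # projection-approximation error
        W = W + np.outer(e, g)
    return W
```

No covariance matrix is formed and nothing is eigendecomposed, which is the source of the computational savings claimed above.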
Supersensitive ancilla-based adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Larson, Walker; Saleh, Bahaa E. A.
2017-10-01
The supersensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots—phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point to a less vulnerable phase value. Since this is not always feasible, we present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally supersensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.
Drug-target interaction prediction using ensemble learning and dimensionality reduction.
Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong
2017-10-01
Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
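The three-step framework (feature subspacing, dimensionality reduction, aggregation of homogeneous base learners) can be sketched with standard scikit-learn pieces. Here PCA stands in for the paper's three reduction methods and a decision-tree ensemble (EnsemDT-like) is shown; all parameter values are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def fit_ensemble(X, y, n_learners=30, subspace_frac=0.5, n_components=20, seed=0):
    """Feature subspacing -> PCA reduction -> homogeneous base learners."""
    rng = np.random.default_rng(seed)
    k = max(1, int(subspace_frac * X.shape[1]))
    ensemble = []
    for _ in range(n_learners):
        cols = rng.choice(X.shape[1], size=k, replace=False)  # inject diversity
        model = make_pipeline(PCA(n_components=min(n_components, k)),
                              DecisionTreeClassifier(random_state=0))
        model.fit(X[:, cols], y)
        ensemble.append((cols, model))
    return ensemble

def predict_scores(ensemble, X):
    """Aggregate base-learner scores: mean positive-class probability
    (assumes binary interaction labels)."""
    return np.mean([m.predict_proba(X[:, cols])[:, 1] for cols, m in ensemble],
                   axis=0)
```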
Information-theoretic limitations on approximate quantum cloning and broadcasting
NASA Astrophysics Data System (ADS)
Lemm, Marius; Wilde, Mark M.
2017-07-01
We prove quantitative limitations on any approximate simultaneous cloning or broadcasting of mixed states. The results are based on information-theoretic (entropic) considerations and generalize the well-known no-cloning and no-broadcasting theorems. We also observe and exploit the fact that the universal cloning machine on the symmetric subspace of n qudits and symmetrized partial trace channels are dual to each other. This duality manifests itself both in the algebraic sense of adjointness of quantum channels and in the operational sense that a universal cloning machine can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. The duality extends to give control of the performance of generalized universal quantum cloning machines (UQCMs) on subspaces more general than the symmetric subspace. This gives a way to quantify the usefulness of a priori information in the context of cloning. For example, we can control the performance of an antisymmetric analog of the UQCM in recovering from the loss of n -k fermionic particles.
Qudit-Basis Universal Quantum Computation Using χ^{(2)} Interactions.
Niu, Murphy Yuezhen; Chuang, Isaac L; Shapiro, Jeffrey H
2018-04-20
We prove that universal quantum computation can be realized, using only linear optics and χ^{(2)} (three-wave mixing) interactions, in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ^{(2)} Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ^{(2)} interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by χ^{(2)} interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
Word Spotting and Recognition with Embedded Attributes.
Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest
2014-12-01
This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
Hardware-efficient Bell state preparation using Quantum Zeno Dynamics in superconducting circuits
NASA Astrophysics Data System (ADS)
Flurin, Emmanuel; Blok, Machiel; Hacohen-Gourgy, Shay; Martin, Leigh S.; Livingston, William P.; Dove, Allison; Siddiqi, Irfan
By performing a continuous joint measurement on a two-qubit system, we restrict the qubit evolution to a chosen subspace of the total Hilbert space. This extension of the quantum Zeno effect, called Quantum Zeno Dynamics, has already been explored in various physical systems such as superconducting cavities, single Rydberg atoms, atomic ensembles and Bose-Einstein condensates. In this experiment, two superconducting qubits are strongly dispersively coupled to a high-Q cavity (χ >> κ), allowing the doubly excited state | 11 〉 to be selectively monitored. The Quantum Zeno Dynamics in the complementary subspace enables us to coherently prepare a Bell state. As opposed to dissipation engineering schemes, we emphasize that our protocol is deterministic, does not rely on direct coupling between qubits, and functions using only single-qubit controls and cavity readout. Such Quantum Zeno Dynamics can be generalized to larger Hilbert spaces, enabling deterministic generation of many-body entangled states, and thus realizes a decoherence-free subspace allowing alternative noise-protection schemes.
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high-fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by upper bounds on the reduction errors over the envisaged range of ROM application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases where conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques employ a different philosophy, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower-dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshots' variations. Once determined, application of the ROM model involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends the theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high-fidelity model or with other models of lower fidelity, can be assessed; this provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus is on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux.
The developed methods are applied to representative assembly-level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation.
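As a concrete illustration of the snapshot philosophy described above, the sketch below builds an active subspace from snapshots with a plain truncated SVD; the energy tolerance plays the role of the user-defined portion of the snapshots' variations (this is a generic construction, not the thesis's multi-level algorithm or its probabilistic error bound):

```python
import numpy as np

def active_subspace(snapshots, tol=0.99):
    """Orthonormal basis capturing a user-defined fraction `tol` of the
    variation in the snapshot matrix (columns are snapshots)."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, tol)) + 1
    return U[:, :r]

# Constraining a state to the subspace; the residual norm estimates the
# contribution of the discarded components:
#   x_r = U_r @ (U_r.T @ x); err = np.linalg.norm(x - x_r)
```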
Robust uncertainty evaluation for system identification on distributed wireless platforms
NASA Astrophysics Data System (ADS)
Crinière, Antoine; Döhler, Michael; Le Cam, Vincent; Mevel, Laurent
2016-04-01
Health monitoring of civil structures by system identification procedures from automatic control is now accepted as a valid approach. These methods provide frequencies and mode shapes of the structure over time. For continuous monitoring, the excitation of a structure is usually ambient, thus unknown and assumed to be noise. Hence, all estimates from the vibration measurements are realizations of random variables with inherent uncertainty due to (unknown) process and measurement noise and finite data length. The underlying algorithms usually run under Matlab, assuming a large memory pool and considerable computational power. Even under these premises, computational and memory usage are heavy and not realistic for embedding in on-site sensor platforms such as the PEGASE platform. Moreover, the current push for distributed wireless systems calls for algorithmic adaptation to lower data exchanges and maximize local processing. Finally, a recent breakthrough in system identification allows us to process both frequency information and its related uncertainty together from one and only one data sequence, at the expense of a computational and memory explosion that requires even more careful attention than before. The current approach focuses on a system identification procedure called multi-setup subspace identification that allows processing of both frequencies and their related variances from a set of interconnected wireless systems, with all computation running locally within the limited memory pool of each system before being merged on a host supervisor. Careful attention is given to data exchanges and I/O satisfying OGC standards, as well as to minimizing memory footprints and maximizing computational efficiency. These systems are built for autonomous operation in the field and could later be included in a wide distributed architecture such as the Cloud2SM project. The usefulness of these strategies is illustrated on data from a progressive damage action on a prestressed concrete bridge. References [1] E. Carden and P. Fanning. Vibration based condition monitoring: a review. Structural Health Monitoring, 3(4):355-377, 2004. [2] M. Döhler and L. Mevel. Efficient multi-order uncertainty computation for stochastic subspace identification. Mechanical Systems and Signal Processing, 38(2):346-366, 2013. [3] M. Döhler and L. Mevel. Modular subspace-based system identification from multi-setup measurements. IEEE Transactions on Automatic Control, 57(11):2951-2956, 2012. [4] M. Döhler, X.-B. Lam, and L. Mevel. Uncertainty quantification for modal parameters from stochastic subspace identification on multi-setup measurements. Mechanical Systems and Signal Processing, 36(2):562-581, 2013. [5] A. Crinière, J. Dumoulin, L. Mevel, G. Andrade-Barosso, and M. Simonin. The Cloud2SM project. European Geosciences Union General Assembly (EGU2015), Apr 2015, Vienna, Austria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel
Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion-based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state-of-the-art subspace search and visualization methods, we achieve higher transparency by showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios demonstrating how SeekAView can be applied in real-world data analysis scenarios.
NASA Astrophysics Data System (ADS)
Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René
2018-01-01
The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first one is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory can be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first-order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix, and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with a theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution can be a powerful tool to detect this situation; however, in some cases, the problem is only revealed by a sensitivity to measurement outlier test. None of the tested solutions provides GPS position bias observability. For the considered mission, modelling the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates, but, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
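A minimal sketch of the first test mentioned above: for a discrete-time linear model (A, C), stack the observability matrix and inspect its rank and condition number (the matrices here are placeholders for the linearized navigation model):

```python
import numpy as np

def observability_tests(A, C):
    """Rank and condition number of O = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks, M = [C], C
    for _ in range(n - 1):
        M = M @ A
        blocks.append(M)
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O), np.linalg.cond(O)
```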
Analytical minimization of synchronicity errors in stochastic identification
NASA Astrophysics Data System (ADS)
Bernal, D.
2018-01-01
An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time-domain signals so that the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first-mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral-density-estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. The selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, follows from the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
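The decision statistic described above can be sketched in a few lines; the proportionality constant of the threshold is an assumption here, not the value derived in the paper:

```python
import numpy as np

def weighted_phase_statistic(phi):
    """Amplitude-weighted mean absolute phase of the fundamental
    eigenvector `phi` (complex, one entry per sensor)."""
    w = np.abs(phi) / np.sum(np.abs(phi))
    return np.sum(w * np.abs(np.angle(phi)))

def needs_alignment(phi, zeta1, c=1.0):
    """Shift signals only when the statistic exceeds a threshold
    proportional to the first-mode damping ratio (c is illustrative)."""
    return weighted_phase_statistic(phi) > c * zeta1
```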
Data-driven monitoring for stochastic systems and its application on batch process
NASA Astrophysics Data System (ADS)
Yin, Shen; Ding, Steven X.; Haghani Abandan Sari, Adel; Hao, Haiyang
2013-07-01
Batch processes are characterised by a prescribed processing of raw materials into final products for a finite duration and play an important role in many industrial sectors due to their low-volume and high-value products. Process dynamics and stochastic disturbances are inherent characteristics of batch processes, which make monitoring of batch processes a challenging problem in practice. To solve this problem, a subspace-aided data-driven approach is presented in this article for batch process monitoring. The advantages of the proposed approach lie in its simple form and its ability to deal with the stochastic disturbances and process dynamics existing in the process. Kernel density estimation, which serves as a non-parametric way of estimating a probability density function, is utilised for threshold calculation. An industrial benchmark of fed-batch penicillin production is finally utilised to verify the effectiveness of the proposed approach.
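The threshold step can be sketched directly: fit a kernel density estimate to the monitoring statistic under normal operation and take a high quantile as the control limit (a generic sketch; the paper's test statistic and confidence level may differ):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_threshold(stat_samples, alpha=0.01):
    """(1 - alpha) quantile of a KDE fitted to fault-free statistics."""
    kde = gaussian_kde(stat_samples)
    grid = np.linspace(stat_samples.min(), 1.5 * stat_samples.max(), 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]                       # normalize the discrete CDF
    return grid[np.searchsorted(cdf, 1.0 - alpha)]
```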
NASA Astrophysics Data System (ADS)
Kokurin, M. Yu.
2010-11-01
A general scheme for improving approximate solutions to irregular nonlinear operator equations in Hilbert spaces is proposed and analyzed in the presence of errors. A modification of this scheme designed for equations with quadratic operators is also examined. The technique of universal linear approximations of irregular equations is combined with the projection onto finite-dimensional subspaces of a special form. It is shown that, for finite-dimensional quadratic problems, the proposed scheme provides information about the global geometric properties of the intersections of quadrics.
LLNL Location and Detection Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, S C; Harris, D B; Anderson, M L
2003-07-16
We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) develop network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions, with known locations, to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic. Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.
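The core of a subspace detector can be sketched as follows: build an orthonormal signal subspace from aligned template waveforms of past shots, then scan the data stream with an energy-capture statistic (a generic sketch, not the LLNL implementation; the subspace dimension is illustrative):

```python
import numpy as np

def subspace_detection_statistic(templates, data, dim=3):
    """Fraction of window energy captured by the template subspace.

    `templates`: (n_templates, m) aligned waveforms; `data`: 1-D stream.
    """
    U, _, _ = np.linalg.svd(templates.T, full_matrices=False)
    Us = U[:, :dim]                      # signal subspace basis
    m = templates.shape[1]
    stats = np.empty(len(data) - m + 1)
    for i in range(len(stats)):
        x = data[i:i + m]
        stats[i] = np.linalg.norm(Us.T @ x)**2 / (x @ x)
    return stats                         # in [0, 1]; threshold for detections
```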
Projection methods for line radiative transfer in spherical media.
NASA Astrophysics Data System (ADS)
Anusha, L. S.; Nagendra, K. N.
An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method, the Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) method, is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space ℝ^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate than contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR).
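In practice such Krylov solvers are available off the shelf; a minimal sketch using SciPy's BiCGSTAB with a simple Jacobi preconditioner standing in for the paper's operator-specific preconditioner (the tridiagonal system is a toy discretization, not the transfer operator):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, LinearOperator

n = 500
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
# Jacobi preconditioner: apply the inverse of the diagonal of A
M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())
x, info = bicgstab(A, b, M=M)
assert info == 0                         # 0 means the iteration converged
```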
NASA Astrophysics Data System (ADS)
Wang, F.; Huang, Y.-Y.; Zhang, Z.-Y.; Zu, C.; Hou, P.-Y.; Yuan, X.-X.; Wang, W.-B.; Zhang, W.-G.; He, L.; Chang, X.-Y.; Duan, L.-M.
2017-10-01
We experimentally demonstrate room-temperature storage of quantum entanglement using two nuclear spins weakly coupled to the electronic spin carried by a single nitrogen-vacancy center in diamond. We realize universal quantum gate control over the three-qubit spin system and produce entangled states in the decoherence-free subspace of the two nuclear spins. By injecting arbitrary collective noise, we demonstrate that the decoherence-free entangled state has a coherence time an order of magnitude longer than that of other entangled states in our experiment.
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to the state-of-the-art methods for the task.
NASA Astrophysics Data System (ADS)
Aster, R. C.; McMahon, N. D.; Myers, E. K.; Lough, A. C.
2015-12-01
Lough et al. (2014) first detected deep sub-icecap magmatic events beneath the Executive Committee Range volcanoes of Marie Byrd Land. Here, we extend the identification and analysis of these events in space and time utilizing subspace detection. Subspace detectors provide a highly effective methodology for studying events within seismic swarms that have similar moment tensor and Green's function characteristics, and are particularly effective for identifying low signal-to-noise events. Marie Byrd Land (MBL) is an extremely remote continental region that is nearly completely covered by the West Antarctic Ice Sheet (WAIS). The southern extent of Marie Byrd Land lies within the West Antarctic Rift System (WARS), which includes the volcanic Executive Committee Range (ECR). The ECR shows north-to-south progression of volcanism across the WARS during the Holocene. In 2013, analysis of POLENET/ANET seismic data identified two swarms of seismic activity in 2010 and 2011. These events have been interpreted as deep, long-period (DLP) earthquakes based on their depth (25-40 km) and low frequency content. The DLP events in MBL lie beneath an inferred sub-WAIS volcanic edifice imaged with ice-penetrating radar and have been interpreted as a present location of magmatic intrusion. The magmatic swarm activity in MBL provides a promising target for advanced subspace detection and for temporal, spatial, and event-size analysis of an extensive deep long-period earthquake swarm using a remote seismographic network. We utilized a catalog of 1,370 traditionally identified DLP events to construct subspace detectors for the six nearest stations and analyzed two years of data spanning 2010-2011. Association of these detections into events resulted in an approximately ten-fold increase in the number of locatable earthquakes. In addition to the two previously identified swarms during early 2010 and early 2011, we find sustained activity throughout the two years of study, including several previously unidentified periods of heightened activity. Correlation with large global earthquakes suggests that the DLP activity is not sensitive to remote teleseismic triggering.
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective for depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts Gauss elimination to extract a set of feature vectors, and we validate that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and requires less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gauss elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
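The isomorphism claim is easy to verify numerically: any orthonormal basis of the empirical feature space preserves dot products. A sketch with QR standing in for the paper's Gauss-elimination step (both produce a basis of the span):

```python
import numpy as np

def orthonormal_reduction(X):
    """Coordinates of the samples (rows of X) in an orthonormal basis
    of the empirical feature space; Gram matrices must coincide."""
    Q, _ = np.linalg.qr(X.T)             # columns of Q span the feature space
    Z = X @ Q                            # reduced representation
    assert np.allclose(X @ X.T, Z @ Z.T) # dot products are preserved
    return Z
```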
An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.
Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui
2016-09-22
The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain: with high probability the angles do not lie on the quantization grid, which results in power leakage and severe degradation of CE performance. A new model is first proposed to formulate the off-grid problem. It divides each continuously-distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to renew and upgrade the CE between the integral grid part and the fractional off-grid part under the turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS), which iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it enhances the angle detection resolution greatly.
Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.
2017-09-01
Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building indoors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial approximation of the global partitioning of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of the vertical distances. As the point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example in the case of a room containing several doors whose acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
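The door-detection step lends itself to a compact sketch: treat the vertical clearance profile along the trajectory as a 1-D signal and keep local minima below a plausible door height (the window size and clearance cut-off are assumptions, not the paper's calibrated values):

```python
import numpy as np
from scipy.signal import argrelextrema

def detect_doors(clearance, order=50, max_clearance=2.2):
    """Indices of trajectory samples where a door is likely crossed.

    `clearance`: vertical trajectory-to-ceiling distance in metres.
    """
    idx = argrelextrema(clearance, np.less, order=order)[0]
    return idx[clearance[idx] < max_clearance]
```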
Using task dynamics to quantify the affordances of throwing for long distance and accuracy.
Wilson, Andrew D; Weightman, Andrew; Bingham, Geoffrey P; Zhu, Qin
2016-07-01
In 2 experiments, the current study explored how affordances structure throwing for long distance and accuracy. In Experiment 1, 10 expert throwers (from baseball, softball, and cricket) threw regulation tennis balls to hit a vertically oriented 4 ft × 4 ft target placed at each of 9 locations (3 distances × 3 heights). We measured their release parameters (angle, speed, and height) and showed that they scaled their throws in response to changes in the target's location. We then simulated the projectile motion of the ball and identified a continuous subspace of release parameters that produce hits to each target location. Each subspace describes the affordance of our target to be hit by a tennis ball moving in a projectile motion to the relevant location. The simulated affordance spaces showed how the release parameter combinations required for hits changed with changes in the target location. The experts tracked these changes in their performance and were successful in hitting the targets. We next tested unusual (horizontal) targets that generated correspondingly different affordance subspaces to determine whether the experts would track the affordance to generate successful hits. Do the experts perceive the affordance? They do. In Experiment 2, 5 cricketers threw to hit either vertically or horizontally oriented targets and successfully hit both, exhibiting release parameters located within the requisite affordance subspaces. We advocate a task dynamical approach to the study of affordances as properties of objects and events in the context of tasks as the future of research in this area.
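The affordance subspace itself can be reproduced with a drag-free projectile model: scan release-parameter combinations and mark those whose trajectory passes inside the target (a simplified sketch; air resistance and the paper's exact release-height handling are omitted):

```python
import numpy as np

def hits_target(angle, speed, height, dist, target_z, half=0.61, g=9.81):
    """True where a release (angle [rad], speed [m/s], height [m])
    crosses the target plane within half a 1.22 m (4 ft) square."""
    t = dist / (speed * np.cos(angle))           # time to target plane
    z = height + speed * np.sin(angle) * t - 0.5 * g * t**2
    return np.abs(z - target_z) < half

# the True region of this grid is the affordance subspace for one target
A, S = np.meshgrid(np.radians(np.linspace(1, 60, 120)),
                   np.linspace(5, 40, 140))
subspace = hits_target(A, S, height=1.8, dist=10.0, target_z=1.5)
```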
Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...
2016-10-21
The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and the block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. Employing 55,296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8,586 atoms is 90 s, and the time for a graphene sheet containing 11,520 atoms is 75 s.
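One filtered-subspace iteration is short to write down; a dense-matrix sketch of the CheFSI kernel (filter bounds and degree are illustrative, and the real method exploits the block-sparse DG Hamiltonian rather than a dense array):

```python
import numpy as np

def chebyshev_filter(H, X, deg, lb, ub):
    """Damp components of X with eigenvalues of H inside [lb, ub]."""
    e, c = (ub - lb) / 2.0, (ub + lb) / 2.0
    Y, Xp = (H @ X - c * X) / e, X
    for _ in range(2, deg + 1):          # three-term Chebyshev recurrence
        Y, Xp = 2.0 * (H @ Y - c * Y) / e - Xp, Y
    return Y

def chefsi_step(H, X, deg=8, lb=0.5, ub=2.0):
    """Filter, orthonormalize, then Rayleigh-Ritz on the subspace."""
    Q, _ = np.linalg.qr(chebyshev_filter(H, X, deg, lb, ub))
    w, V = np.linalg.eigh(Q.T @ H @ Q)
    return Q @ V, w                      # refined basis and Ritz values
```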
Hidden discriminative features extraction for supervised high-order time series modeling.
Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee
2016-11-01
In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features; ii) it results in effective, interpretable features that lead to improved classification and visualization; and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) modeled as channel × frequency bin × time frame, and microarray data modeled as gene × sample × time) were used for the evaluation of TDFE. The experimental results corroborate the advantages of the proposed method, with average classification accuracies of 98.26% and 89.63% for the epilepsy dataset and the microarray dataset, respectively. These averages represent an improvement over those of matrix-based algorithms and recent tensor-based discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Hamid; Murugkar, Sangeeta; Ahmad, Abrar
Purpose: To improve classification by reducing the batch effect in samples from the ovarian carcinoma cell lines A2780s (parental wild type) and A2780cp (cisplatin cross-radio-resistant), before, right after, and 24 hours after irradiation to 10 Gy. Methods: Spectra were acquired with a home-built confocal Raman microscope in 3 distinct runs of six samples: unirradiated s & cp (control pair), then 0 h and 24 h after irradiation. The Raman spectra were noise-reduced, then background-subtracted with the SMIRF algorithm. Approximately 35 cell spectra were collected from each sample in 1024 channels from 700 cm⁻¹ to 1618 cm⁻¹. The spectra were analyzed by regularized multiclass LDA. For feature reduction the spectra were grouped into 3 overlapping group pairs: s-cp, 0Gy-10Gy0h and 0Gy-10Gy24h. The three features, i.e. the three differences of the mean spectra, were mapped to the analysis subspace by the inverse regularized covariance matrix. The batch effect noticeably confounded the dose and time effects. Results: To remove the batch effect, the 2+2=4D subspace spanned by the covariance matrix of the means of the 0 Gy control groups was subtracted from the spectra of each sample. Repeating the analysis on the spectra with the control-group variability removed, the batch effect was dramatically reduced in the dose and time directions, enabling sharp linear discrimination. The cell type classification also improved. Conclusions: We identified an efficient batch effect removal technique crucial to the applicability of Raman microscopy to radiosensitivity studies, both on cell cultures and in potential clinical diagnostic applications.
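The removal step amounts to projecting every spectrum onto the orthogonal complement of the control-group subspace; a minimal sketch (array names are illustrative):

```python
import numpy as np

def remove_control_subspace(spectra, control_means):
    """Subtract the component of each spectrum (rows) lying in the
    subspace spanned by the 0 Gy control-group mean spectra (rows)."""
    Q, _ = np.linalg.qr(control_means.T)   # orthonormal basis of the subspace
    return spectra - (spectra @ Q) @ Q.T
```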
Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun
2017-01-01
This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.
Correlational Neural Networks.
Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman
2016-02-01
Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio
2015-01-01
Radar high resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on angles between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and to evaluate performance, simulated range profiles are used as the recognition database while the test samples comprise actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities in shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising.
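The angle-between-subspaces classifier sketched above maps naturally onto SciPy's principal-angles routine (the subspace rank is illustrative; the paper's transformed domain is omitted):

```python
import numpy as np
from scipy.linalg import subspace_angles, svd

def aircraft_subspace(profiles, rank=10):
    """Span of the top singular vectors of stacked range profiles (rows)."""
    U = svd(profiles.T, full_matrices=False)[0]
    return U[:, :rank]

def classify(test_profile, class_bases):
    """Class whose subspace makes the smallest angle with the profile."""
    angles = [subspace_angles(B, test_profile[:, None]).min()
              for B in class_bases]
    return int(np.argmin(angles))
```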
Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie
2014-02-01
Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Several innovative strategies, including subspace projection, a bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees accuracy, efficiency, and robustness for FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias less than 1 mm; that the method is much faster than mainstream reconstruction methods; and that this approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of practical FMT application with the SASP method.
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-Euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-Euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-Euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-Euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-Euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
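The log-Euclidean mapping itself is a one-liner around the matrix logarithm; a sketch of how a covariance descriptor is sent to the vector space where ordinary subspace learning applies (the sqrt(2) weighting preserves the Frobenius norm):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_vector(cov):
    """Flatten log(cov), an SPD matrix, into a Euclidean vector."""
    L = logm(cov).real
    iu = np.triu_indices_from(L)
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))  # off-diagonal weight
    return w * L[iu]
```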
Multi-subject subspace alignment for non-stationary EEG-based emotion recognition.
Chai, Xin; Wang, Qisong; Zhao, Yongping; Liu, Xin; Liu, Dan; Bai, Ou
2018-01-01
Emotion recognition based on EEG signals is a critical component in human-machine collaborative environments and psychiatric health diagnoses. However, EEG patterns have been found to vary across subjects due to user fatigue, different electrode placements, varying impedances, etc. This problem renders the performance of EEG-based emotion recognition highly subject-specific, requiring time-consuming individual calibration sessions to adapt an emotion recognition system to new subjects. Recently, domain adaptation (DA) strategies have achieved a great deal of success in dealing with inter-subject adaptation. However, most of them can only adapt one subject to another subject, which limits their applicability in real-world scenarios. To alleviate this issue, a novel unsupervised DA strategy called Multi-Subject Subspace Alignment (MSSA) is proposed in this paper, which takes advantage of a subspace alignment solution and multi-subject information in a unified framework to build personalized models without user-specific labeled data. Experiments on a public EEG dataset known as SEED verify the effectiveness and superiority of MSSA over other state-of-the-art methods in multi-subject scenarios.
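For a single source-target pair, the subspace-alignment building block that MSSA generalizes can be sketched as follows (the subspace dimension is illustrative; the multi-subject combination of the actual method is not shown):

```python
import numpy as np

def subspace_alignment(Xs, Xt, d=20):
    """Align the source PCA basis with the target's, then project."""
    Ps = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:d].T
    Pt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:d].T
    Xs_aligned = Xs @ (Ps @ Ps.T @ Pt)   # source features, target-aligned
    Xt_proj = Xt @ Pt                    # target features, own subspace
    return Xs_aligned, Xt_proj
```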
Semi-Supervised Projective Non-Negative Matrix Factorization for Cancer Classification.
Zhang, Xiang; Guan, Naiyang; Jia, Zhilong; Qiu, Xiaogang; Luo, Zhigang
2015-01-01
Advances in DNA microarray technologies have made gene expression profiles a significant candidate in identifying different types of cancers. Traditional learning-based cancer identification methods utilize labeled samples to train a classifier, but they are inconvenient for practical application because labels are quite expensive in the clinical cancer research community. This paper proposes a semi-supervised projective non-negative matrix factorization method (Semi-PNMF) to learn an effective classifier from both labeled and unlabeled samples, thus boosting subsequent cancer classification performance. In particular, Semi-PNMF jointly learns a non-negative subspace from concatenated labeled and unlabeled samples and indicates classes by the positions of the maximum entries of their coefficients. Because Semi-PNMF incorporates statistical information from the large volume of unlabeled samples in the learned subspace, it can learn more representative subspaces and boost classification performance. We developed a multiplicative update rule (MUR) to optimize Semi-PNMF and proved its convergence. The experimental results of cancer classification for two multiclass cancer gene expression profile datasets show that Semi-PNMF outperforms the representative methods.
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexity if solved with existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix of an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (the active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) problem (Candès, Li, Ma, & Wright, 2009), a typical example of an NNROP, theoretical results verify the suboptimality of the solution produced by our algorithm. For general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
Subspace Clustering via Learning an Adaptive Low-Rank Graph.
Yin, Ming; Xie, Shengli; Wu, Zongze; Zhang, Yun; Gao, Junbin
2018-08-01
By using a sparse or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built from the representation coefficients are not the exact ones, as the traditional definition is deterministic. The two steps of representation and clustering are conducted independently, so an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on the edges, have to be artificially pre-specified, while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering method via learning an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several famous databases demonstrate that the proposed method performs better than the state-of-the-art approaches in clustering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3-10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
NASA Astrophysics Data System (ADS)
Ren, W. X.; Lin, Y. Q.; Fang, S. E.
2011-11-01
One of the key issues in vibration-based structural health monitoring is to extract damage-sensitive but environment-insensitive features from sampled dynamic response measurements and to carry out statistical analysis of these features for structural damage detection. A new damage feature is proposed in this paper using the system matrices of the forward innovation model based on covariance-driven stochastic subspace identification of a vibrating system. To overcome variations in the system matrices, a non-singular transformation matrix is introduced so that the system matrices are normalized to their standard forms. To reduce the effects of modeling errors, noise and environmental variations on the measured structural responses, a statistical pattern recognition paradigm is incorporated into the proposed method. The Mahalanobis and Euclidean distance decision functions of the damage feature vector are adopted by defining a statistics-based damage index. The proposed structural damage detection method is verified against one numerical signal and two numerical beams. It is demonstrated that the proposed statistics-based damage index is sensitive to damage and shows some robustness to noise and to false estimation of the system ranks. The method is capable of locating damage in the beam structures under different types of excitation. The robustness of the proposed damage detection method to variations in environmental temperature is further validated in a companion paper by a reinforced concrete beam tested in the laboratory and a full-scale arch bridge tested in the field.
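The Mahalanobis-distance damage index can be sketched directly, with the damage-feature vectors extracted from the identified system matrices treated as given:

```python
import numpy as np

def damage_index(feature, baseline_features):
    """Mahalanobis distance of a test feature vector from the healthy
    baseline population (rows of `baseline_features`)."""
    mu = baseline_features.mean(axis=0)
    S = np.cov(baseline_features, rowvar=False)
    d = feature - mu
    return float(d @ np.linalg.solve(S, d))
```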
MODAL TRACKING of A Structural Device: A Subspace Identification Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J. V.; Franco, S. N.; Ruggiero, E. L.
Mechanical devices operating in an environment contaminated by noise, uncertainties, and extraneous disturbances lead to low signal-to-noise ratios, creating an extremely challenging processing problem. To detect/classify a device subsystem from noisy data, it is necessary to identify unique signatures or particular features. An obvious feature would be the resonant (modal) frequencies emitted during its normal operation. In this report, we discuss a model-based approach to incorporate these physical features into a dynamic structure that can be used for such an identification. The approach we take, after pre-processing the raw vibration data and removing any extraneous disturbances, is to obtain a representation of the structurally unknown device along with its subsystems that captures these salient features. One approach is to recognize that unique modal frequencies (sinusoidal lines) appear in the estimated power spectrum that are solely characteristic of the device under investigation. Therefore, the objective of this effort is to construct a black-box model of the device that captures these physical features, which can be exploited to "diagnose" whether or not the particular device subsystem (track/detect/classify) is operating normally from noisy vibrational data. Here we discuss the application of a modern system identification approach based on stochastic subspace realization techniques capable of both (1) identifying the underlying black-box structure, thereby enabling the extraction of structural modes that can be used for analysis and modal tracking, and (2) providing indicators of condition and possible changes from normal operation.
NASA Astrophysics Data System (ADS)
Liu, Jun; Dong, Ping; Zhou, Jian; Cao, Zhuo-Liang
2017-05-01
A scheme for implementing non-adiabatic holonomic quantum computation in decoherence-free subspaces is proposed with the interactions between a microcavity and quantum dots. A universal set of quantum gates can be constructed on the encoded logical qubits with high fidelities. The current scheme can suppress both local and collective noises, which is very important for achieving universal quantum computation. Discussion of the gate fidelities with experimental parameters shows that our scheme can be implemented with current experimental technology. Therefore, our scenario offers a method for universal and robust solid-state quantum computation.
Application of Krylov exponential propagation to fluid dynamics equations
NASA Technical Reports Server (NTRS)
Saad, Youcef; Semeraro, David
1991-01-01
An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in the computation of an exponential matrix-vector product similar to the one above but of a much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially of an explicit nature but which have good stability properties.
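The kernel is compact enough to sketch in full: an Arnoldi projection of A onto a small Krylov subspace, followed by a dense exponential of the Hessenberg matrix (a textbook version of the stated idea, not the paper's integrator):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, m=30, t=1.0):
    """Approximate exp(t*A) @ v from an m-dimensional Krylov subspace."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)
```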
Subspace techniques to remove artifacts from EEG: a quantitative analysis.
Teixeira, A R; Tome, A M; Lang, E W; Martins da Silva, A
2008-01-01
In this work we discuss and apply projective subspace techniques to both multichannel and single-channel recordings. The single-channel approach is based on singular spectrum analysis (SSA), and the multichannel approach uses the extended infomax algorithm, which is implemented in the open-source toolbox EEGLAB. Both approaches are evaluated using artificial mixtures of a set of selected EEG signals. The latter were selected visually to contain, as the dominant activity, one of the characteristic bands of an electroencephalogram (EEG). The evaluation is performed both in the time and frequency domains by using correlation coefficients and the coherence function, respectively.
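The single-channel branch can be sketched end to end: embed the signal into a trajectory matrix, keep the leading singular subspace, and reconstruct by anti-diagonal averaging (window and rank are illustrative):

```python
import numpy as np

def ssa_project(x, window=50, rank=3):
    """Projective SSA: keep the top-`rank` subspace of the embedding."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out, counts = np.zeros(n), np.zeros(n)
    for i in range(k):                   # average over anti-diagonals
        out[i:i + window] += Xr[:, i]
        counts[i:i + window] += 1
    return out / counts
```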
NASA Astrophysics Data System (ADS)
Androsov, Alexey; Nerger, Lars; Schnur, Reiner; Schröter, Jens; Albertella, Alberta; Rummel, Reiner; Savcenko, Roman; Bosch, Wolfgang; Skachko, Sergey; Danilov, Sergey
2018-05-01
General ocean circulation models are not perfect. Forced with observed atmospheric fluxes, they gradually drift away from measured distributions of temperature and salinity. We suggest data assimilation of absolute dynamic ocean topography (DOT) observed from space geodetic missions as an option to reduce these differences. Sea surface information from DOT is transferred into the deep ocean by defining the analysed ocean state as a weighted average of an ensemble of fully consistent model solutions using an error-subspace ensemble Kalman filter technique. The success of the technique is demonstrated by assimilation into a global configuration of the ocean circulation model FESOM over 1 year. The dynamic ocean topography data are obtained from a combination of multi-satellite altimetry and geoid measurements. The assimilation result is assessed using independent temperature and salinity analyses derived from profiling buoys of the Argo float data set. The largest impact of the assimilation occurs at the first few analysis steps, where both the model ocean topography and the steric height (i.e. temperature and salinity) are improved. The continued data assimilation over 1 year further improves the model state gradually. Deep ocean fields quickly adjust in a sustained manner: a model forecast initialized from the model state estimated by the data assimilation after only 1 month shows that improvements induced by the data assimilation remain in the model state for a long time. Even after 11 months, the modelled ocean topography and temperature fields show smaller errors than the model forecast without any data assimilation.
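A generic stochastic ensemble Kalman analysis step conveys the weighted-average idea (this is a plain EnKF sketch with perturbed observations, not the specific error-subspace filter used in the paper; the observation operator H is assumed linear):

```python
import numpy as np

def enkf_analysis(E, y, H, R, seed=0):
    """Update ensemble states (columns of E) with observations y."""
    rng = np.random.default_rng(seed)
    N = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)         # state anomalies
    HE = H @ E
    HA = HE - HE.mean(axis=1, keepdims=True)      # observation anomalies
    P = HA @ HA.T / (N - 1) + R                   # innovation covariance
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(P)   # Kalman gain
    return E + K @ (Y - HE)
```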
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
NASA Astrophysics Data System (ADS)
Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.
2001-02-01
We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and -surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogs. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
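The density-to-free-energy step described above can be sketched as follows, assuming a two-dimensional subspace and a Gaussian kernel density estimate; the basin-spanning-tree clustering itself is not reproduced:

```python
import numpy as np
from scipy.stats import gaussian_kde

def free_energy_surface(coords, kT=1.0, grid=80):
    """Kernel density estimate of the configurational density in a
    low-dimensional subspace, converted to a free energy F = -kT ln p.
    coords is an (n_samples, 2) array of reduced coordinates."""
    kde = gaussian_kde(coords.T)
    xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), grid)
    ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), grid)
    X, Y = np.meshgrid(xs, ys)
    p = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(grid, grid)
    return X, Y, -kT * np.log(p + 1e-12)       # small offset avoids log(0)
```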
NASA Astrophysics Data System (ADS)
Ma, Zhisai; Liu, Li; Zhou, Sida; Naets, Frank; Heylen, Ward; Desmet, Wim
2017-03-01
The problem of linear time-varying (LTV) system modal analysis is considered based on time-dependent state space representations, as classical modal analysis of linear time-invariant systems and current LTV system modal analysis under the "frozen-time" assumption are not able to determine the dynamic stability of LTV systems. Time-dependent state space representations of LTV systems are first introduced, and the corresponding modal analysis theories are subsequently presented via a stability-preserving state transformation. The time-varying modes of LTV systems are extended in terms of uniqueness, and are further interpreted to determine the system's stability. An extended modal identification is proposed to estimate the time-varying modes, consisting of the estimation of the state transition matrix via a subspace-based method and the extraction of the time-varying modes by QR decomposition. The proposed approach is validated numerically on three cases and experimentally on a coupled moving-mass simply supported beam. It is capable of accurately estimating the time-varying modes, and provides a new way to determine the dynamic stability of LTV systems by using the estimated time-varying modes.
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
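A generic fixed-point version of this periodic-Pulay idea might look like the following sketch; the mixing parameter, period, and history depth are illustrative defaults, not the paper's settings:

```python
import numpy as np

def periodic_pulay(g, x0, alpha=0.3, period=4, history=6, tol=1e-8, maxit=200):
    """Fixed-point acceleration sketch for x = g(x): plain linear mixing on
    most iterations, Pulay (DIIS) extrapolation every `period` iterations."""
    x = x0.copy()
    X, F = [], []                         # histories of iterates and residuals
    for it in range(1, maxit + 1):
        f = g(x) - x                      # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            return x
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-history:], F[-history:]
        if it % period == 0 and len(F) > 1:
            # Pulay step: coefficients c minimizing |sum_i c_i f_i| subject to
            # sum_i c_i = 1, via the standard bordered linear system
            m = len(F)
            B = np.ones((m + 1, m + 1)); B[-1, -1] = 0.0
            B[:m, :m] = np.array(F) @ np.array(F).T
            rhs = np.zeros(m + 1); rhs[-1] = 1.0
            c = np.linalg.solve(B, rhs)[:m]
            x = sum(ci * (xi + alpha * fi) for ci, xi, fi in zip(c, X, F))
        else:
            x = x + alpha * f             # plain linear mixing
    return x
```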
Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan
2017-09-01
It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. This paper therefore describes and analyzes a novel multiple-feature recognition framework which incorporates a tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem solvable by alternately applying a hard thresholding operation and singular value decomposition. Secondly, noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitive indexes, latent fault feature subspaces can be adaptively recognized and multiple faults diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates physical priors with a data-driven strategy, so multiple fault features with similar oscillation morphology can be adaptively decoupled. Secondly, the tight frame dictionary learned directly from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a complete signal-space description is guaranteed, and thus the weak-feature leakage problem of typical learning methods is avoided.
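The alternating hard-thresholding/SVD structure of the learning stage can be illustrated with the following sketch, assuming the number of atoms does not exceed the signal dimension; the paper's exact tight-frame model and constraints are more elaborate:

```python
import numpy as np

def tight_frame_learn(X, n_atoms, sparsity, n_iter=30):
    """Alternating sketch of tight-frame learning: hard-threshold the
    analysis coefficients, then update the frame by an SVD (Procrustes)
    step that keeps the columns of W orthonormal (W.T @ W = I).
    Requires n_atoms <= signal dimension X.shape[0] in this sketch."""
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.standard_normal((X.shape[0], n_atoms)))[0]
    for _ in range(n_iter):
        A = W.T @ X                                   # analysis coefficients
        # hard thresholding: keep the `sparsity` largest entries per column
        thr = -np.sort(-np.abs(A), axis=0)[sparsity - 1]
        A[np.abs(A) < thr] = 0.0
        U, _, Vt = np.linalg.svd(X @ A.T, full_matrices=False)
        W = U @ Vt                                    # nearest orthonormal frame
    return W
```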
The Role of High-Dimensional Diffusive Search, Stabilization, and Frustration in Protein Folding
Rimratchada, Supreecha; McLeish, Tom C.B.; Radford, Sheena E.; Paci, Emanuele
2014-01-01
Proteins are polymeric molecules with many degrees of conformational freedom whose internal energetic interactions are typically screened to small distances. Therefore, in the high-dimensional conformation space of a protein, the energy landscape is locally relatively flat, in contrast to low-dimensional representations, where, because of the induced entropic contribution to the full free energy, it appears funnel-like. Proteins explore the conformation space by searching these flat subspaces to find a narrow energetic alley that we call a hypergutter and then explore the next, lower-dimensional, subspace. Such a framework provides an effective representation of the energy landscape and folding kinetics that does justice to the essential characteristic of high-dimensionality of the search-space. It also illuminates the important role of nonnative interactions in defining folding pathways. This principle is here illustrated using a coarse-grained model of a family of three-helix bundle proteins whose conformations, once secondary structure has formed, can be defined by six rotational degrees of freedom. Two folding mechanisms are possible, one of which involves an intermediate. The stabilization of intermediate subspaces (or states in low-dimensional projection) in protein folding can either speed up or slow down the folding rate depending on the amount of native and nonnative contacts made in those subspaces. The folding rate increases due to reduced-dimension pathways arising from the mere presence of intermediate states, but decreases if the contacts in the intermediate are very stable and introduce sizeable topological or energetic frustration that needs to be overcome. Remarkably, the hypergutter framework, although depending on just a few physically meaningful parameters, can reproduce all the types of experimentally observed curvature in chevron plots for realizations of this fold. PMID:24739172
NASA Astrophysics Data System (ADS)
Morton, E.; Bilek, S. L.; Rowe, C. A.
2017-12-01
Understanding the spatial extent and behavior of the interplate contact in the Cascadia Subduction Zone (CSZ) may prove pivotal to preparation for future great earthquakes, such as the M9 event of 1700. Current and historic seismic catalogs are limited by their short duration, given the recurrence rate of great earthquakes, and by their rather high magnitude of completeness for the interplate seismic zone, due to its offshore distance from land-based networks. This issue is addressed via the 2011-2015 Cascadia Initiative (CI) amphibious seismic array deployment, which combined coastal land seismometers with more than 60 ocean-bottom seismometers (OBS) situated directly above the presumed plate interface. Using an automated subspace detection method, we search the CI dataset for small, previously undetected interplate earthquakes in order to identify seismic patches on the megathrust. Our subspace comprises eigenvectors derived from CI OBS and on-land waveforms extracted for existing catalog events that appear to have occurred on the plate interface. Previous work focused on analysis of two repeating event clusters off the coast of Oregon spanning all 4 years of deployment. Here we expand earlier results with detection and location analysis for the entire CSZ margin during the first year of CI deployment, with more than 200 new events detected for the central portion of the margin. Template events used for subspace scanning primarily occurred beneath the land surface along the coast, at the downdip edge of modeled high-slip patches for the 1700 event, with most concentrated at the northwestern edge of the Olympic Peninsula.
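A subspace detector of the general kind used here can be sketched as follows: templates define an orthonormal basis, and the detection statistic is the fraction of window energy captured by the projection. The threshold and subspace dimension are illustrative, not the study's values:

```python
import numpy as np

def build_subspace(templates, dim):
    """Stack aligned, normalized template waveforms and keep the top `dim`
    left singular vectors as an orthonormal signal subspace U."""
    T = np.array([t / np.linalg.norm(t) for t in templates])  # (n_templ, n_samp)
    U, _, _ = np.linalg.svd(T.T, full_matrices=False)
    return U[:, :dim]

def subspace_detect(data, U, threshold=0.7):
    """Slide the subspace over continuous data; the statistic is the fraction
    of window energy captured by the projection onto span(U)."""
    win = U.shape[0]
    detections = []
    for i in range(len(data) - win):
        x = data[i:i + win]
        energy = x @ x
        if energy == 0:
            continue
        proj = U.T @ x
        stat = (proj @ proj) / energy       # lies in [0, 1]
        if stat > threshold:
            detections.append((i, stat))
    return detections
```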
NASA Astrophysics Data System (ADS)
Živković, Tomislav P.
1984-09-01
The configuration interaction (CI) space X_n built upon n electrons moving over 2n orthonormalized orbitals χ_i is considered. It is shown that the space X_n splits into two complementary subspaces X_n^+ and X_n^- with special properties: each state Ψ^+ ∈ X_n^+ and Ψ^- ∈ X_n^- is "alternantlike" in the sense that it has a uniform charge density distribution over all orbitals χ_i and vanishing bond orders between all orbitals of the same parity. In addition, matrix elements Γ(ij;kl) of the two-particle density matrix vanish whenever four distinct orbitals are involved and there is an odd number of orbitals of the same parity. Further, Γ(ij;lj) = γ(il)/4 (j ≠ i, l) whenever i and l are of different parity; this last relation connects the two-particle density matrix Γ with the one-particle density matrix γ. "Elementary" alternant and antialternant operators are identified. These operators connect either only states within the same subspace or only states in different subspaces, and each one- and two-particle symmetric operator can be represented by a linear combination of them. Alternant Hamiltonians, which can be represented as linear combinations of elementary alternant operators, have alternantlike eigenstates. It is also shown that each symmetric Hamiltonian possessing alternantlike eigenstates can be represented as such a linear combination; in particular, the PPP Hamiltonian describing an alternant hydrocarbon system is such a case. The complementary subspaces X_n^+ and X_n^- can be constructed explicitly using the so-called regular resonance structures (RRSs), which are normalized determinants containing mutually disjunct bond orbitals. Expressions for the matrix elements of one- and two-particle operators between different RRSs are also derived.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dul, F.A.; Arczewski, K.
1994-03-01
Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as large weakly nonsymmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A - σB)x = By are solved by iterative conjugate gradient methods, which can be used without danger of breaking down thanks to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
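The outer two-phase structure (subspace iteration plus Rayleigh-Ritz) can be sketched as follows for dense, positive definite A and B; the paper's contribution, replacing the direct solves below with robust inner conjugate-gradient iterations, is not reproduced here:

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(A, B, k, n_iter=50):
    """Basic subspace iteration with Rayleigh-Ritz for A x = lambda B x,
    assuming A and B are symmetric positive definite so the solves are
    well posed. Returns approximations to the k smallest eigenpairs."""
    rng = np.random.default_rng(0)
    n = A.shape[0]
    V = rng.standard_normal((n, k))
    lam = None
    for _ in range(n_iter):
        # inverse-iteration step enriches V toward the smallest eigenpairs
        V = np.linalg.solve(A, B @ V)
        V, _ = np.linalg.qr(V)
        # Rayleigh-Ritz on the k-dimensional subspace
        lam, C = eigh(V.T @ A @ V, V.T @ B @ V)
        V = V @ C
    return lam, V
```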
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we propose a novel low-rank-approximation-based integrative probabilistic model that quickly finds the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster with better clustering performance than existing methods. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis shows that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analyses suggest that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available at http://bioinfo.au.tsinghua.edu.cn/software/lracluster/.
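Since LRAcluster itself is distributed as an R package, the following Python sketch only illustrates the reduce-then-cluster strategy (shared low-rank subspace by truncated SVD, then k-means), not the authors' probabilistic low-rank model:

```python
import numpy as np
from sklearn.cluster import KMeans

def lowrank_cluster(omics_blocks, rank, n_clusters):
    """Stack (samples x features) blocks from several omics data types,
    extract a shared rank-`rank` subspace by truncated SVD, and cluster
    the samples in that subspace."""
    X = np.hstack([b - b.mean(axis=0) for b in omics_blocks])  # center blocks
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Z = U[:, :rank] * s[:rank]            # sample coordinates in the subspace
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
    return Z, labels
```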
Lam, Fan; Li, Yudu; Clifford, Bryan; Liang, Zhi-Pei
2018-05-01
To develop a practical method for mapping macromolecule distribution in the brain using ultrashort-TE MRSI data. An FID-based chemical shift imaging acquisition without metabolite-nulling pulses was used to acquire ultrashort-TE MRSI data that capture the macromolecule signals with high signal-to-noise-ratio (SNR) efficiency. To remove the metabolite signals from the ultrashort-TE data, single voxel spectroscopy data were obtained to determine a set of high-quality metabolite reference spectra. These spectra were then incorporated into a generalized series (GS) model to represent general metabolite spatiospectral distributions. A time-segmented algorithm was developed to back-extrapolate the GS model-based metabolite distribution from truncated FIDs and remove it from the MRSI data. Numerical simulations and in vivo experiments have been performed to evaluate the proposed method. Simulation results demonstrate accurate metabolite signal extrapolation by the proposed method given a high-quality reference. For in vivo experiments, the proposed method is able to produce spatiospectral distributions of macromolecules in the brain with high SNR from data acquired in about 10 minutes. We further demonstrate that the high-dimensional macromolecule spatiospectral distribution resides in a low-dimensional subspace. This finding provides a new opportunity to use subspace models for quantification and accelerated macromolecule mapping. Robustness of the proposed method is also demonstrated using multiple data sets from the same and different subjects. The proposed method is able to obtain macromolecule distributions in the brain from ultrashort-TE acquisitions. It can also be used for acquiring training data to determine a low-dimensional subspace to represent the macromolecule signals for subspace-based MRSI. Magn Reson Med 79:2460-2469, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Churchill, Nathan W; Strother, Stephen C
2013-11-15
The presence of physiological noise in functional MRI can greatly limit the sensitivity and accuracy of BOLD signal measurements, and produce significant false positives. There are two main types of physiological confounds: (1) high-variance signal in non-neuronal tissues of the brain, including vascular tracts, sinuses and ventricles, and (2) physiological noise components which extend into gray matter tissue. These physiological effects may also be partially coupled with stimuli (and thus the BOLD response). To address these issues, we have developed PHYCAA+, a significantly improved version of the PHYCAA algorithm (Churchill et al., 2011) that (1) down-weights the variance of voxels in probable non-neuronal tissue, and (2) identifies the multivariate physiological noise subspace in gray matter that is linked to non-neuronal tissue. This model estimates physiological noise directly from EPI data, without requiring external measures of heartbeat and respiration or manual selection of physiological components. The PHYCAA+ model significantly improves the prediction accuracy and reproducibility of single-subject analyses, compared to PHYCAA and a number of commonly used physiological correction algorithms. Individual-subject denoising with PHYCAA+ is independently validated by showing that it consistently increased between-subject activation overlap and minimized false-positive signal in non-gray-matter loci. The results are demonstrated for both block and fast single-event task designs, applied to standard univariate and adaptive multivariate analysis models. Copyright © 2013 Elsevier Inc. All rights reserved.
Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong
2018-06-01
This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
Bell-correlated activable bound entanglement in multiqubit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandyopadhyay, Somshubhro; Chattopadhyay, Indrani; Roychowdhury, Vwani
2005-06-15
We show that the Hilbert space of an even number (≥4) of qubits can always be decomposed as a direct sum of four orthogonal subspaces such that the normalized projectors onto the subspaces are activable bound entangled (ABE) states. These states also show a surprising recursive relation, in the sense that the states belonging to 2N+2 qubits are Bell correlated to the states of 2N qubits; hence, we refer to these states as Bell-correlated ABE (BCABE) states. We also study the properties of noisy BCABE states and show that they are very similar to those of two-qubit Bell-diagonal states.
Towards automatic music transcription: note extraction based on independent subspace analysis
NASA Astrophysics Data System (ADS)
Wellhausen, Jens; Hoynck, Michael
2005-01-01
Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.
NASA Astrophysics Data System (ADS)
Ripamonti, Francesco; Resta, Ferruccio; Borroni, Massimo; Cazzulani, Gabriele
2014-04-01
A new method for the real-time identification of mechanical-system modal parameters is used to design different adaptive control logics aimed at reducing vibrations in a smart structure: a carbon fiber plate instrumented with three piezoelectric actuators, three accelerometers and three strain gauges. The real-time identification is based on a recursive subspace tracking algorithm whose outputs are processed by an ARMA model. A statistical approach is finally applied to choose the correct values of the modal parameters. These are given as input to model-based control logics such as gain scheduling and an adaptive LQR control.
NASA Astrophysics Data System (ADS)
Cyganek, Boguslaw; Smolka, Bogdan
2015-02-01
In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build an Extended Structural Tensor representation from the intensity signal that conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require large memory and computational resources. However, recent achievements in multi-core microprocessors and graphics cards allow real-time operation of multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl
For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equations from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.
Hierarchical Discriminant Analysis.
Lu, Di; Ding, Chuntao; Xu, Jinliang; Wang, Shangguang
2018-01-18
The Internet of Things (IoT) generates large amounts of high-dimensional sensor data. The processing of high-dimensional data (e.g., data visualization and data classification) is very difficult, so it requires excellent subspace learning algorithms that learn a latent subspace preserving the intrinsic structure of the high-dimensional data while abandoning the least useful information for subsequent processing. In this context, many subspace learning algorithms have been presented. However, in the process of transforming high-dimensional data into a low-dimensional space, the huge difference between the sum of inter-class distances and the sum of intra-class distances for distinct data may cause a bias problem: the impact of the intra-class distance is overwhelmed. To address this problem, we propose a novel algorithm called Hierarchical Discriminant Analysis (HDA). It minimizes the sum of intra-class distances first, and then maximizes the sum of inter-class distances. This proposed method balances the bias from the inter-class and that from the intra-class to achieve better performance. Extensive experiments are conducted on several benchmark face datasets. The results reveal that HDA obtains better performance than other dimensionality reduction algorithms.
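The "minimize intra-class first, then maximize inter-class" ordering can be illustrated with a two-stage scatter-matrix sketch; the actual HDA objective differs in detail, and the regularization term below is an added assumption for numerical stability:

```python
import numpy as np
from scipy.linalg import eigh

def two_stage_discriminant(X, y, out_dim):
    """Two-stage discriminant projection: first whiten the intra-class
    scatter S_w, then maximize the inter-class scatter S_b in the
    whitened coordinates. Returns the projected data."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    # stage 1: shrink intra-class distances by whitening S_w
    w_vals, w_vecs = eigh(Sw + 1e-6 * np.eye(d))   # regularized for stability
    W1 = w_vecs / np.sqrt(w_vals)
    # stage 2: maximize inter-class scatter in the whitened space
    b_vals, b_vecs = eigh(W1.T @ Sb @ W1)
    W = W1 @ b_vecs[:, ::-1][:, :out_dim]          # top inter-class directions
    return X @ W
```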
Wavelet subspace decomposition of thermal infrared images for defect detection in artworks
NASA Astrophysics Data System (ADS)
Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.
2016-07-01
The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. Classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks: the artwork is heated and its thermal response recorded with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed, which favors identification of otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the selection of the subspace level. A novel criterion based on regional mutual information is proposed for the latter. The approach is successfully tested on a laboratory sample as well as on real artworks. A new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm, which is successfully deployed for both laboratory and real artworks.
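Isolating one wavelet detail subspace of a thermal image, as the defect-detection stage requires, might look like the sketch below (using PyWavelets; the db4 basis and level count are assumptions, and the paper's regional-mutual-information level selection is not reproduced):

```python
import numpy as np
import pywt

def detail_subspace(image, level, basis="db4", max_level=4):
    """Reconstruct the detail subspace of a 2-D image at one decomposition
    level by zeroing the approximation and all other detail coefficients
    before the inverse transform."""
    coeffs = pywt.wavedec2(image, basis, level=max_level)
    kept = [np.zeros_like(coeffs[0])]              # drop the approximation
    for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        # coeffs[1] holds the coarsest detail, i.e. level max_level
        if max_level - i + 1 == level:
            kept.append((cH, cV, cD))
        else:
            kept.append((np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD)))
    return pywt.waverec2(kept, basis)
```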
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence": the optimal regularized solution is obtained after a few iterations, but if the iteration is not stopped, the method converges to a solution that is generally totally corrupted by errors on the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least-squares QR, and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
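Semi-convergence is easy to demonstrate with a bare CGLS loop that records every iterate, so a stopping rule can pick the best one; this is a textbook sketch, not the hybrid method discussed in the paper:

```python
import numpy as np

def cgls(A, b, n_iter):
    """Conjugate gradients on the normal equations (CGLS). For noisy b the
    iterates first approach the true solution, then drift toward a noise-
    corrupted one ("semi-convergence"), so n_iter acts as the regularization
    parameter. All iterates are returned so a stopping rule (e.g. the
    discrepancy principle) can select one."""
    x = np.zeros(A.shape[1])
    r = b.copy()                  # residual b - A x
    s = A.T @ r
    p = s.copy()
    norms2 = s @ s
    history = []
    for _ in range(n_iter):
        q = A @ p
        alpha = norms2 / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        new_norms2 = s @ s
        p = s + (new_norms2 / norms2) * p
        norms2 = new_norms2
        history.append(x.copy())
    return history
```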
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
NASA Astrophysics Data System (ADS)
Sciotto, M.; Rowe, C. A.; Cannata, A.; Arrowsmith, S.; Privitera, E.; Gresta, S.
2011-12-01
The current eruption of Mount Etna, which began in January 2011, has produced numerous energetic episodes of lava fountaining, which have been recorded by the INGV seismic and acoustic sensors located on and around the volcano. The source of these events was the pit crater on the east flank of the Southeast Crater of Etna. Simultaneously, small levels of activity were noted in the Bocca Nuova as well, prior to its lava fountaining activity. We will present an analysis of seismic and acoustic signals related to the 2011 activity wherein we apply the method of subspace detection to determine whether the source exhibits a temporal evolution within or between fountaining events, or otherwise produces repeating, classifiable events through the continuous explosive degassing. We will examine not only the raw waveforms, but also spectral variations in time as well as time-varying statistical functions such as signal skewness and kurtosis. These results will be compared to straightforward cross-correlation analysis. In addition to its classification performance, the subspace method promises to outperform standard STA/LTA methods for real-time event detection in cases where similar events can be expected.
View subspaces for indexing and retrieval of 3D models
NASA Astrophysics Data System (ADS)
Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel
2010-02-01
View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. Previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Discrete Fourier Transform coefficients, and they describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with competing view-based 3D shape retrieval algorithms.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become an important research area of computer vision. To improve the accuracy of action recognition based on improved dense trajectories, an advanced encoding method is introduced that combines Fisher vectors with random projection. The method reduces the dimension of the trajectory descriptors by projecting them into a low-dimensional subspace, based on a Gaussian mixture model defined and analyzed under random projection, and a GMM-FV hybrid model is introduced to encode the trajectory feature vectors. The computational complexity is reduced by the random projection, which shrinks the Fisher coding vector. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with existing algorithms, the results show that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
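The random projection step can be sketched in a few lines; the Gaussian matrix and the 1/sqrt(out_dim) scaling are the usual textbook choices, not necessarily the paper's exact construction:

```python
import numpy as np

def random_project(X, out_dim, seed=0):
    """Johnson-Lindenstrauss-style random projection: a dense Gaussian
    matrix maps descriptors (rows of X) to out_dim dimensions while
    approximately preserving pairwise distances, cutting the cost of the
    subsequent Fisher-vector encoding."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], out_dim)) / np.sqrt(out_dim)
    return X @ R
```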
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models, and the parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes including locked-mode precursors, automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
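A covariance-driven SSI sketch for a single channel conveys the flavor of the approach; the deployed algorithms operate on probe arrays with localized time-series models, which this does not reproduce, and all parameter names are illustrative:

```python
import numpy as np

def ssi_cov(y, dt, order, n_lags=40):
    """Covariance-driven stochastic subspace identification: build a block
    Hankel matrix of output covariances, factor it by SVD into an
    observability matrix, recover the state matrix A by shift invariance,
    and read frequencies and damping from its eigenvalues."""
    n = len(y)
    cov = np.array([y[: n - k] @ y[k:] / (n - k) for k in range(2 * n_lags)])
    H = np.array([cov[i: i + n_lags] for i in range(n_lags)])   # block Hankel
    U, s, _ = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])                # observability matrix
    A, *_ = np.linalg.lstsq(O[:-1], O[1:], rcond=None)   # shift invariance
    lam = np.log(np.linalg.eigvals(A)) / dt              # continuous-time poles
    freqs = np.abs(lam) / (2 * np.pi)                    # modal frequencies [Hz]
    damping = -lam.real / np.abs(lam)                    # damping ratios
    return freqs, damping
```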
PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging
NASA Astrophysics Data System (ADS)
Naghibzadeh, Shahrzad; van der Veen, Alle-Jan
2018-06-01
Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
NASA Astrophysics Data System (ADS)
Kenefic, L.; Morton, E.; Bilek, S.
2017-12-01
It is well known that subduction zones create the largest earthquakes in the world, like the magnitude 9.5 Chile earthquake in 1960 or the more recent magnitude 9.1 Japan earthquake in 2011, both of which are among the top five largest earthquakes ever recorded. However, off the coast of the Pacific Northwest region of the U.S., the Cascadia subduction zone (CSZ) remains relatively quiet, and modern seismic instruments have not recorded earthquakes of this size in the CSZ. The last great earthquake, a magnitude 8.7-9.2, occurred in 1700 and is constrained by written reports of the resultant tsunami in Japan and by dating of a drowned forest in the U.S. Previous studies have suggested the margin is most likely segmented along-strike; however, variations in frictional conditions in the CSZ fault zone are not well known. Geodetic modeling indicates that the locked seismogenic zone is likely completely offshore, which may be too far from land seismometers to adequately detect related seismicity. Ocean bottom seismometers, as part of the Cascadia Initiative Amphibious Network, were installed directly above the inferred seismogenic zone, which we use to better detect small interplate seismicity. Using the subspace detection method, this study seeks to find new seismogenic zone earthquakes. This method uses multiple previously known event templates concurrently to scan through continuous seismic data. Template events that make up the subspace are chosen from events in existing catalogs that likely occurred along the plate interface. The corresponding waveforms are windowed on the nearby Cascadia Initiative ocean bottom seismometers and coastal land seismometers for scanning. Detections found by the scan are similar to the template waveforms to within a predefined threshold, and are visually examined to determine whether an event is present. The presence of repeating event clusters can indicate persistent seismic patches, likely corresponding to areas of stronger coupling. This work will ultimately improve the understanding of CSZ fault zone heterogeneity. Preliminary results indicate 96 possible new events between August 2, 2013 and July 1, 2014 for four target clusters off the coast of northern Oregon.
NASA Astrophysics Data System (ADS)
Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter
2018-03-01
An ensemble-based approach to specifying the observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from the statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images, and is used in an Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has higher RMSE than a run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but smaller or comparable RMSE than a run assuming observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms to total phytoplankton is reduced by a factor of two in comparison to the non-assimilative run.
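The Optimal Interpolation analysis step referred to above has the standard closed form sketched below; in the paper R is derived from the satellite-image ensemble and the forecast covariance lives in a multivariate EOF subspace, both of which are simply passed in here as plain matrices:

```python
import numpy as np

def oi_update(x_f, P_f, y, H, R):
    """Optimal Interpolation analysis: correct the forecast x_f with
    observations y using the gain K = P_f H^T (H P_f H^T + R)^(-1)."""
    innovation = y - H @ x_f
    S = H @ P_f @ H.T + R                       # innovation covariance
    return x_f + P_f @ H.T @ np.linalg.solve(S, innovation)
```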
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared-error. Our proposed method achieved consistent higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
Data-Driven Modeling of Complex Systems by means of a Dynamical ANN
NASA Astrophysics Data System (ADS)
Seleznev, A.; Mukhin, D.; Gavrilov, A.; Loskutov, E.; Feigin, A.
2017-12-01
Data-driven methods for the modeling and prognosis of complex dynamical systems are becoming more and more popular in various fields due to the growth of high-resolution data. We distinguish two basic steps in such an approach: (i) determining the phase subspace of the system, or embedding, from available time series, and (ii) constructing an evolution operator acting in this reduced subspace. In this work we suggest a novel approach combining these two steps by means of an artificial neural network (ANN) with a special topology. The proposed ANN-based model, on the one hand, projects the data onto a low-dimensional manifold and, on the other hand, models a dynamical system on this manifold. It is a recurrent multilayer ANN which has internal dynamics and is capable of generating time series. A very important point of the proposed methodology is the optimization of the model to avoid overfitting: we use a Bayesian criterion to optimize the ANN structure and estimate both the degree of evolution operator nonlinearity and the complexity of the nonlinear manifold onto which the data are projected. The proposed modeling technique will be applied to the analysis of high-dimensional dynamical systems: the Lorenz'96 model of atmospheric turbulence, producing high-dimensional space-time chaos, and a quasi-geostrophic three-layer model of the Earth's atmosphere with natural orography, describing the dynamics of synoptic vortexes as well as mesoscale blocking systems. The possibility of applying the proposed methodology to real measured data is also discussed. The study was supported by the Russian Science Foundation (grant #16-12-10198).
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals that account for inter-trial variability, suitable for corresponding binary classification problems. An important constraint is that the model be simple enough to handle small size and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channels subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.
1989-01-01
The generalized eigenvalue problem, Kx = Lambda Mx, is of significant practical importance, especially in structural engineering where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
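For comparison, a generalized symmetric-definite eigenproblem of this type can be solved today with SciPy's Lanczos-based eigsh in shift-invert mode; the shift sigma is loosely analogous to LANZ's dynamic shifting, though the shifting strategy itself is not reproduced, and the test matrices below are synthetic:

```python
import numpy as np
from scipy.sparse import random as sprandom, identity
from scipy.sparse.linalg import eigsh

# Generalized symmetric-definite eigenproblem K x = lambda M x.
n = 2000
K = sprandom(n, n, density=1e-3, random_state=0)
K = K @ K.T + 10 * identity(n)             # symmetric positive definite "stiffness"
M = identity(n, format="csc")              # "mass" matrix (identity for simplicity)

# Shift-invert Lanczos: with sigma set, which="LM" returns the eigenvalues
# nearest the shift, here the lowest vibration modes.
vals, vecs = eigsh(K, k=6, M=M, sigma=0.0, which="LM")
print(np.sort(vals))
```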
Re-Sonification of Objects, Events, and Environments
NASA Astrophysics Data System (ADS)
Fink, Alex M.
Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
Matrix Product Operator Simulations of Quantum Algorithms
2015-02-01
... parallel to the Grover subspace parametrically: (Zi|φ〉)‖ = s cos γ |α〉 + s sin γ |β〉, where s = √(a(k)²/(N−1)² + b(k)²) and γ = tan⁻¹(b(k)(N−1)/a(k)) (6.32). Each ... of this vector parallel to the Grover subspace in parametric form: (XiZi|φ〉)‖ = s cos(γ)|α〉 + s sin(γ)|β〉, where s = 1/√(N−1) and γ = tan⁻¹(cot((k + 1/2)θ ...
Dominant modal decomposition method
NASA Astrophysics Data System (ADS)
Dombovari, Zoltan
2017-03-01
The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies taken from the machine tool industry.
Investigating phonon-mediated interactions with polar molecules
NASA Astrophysics Data System (ADS)
Sous, John; Madison, Kirk; Berciu, Mona; Krems, Roman
2017-04-01
We show that an ensemble of polar molecules in an optical lattice realizes the Peierls polaron model for hard-core particles/pseudospins. We analyze the quasiparticle spectrum in the one-particle subspace, the two-particle subspace and at finite concentrations. We derive an effective model that describes the low-energy behavior of the system. We show that the Hamiltonian includes phonon-mediated repulsions and phonon-mediated "pair-hopping" terms which move the particle pair as a whole. We show that microwave excitations of the system exhibit signatures of these interactions. These results pave the way for the experimental observation of phonon-mediated repulsion. This work was supported by NSERC of Canada and the Stewart Blusson Quantum Matter Institute.
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
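The core randomized range-finder idea behind such implementations can be sketched in a few lines, following the well-known Halko-Martinsson-Tropp scheme; this is not the Algorithm 971 code, and the oversampling and power-iteration counts are illustrative defaults.

import numpy as np

def randomized_svd(A, rank, oversample=10, n_power_iter=2, seed=None):
    # Sample the range of A with a Gaussian test matrix.
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
    # Power iterations sharpen accuracy when singular values decay slowly.
    for _ in range(n_power_iter):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Solve a small exact SVD in the projected subspace.
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :rank], s[:rank], Vt[:rank]

A = np.random.default_rng(1).standard_normal((500, 300))
U, s, Vt = randomized_svd(A, rank=20)
print("leading approximate singular values:", s[:5])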
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monras, Alex; Illuminati, Fabrizio
2011-01-15
We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that at high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only can the minimum variance be achieved at fixed probe states, but also the optimal state is common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values to find that maximally entangled states within d-dimensional cutoff subspaces (d ≤ 6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.
NASA Astrophysics Data System (ADS)
Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.
2005-05-01
Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications such as passport identification, however, only one well-controlled frontal sample image is available for training. In this situation, the performance of existing algorithms degrades dramatically, or the algorithms may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. We then propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. The FERET database is used to evaluate the proposed method, and the results are encouraging.
NASA Astrophysics Data System (ADS)
Thallmair, Sebastian; Roos, Matthias K.; de Vivie-Riedle, Regina
2016-06-01
Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, as its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows us to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.
Collaboration space division in collaborative product development based on a genetic algorithm
NASA Astrophysics Data System (ADS)
Qian, Xueming; Ma, Yanqiao; Feng, Huan
2018-02-01
Advances in the global environment, rapidly changing markets, and information technology have created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of CPD, which requires solving the grouping problem with some foresight. The development group's activities are influenced not only by the methods and decisions available, but also by the correlations among personnel. Grouping the personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools for optimizing the collaboration space: the MDL principle is used to set up an objective function, and the GA serves as the optimization methodology. The algorithm encodes spatial information as a binary chromosome. After repeated crossover, mutation, selection, and reproduction, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can determine the members of each sub-space and individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.
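A toy sketch of the encoding and search loop follows; all numbers, the random correlation matrix, and the MDL-style penalty are invented for illustration, and the paper's actual objective function is more elaborate.

import numpy as np

rng = np.random.default_rng(2)
n_people = 12
C = rng.random((n_people, n_people)); C = (C + C.T) / 2   # correlation matrix (CM)

def fitness(chrom):
    # Reward intra-group correlation; penalize unbalanced groups (crude MDL stand-in).
    score = sum(C[np.ix_(g, g)].sum()
                for g in (np.flatnonzero(chrom == 0), np.flatnonzero(chrom == 1))
                if len(g) > 1)
    return score - 0.5 * abs(chrom.sum() - n_people / 2)

pop = rng.integers(0, 2, (30, n_people))                  # binary chromosomes
for _ in range(100):
    f = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(f)[-15:]]                    # selection
    a = parents[rng.integers(0, 15, 30)]
    b = parents[rng.integers(0, 15, 30)]
    cut = rng.integers(1, n_people, (30, 1))
    pop = np.where(np.arange(n_people) < cut, a, b)       # one-point crossover
    pop ^= (rng.random(pop.shape) < 0.02)                 # bit-flip mutation

best = pop[np.argmax([fitness(c) for c in pop])]
print("sub-space assignment:", best)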
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
In many military and homeland security persistent surveillance applications, accurate detection of different skin colors under varying observability and illumination conditions is a valuable capability for video analytics. One such application is In-Vehicle Group Activity (IVGA) recognition, in which significant changes in observability and illumination may occur during the course of a specific human group activity of interest. Most existing skin color detection algorithms, however, are unable to perform satisfactorily in confined operational spaces with partial observability and occlusion, as well as under diverse and changing levels of illumination intensity, reflection, and diffraction. In this paper, we investigate the salient features of ten popular color spaces for skin subspace color modeling. More specifically, we examine the advantages and disadvantages of each of these color spaces, as well as the stability and suitability of their features in differentiating skin colors under various illumination conditions. The salient features of the different color subspaces are methodically discussed and graphically presented. Furthermore, we present robust and adaptive algorithms for skin color detection based on this analysis. Through examples, we demonstrate the efficiency and effectiveness of these new skin color detection algorithms and discuss their applicability for skin detection in IVGA recognition applications.
Gene selection for microarray data classification via subspace learning and manifold regularization.
Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui
2017-12-19
With the rapid development of DNA microarray technology, a large amount of genomic data has been generated. Classification of these microarray data is a challenging task since gene expression data often comprise thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with the irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold-regularized subspace learning problem. Specifically, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator for the different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.
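A heavily simplified, single-shot variant of this idea is sketched below; it is an assumed form, not the paper's iterative algorithm. A kNN graph Laplacian preserves local structure, a ridge-style closed-form solve produces the projection matrix, and genes are ranked by the row norms of that matrix. The weights alpha and beta and the PCA-based target embedding are stand-in choices.

import numpy as np

rng = np.random.default_rng(3)
n_genes, n_samples, k_dim, knn = 100, 40, 5, 5
X = rng.standard_normal((n_genes, n_samples))        # genes x samples

# kNN graph Laplacian over samples.
G = X.T
D2 = ((G[:, None, :] - G[None, :, :]) ** 2).sum(-1)
Adj = np.zeros((n_samples, n_samples))
for i in range(n_samples):
    Adj[i, np.argsort(D2[i])[1:knn + 1]] = 1.0
Adj = np.maximum(Adj, Adj.T)
L = np.diag(Adj.sum(1)) - Adj

# Ridge-style solve for a projection toward a PCA-like embedding Y.
alpha, beta = 0.1, 0.1
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y = (np.diag(s[:k_dim]) @ Vt[:k_dim]).T              # samples x k target
A = X @ X.T + alpha * X @ L @ X.T + beta * np.eye(n_genes)
W = np.linalg.solve(A, X @ Y)                        # genes x k projection matrix
scores = np.linalg.norm(W, axis=1)                   # per-gene importance
print("top-ranked genes:", np.argsort(scores)[::-1][:10])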
Structured Kernel Subspace Learning for Autonomous Robot Navigation.
Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai
2018-02-14
This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
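As a point of reference, the most basic structured (symmetric PSD, low-rank) approximation of a noisy kernel matrix can be obtained by eigenvalue truncation, sketched below; the paper's nuclear-norm/l1-norm formulation goes further by making this robust to outliers, so treat this only as the unrobust baseline.

import numpy as np

rng = np.random.default_rng(4)
Z = rng.standard_normal((50, 3))
K = np.exp(-0.5 * ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))   # RBF Gram matrix
K = K + 0.05 * rng.standard_normal(K.shape)
K = (K + K.T) / 2                          # noisy symmetric observation

w, V = np.linalg.eigh(K)
r = 5
w_r = np.clip(w[-r:], 0.0, None)           # top-r eigenvalues, clipped to enforce PSD
K_r = (V[:, -r:] * w_r) @ V[:, -r:].T      # rank-r symmetric PSD approximation
print("Frobenius error of the rank-5 approximation:", np.linalg.norm(K - K_r))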
Casimir Effect in Hemisphere Capped Tubes
NASA Astrophysics Data System (ADS)
Bezerra de Mello, E. R.; Saharian, A. A.
2016-02-01
In this paper we investigate the vacuum densities for a massive scalar field with general curvature coupling in the background of a (2 + 1)-dimensional spacetime corresponding to a cylindrical tube with a hemispherical cap. A complete set of mode functions is constructed and the positive-frequency Wightman function is evaluated for both the cylindrical and hemispherical subspaces. On this basis, the vacuum expectation values of the field squared and energy-momentum tensor are investigated. The mean field squared and the normal stress are finite on the boundary separating the two subspaces, whereas the energy density and the parallel stress diverge as the inverse power of the distance from the boundary. For a conformally coupled field, the vacuum energy density is negative on the cylindrical part of the space. On the hemisphere, it is negative near the top and positive close to the boundary. In the case of minimal coupling, the energy density on the cap is negative. On the tube it is positive near the boundary and negative at large distances. Though the geometries of the subspaces are different, the Casimir pressures on the separate sides of the boundary are equal and the net Casimir force vanishes. The results obtained may be applied to capped carbon nanotubes described by an effective field theory in the long-wavelength approximation.
Position, Location, Place and Area: AN Indoor Perspective
NASA Astrophysics Data System (ADS)
Sithole, George; Zlatanova, Sisi
2016-06-01
Over the last decade, harnessing the commercial potential of smart mobile devices in indoor environments has spurred interest in indoor mapping and navigation. Users experience indoor environments differently. For this reason navigational models have to be designed to adapt to a user's personality, and to reflect as many cognitive maps as possible. This paper presents an extension of a previously proposed framework. In this extension the notion of placement is accounted for, thereby enabling one aspect of the `personalised indoor experience'. In the paper, firstly referential expressions are used as a tool to discuss the different ways of thinking of placement within indoor spaces. Next, placement is expressed in terms of the concept of Position, Location, Place and Area. Finally, the previously proposed framework is extended to include these concepts of placement. An example is provided of the use of the extended framework. Notable characteristics of the framework are: (1) Sub-spaces, resources and agents can simultaneously possess different types of placement, e.g., a person in a room can have an xyz position and a location defined by the room number. While these entities can simultaneously have different forms of placement, only one is dominant. (2) Sub-spaces, resources and agents are capable of possessing modifiers that alter their access and usage. (3) Sub-spaces inherit the modifiers of the resources or agents contained in them. (4) Unlike conventional navigational models which treat resources and obstacles as different types of entities, in the proposed framework there are only resources and whether a resource is an obstacle is determined by a modifier that determines whether a user can access the resource. The power of the framework is that it blends the geometry and topology of space, the influence of human activity within sub-spaces together with the different notions of placement in a way that is simple and yet very flexible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Ichitaro; Wu, Kesheng; Simon, Horst
2008-10-27
The original software package TRLan, [TRLan User Guide], page 24, implements the thick restart Lanczos method, [Wu and Simon 2001], page 24, for computing eigenvalues {lambda} and their corresponding eigenvectors v of a symmetric matrix A: Av = {lambda}v. Its effectiveness in computing the exterior eigenvalues of a large matrix has been demonstrated, [LBNL-42982], page 24. However, its performance strongly depends on the user-specified dimension of a projection subspace. If the dimension is too small, TRLan suffers from slow convergence. If it is too large, the computational and memory costs become expensive. Therefore, to balance the solution convergence and costs, users must select an appropriate subspace dimension for each eigenvalue problem at hand. To free users from this difficult task, nu-TRLan, [LBNL-1059E], page 23, adjusts the subspace dimension at every restart such that optimal performance in solving the eigenvalue problem is automatically obtained. This document provides a user guide to the nu-TRLan software package. The original TRLan software package was implemented in Fortran 90 to solve symmetric eigenvalue problems using static projection subspace dimensions. nu-TRLan was developed in C and extended to solve Hermitian eigenvalue problems. It can be invoked using either a static or an adaptive subspace dimension. In order to simplify its use for TRLan users, nu-TRLan has interfaces and features similar to those of TRLan: (1) Solver parameters are stored in a single data structure called trl-info, Chapter 4 [trl-info structure], page 7. (2) Most of the numerical computations are performed by BLAS, [BLAS], page 23, and LAPACK, [LAPACK], page 23, subroutines, which allow nu-TRLan to achieve optimized performance across a wide range of platforms. (3) To solve eigenvalue problems on distributed memory systems, the message passing interface (MPI), [MPI forum], page 23, is used. The rest of this document is organized as follows. In Chapter 2 [Installation], page 2, we provide an installation guide for the nu-TRLan software package. In Chapter 3 [Example], page 3, we present a simple nu-TRLan example program. In Chapter 4 [trl-info structure], page 7, and Chapter 5 [trlan subroutine], page 14, we describe the solver parameters and interfaces in detail. In Chapter 6 [Solver parameters], page 21, we discuss the selection of the user-specified parameters. In Chapter 7 [Contact information], page 22, we give the acknowledgements and contact information of the authors. In Chapter 8 [References], page 23, we list references to related works.
Non-commuting two-local Hamiltonians for quantum error suppression
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Rieffel, Eleanor G.
2017-04-01
Physical constraints make it challenging to implement and control many-body interactions. For this reason, designing quantum information processes with Hamiltonians consisting of only one- and two-local terms is a worthwhile challenge. Enabling error suppression with two-local Hamiltonians is particularly challenging. A no-go theorem of Marvian and Lidar (Phys Rev Lett 113(26):260504, 2014) demonstrates that, even allowing particles with high Hilbert space dimension, it is impossible to protect quantum information from single-site errors by encoding in the ground subspace of any Hamiltonian containing only commuting two-local terms. Here, we get around this no-go result by encoding in the ground subspace of a Hamiltonian consisting of non-commuting two-local terms arising from the gauge operators of a subsystem code. Specifically, we show how to protect stored quantum information against single-qubit errors using a Hamiltonian consisting of sums of the gauge generators from Bacon-Shor codes (Bacon in Phys Rev A 73(1):012340, 2006) and generalized-Bacon-Shor code (Bravyi in Phys Rev A 83(1):012320, 2011). Our results imply that non-commuting two-local Hamiltonians have more error-suppressing power than commuting two-local Hamiltonians. While far from providing full fault tolerance, this approach improves the robustness achievable in near-term implementable quantum storage and adiabatic quantum computations, reducing the number of higher-order terms required to encode commonly used adiabatic Hamiltonians such as the Ising Hamiltonians common in adiabatic quantum optimization and quantum annealing.
NASA Astrophysics Data System (ADS)
Kirschner, Matthias; Wesarg, Stefan
2011-03-01
Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability for fast and accurate segmentation of anatomical structures even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM and an improved k-Nearest Neighbour (kNN)-classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.
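The DFFS term itself is simple to state: it is the residual norm of a mean-centered shape after projection onto the learned PCA subspace, so shapes far from the subspace are penalized softly rather than clamped onto it. A minimal sketch with synthetic shape vectors (all sizes are illustrative):

import numpy as np

rng = np.random.default_rng(5)
shapes = rng.standard_normal((34, 60))     # training shapes (stacked landmark coordinates)
mean = shapes.mean(0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
B = Vt[:8].T                               # columns: top-8 PCA shape modes

def dffs(x):
    # Distance From Feature Space: norm of the component outside the PCA subspace.
    r = x - mean
    return np.linalg.norm(r - B @ (B.T @ r))

print("DFFS of a random shape:", dffs(rng.standard_normal(60)))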
A Variational Approach to Video Registration with Subspace Constraints.
Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes
2013-01-01
This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector valued images, based on the dualization of the data term. This allows us to extend our approach to deal with colour images which results in significant improvements on the registration results. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground truth optical flow for evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state of the art optical flow and dense non-rigid registration algorithms.
Theseus: tethered distributed robotics (TDR)
NASA Astrophysics Data System (ADS)
Digney, Bruce L.; Penzes, Steven G.
2003-09-01
Defence Research and Development Canada's (DRDC) Autonomous Intelligent Systems program conducts research to increase the independence and effectiveness of military vehicles and systems. DRDC-Suffield's Autonomous Land Systems (ALS) group is creating new concept vehicles and autonomous control systems for use in outdoor areas, urban streets, urban interiors and urban subspaces. This paper will first give an overview of the ALS program and then give a specific description of the work being done for mobility in urban subspaces. Discussed will be the Theseus: Tethered Distributed Robotics (TDR) system, which will not only manage an unavoidable tether but exploit it for mobility and navigation. Also discussed will be the prototype robot called the Hedgehog, which uses conformal 3D mobility in ducts, sewer pipes, collapsed rubble voids and chimneys.
Kerfriden, P.; Schmidt, K.M.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose to identify process zones in heterogeneous materials by tailored statistical tools. The process zone is redefined as the part of the structure where the random process cannot be correctly approximated in a low-dimensional deterministic space. Such a low-dimensional space is obtained by a spectral analysis performed on pre-computed solution samples. A greedy algorithm is proposed to identify both process zone and low-dimensional representative subspace for the solution in the complementary region. In addition to the novelty of the tools proposed in this paper for the analysis of localised phenomena, we show that the reduced space generated by the method is a valid basis for the construction of a reduced order model. PMID:27069423
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
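The subspace idea at the heart of MUSIC is compact enough to sketch for the classical narrowband, uniform-linear-array case; the array geometry, source count, and noise level below are assumptions for the demo, and the thesis adapts the method to quasi-static MEG/EEG source models rather than this setting.

import numpy as np

rng = np.random.default_rng(6)
m, snapshots = 8, 200                         # sensors, time samples
true_angles = np.deg2rad([-20.0, 35.0])

def steering(theta):
    # Half-wavelength element spacing.
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(theta))

A = steering(true_angles)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
Xdata = A @ S + N

R = Xdata @ Xdata.conj().T / snapshots        # spatial correlation matrix
w, V = np.linalg.eigh(R)
En = V[:, :-2]                                # noise subspace (two sources assumed known)

grid = np.deg2rad(np.linspace(-90, 90, 721))
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2   # pseudospectrum
peaks = np.flatnonzero((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])) + 1
best = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))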
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
Bigdely-Shamlo, Nima; Mullen, Tim; Kreutz-Delgado, Kenneth; Makeig, Scott
2013-01-01
A crucial question for the analysis of multi-subject and/or multi-session electroencephalographic (EEG) data is how to combine information across multiple recordings from different subjects and/or sessions, each associated with its own set of source processes and scalp projections. Here we introduce a novel statistical method for characterizing the spatial consistency of EEG dynamics across a set of data records. Measure Projection Analysis (MPA) first finds voxels in a common template brain space at which a given dynamic measure is consistent across nearby source locations, then computes local-mean EEG measure values for this voxel subspace using a statistical model of source localization error and between-subject anatomical variation. Finally, clustering the mean measure voxel values in this locally consistent brain subspace finds brain spatial domains exhibiting distinguishable measure features and provides 3-D maps plus statistical significance estimates for each EEG measure of interest. Applied to sufficient high-quality data, the scalp projections of many maximally independent component (IC) processes contributing to recorded high-density EEG data closely match the projection of a single equivalent dipole located in or near brain cortex. We demonstrate the application of MPA to a multi-subject EEG study decomposed using independent component analysis (ICA), compare the results to k-means IC clustering in EEGLAB (sccn.ucsd.edu/eeglab), and use surrogate data to test MPA robustness. A Measure Projection Toolbox (MPT) plug-in for EEGLAB is available for download (sccn.ucsd.edu/wiki/MPT). Together, MPA and ICA allow use of EEG as a 3-D cortical imaging modality with near-cm scale spatial resolution. PMID:23370059
Imaging of heart acoustic based on the sub-space methods using a microphone array.
Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo
2017-07-01
Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG using a microphone array, by which the internal heart sound sources can also be localized. This paper proposes a modality by which the locations of the active sources in the heart can be investigated during a cardiac cycle. To this end, a microphone array with six microphones is employed as the recording setup, placed on the human chest. The Group Delay MUSIC algorithm, a subspace-based localization method, is then used to estimate the locations of the heart sources in different phases of the PCG. We achieved a mean error of 0.14 cm for the sources of the first heart sound (S1) simulator and a mean error of 0.21 cm for the sources of the second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustical diagrams created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart acoustic map presents a new outlook for characterizing the acoustic properties of the cardiovascular system and disorders of the valves and thereby, in the future, could be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
Maximum projection designs for computer experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, V. Roshan; Gul, Evren; Ba, Shan
Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full-dimensional space. This can result in poor projections onto lower-dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than that of a design criterion that ignores projection properties.
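The criterion itself is easy to evaluate. The sketch below uses what we believe is the standard MaxPro form, the average over point pairs of the reciprocal product of squared coordinate-wise distances (smaller is better), and merely scores random Latin hypercube designs rather than running the paper's optimization.

import numpy as np
from itertools import combinations

def maxpro_criterion(D):
    # Blows up whenever two points nearly share a coordinate in any dimension,
    # so minimizing it forces spread in every one-dimensional projection.
    n, p = D.shape
    total = sum(1.0 / np.prod((D[i] - D[j]) ** 2) for i, j in combinations(range(n), 2))
    return (2.0 * total / (n * (n - 1))) ** (1.0 / p)

rng = np.random.default_rng(7)
n, p = 20, 4
levels = (np.arange(n) + 0.5) / n
candidates = (rng.permuted(np.tile(levels, (p, 1)), axis=1).T for _ in range(200))
best = min(candidates, key=maxpro_criterion)
print("best MaxPro criterion among 200 random LHDs:", maxpro_criterion(best))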
Time-reversal optical tomography: detecting and locating extended targets in a turbid medium
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Xu, M.; Gayen, S. K.
2012-03-01
Time Reversal Optical Tomography (TROT) is developed to locate extended target(s) in a highly scattering turbid medium and estimate their optical strength and size. The approach uses the diffusion approximation of the radiative transfer equation for light propagation, along with a Time Reversal (TR) Multiple Signal Classification (MUSIC) scheme for the signal and noise subspaces for assessment of target location. A MUSIC pseudospectrum is calculated using the eigenvectors of the TR matrix T, whose poles provide target locations. Based on the pseudospectrum contours, retrieval of target size is modeled as an optimization problem using a "local contour" method. The eigenvalues of T are related to the optical strengths of the targets. The efficacy of TROT in obtaining the location, size, and optical strength of one absorptive target, one scattering target, and two absorptive targets, each at different noise levels, was tested using simulated data. Target locations were always accurately determined. Error in the optical strength estimates was small even at the 20% noise level. Target size and shape were more sensitive to noise. Results from simulated data demonstrate the high potential of TROT for practical biomedical imaging applications.
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction, which preserves power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via the message passing interface are proposed. The quality of the ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
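A bare-bones version of the projection step reads as follows; this is a sketch under simplifying assumptions (an undamped toy system, a basis taken directly from forced responses at a few interpolation frequencies, and a plain Galerkin projection) rather than the dissertation's matrix-free, power-based construction.

import numpy as np

rng = np.random.default_rng(8)
n = 300
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness-like matrix
M = np.eye(n) / n                                        # mass-like matrix
f = np.zeros(n); f[0] = 1.0                              # drive-point force

interp = [0.5, 1.0, 2.0]                                 # interpolation frequencies (rad/s)
V = np.linalg.qr(np.column_stack(
        [np.linalg.solve(K - w ** 2 * M, f) for w in interp]))[0]

Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f           # Galerkin-projected ROM

w = 1.3                                                  # evaluation frequency
x_full = np.linalg.solve(K - w ** 2 * M, f)
x_rom = V @ np.linalg.solve(Kr - w ** 2 * Mr, fr)
print("drive-point response, full vs ROM:", x_full[0], x_rom[0])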
Approximation methods for inverse problems involving the vibration of beams with tip bodies
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Two cubic spline based approximation schemes for the estimation of structural parameters associated with the transverse vibration of flexible beams with tip appendages are outlined. The identification problem is formulated as a least squares fit to data subject to the system dynamics which are given by a hybrid system of coupled ordinary and partial differential equations. The first approximation scheme is based upon an abstract semigroup formulation of the state equation while a weak/variational form is the basis for the second. Cubic spline based subspaces together with a Rayleigh-Ritz-Galerkin approach were used to construct sequences of easily solved finite dimensional approximating identification problems. Convergence results are briefly discussed and a numerical example demonstrating the feasibility of the schemes and exhibiting their relative performance for purposes of comparison is provided.
Ground-state and Thermodynamic Properties of an S = 1 Kitaev Model
NASA Astrophysics Data System (ADS)
Koga, Akihisa; Tomishige, Hiroyuki; Nasu, Joji
2018-06-01
We study the ground-state and thermodynamic properties of an S = 1 Kitaev model. We first clarify the existence of a global parity symmetry in addition to the local symmetry on each plaquette, which enables us to perform large-scale calculations on up to 24 sites. It is found that the ground state should be a singlet, and its energy is estimated as E/N ≈ -0.65J, where J is the Kitaev exchange coupling. We find that the lowest excited state belongs to the same subspace as the ground state, and that the gap decreases monotonically with increasing system size, which suggests that the ground state of the S = 1 Kitaev model is gapless. Using thermal pure quantum states, we clarify the finite-temperature properties characteristic of the Kitaev models with S ≤ 2.
Hyperspectral image analysis for standoff trace detection using IR laser spectroscopy
NASA Astrophysics Data System (ADS)
Jarvis, J.; Fuchs, F.; Hugger, S.; Ostendorf, R.; Butschek, L.; Yang, Q.; Dreyhaupt, A.; Grahmann, J.; Wagner, J.
2016-05-01
In the recent past, infrared laser backscattering spectroscopy using Quantum Cascade Lasers (QCLs) emitting in the molecular fingerprint region between 7.5 μm and 10 μm has proved a highly promising approach for stand-off detection of dangerous substances. In this work we present an active-illumination hyperspectral image sensor utilizing QCLs as spectrally selective illumination sources. A high-performance Mercury Cadmium Telluride (MCT) imager is used to collect the diffusely backscattered light. Well-known target detection algorithms such as the Adaptive Matched Subspace Detector and the Adaptive Coherence Estimator are used to detect pixel vectors in the recorded hyperspectral image that contain traces of explosive substances such as PETN, RDX or TNT. In addition, we present an extension of the backscattering spectroscopy technique towards real-time detection using a MOEMS EC-QCL.
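For concreteness, the Adaptive Coherence Estimator reduces to a short whitened-cosine statistic; the sketch below scores synthetic pixels against an assumed target signature, and everything here (signature, covariance, abundances) is invented for the demo.

import numpy as np

rng = np.random.default_rng(9)
bands, n_pix = 30, 1000
background = rng.standard_normal((n_pix, bands))      # background spectra
target = rng.random(bands)                            # assumed target signature
mu = background.mean(0)
Sigma_inv = np.linalg.inv(np.cov(background, rowvar=False))

def ace(x):
    # Squared whitened correlation between the pixel and the target signature.
    xc, s = x - mu, target - mu
    return (s @ Sigma_inv @ xc) ** 2 / ((s @ Sigma_inv @ s) * (xc @ Sigma_inv @ xc))

pixel = 0.7 * target + 0.3 * rng.standard_normal(bands)   # pixel containing the target
print("ACE score, target pixel vs pure background:", ace(pixel), ace(background[0]))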
NASA Astrophysics Data System (ADS)
Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza
2015-09-01
GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, the GNSS kinematic equations can be augmented with additional constraints. Such constraints may be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite system positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was built, and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is drawn to how a constraint is applied in kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This has been verified based on the results obtained from the covariance matrix of the estimated parameters as well as the empirical results from kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
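The effect of a geometric constraint is easy to reproduce in miniature: estimate a position from noisy ranges once over the full plane and once with the solution parametrized to lie on a known circle, so the parameter space shrinks from two dimensions to one. The beacon geometry and noise level below are invented for the illustration.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)
beacons = np.array([[0.0, 50.0], [40.0, -10.0], [-35.0, -25.0]])
center, radius = np.zeros(2), 10.0
p_true = center + radius * np.array([np.cos(0.8), np.sin(0.8)])   # true point on the circle
ranges = np.linalg.norm(beacons - p_true, axis=1) + 0.3 * rng.standard_normal(3)

# Unconstrained 2-D solve.
res_u = least_squares(lambda p: np.linalg.norm(beacons - p, axis=1) - ranges, x0=[1.0, 1.0])

# Constrained solve: a single angle parameter on the circle.
on_circle = lambda phi: center + radius * np.array([np.cos(phi[0]), np.sin(phi[0])])
res_c = least_squares(lambda phi: np.linalg.norm(beacons - on_circle(phi), axis=1) - ranges,
                      x0=[0.0])

print("unconstrained:", res_u.x, "constrained:", on_circle(res_c.x), "truth:", p_true)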
The role of model dynamics in ensemble Kalman filter performance for chaotic systems
Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.
2011-01-01
The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model (corresponding to short-term growth) and Lyapunov vectors (corresponding to long-term growth). Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long- and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
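For reference, the stochastic EnKF analysis step that such studies build on fits in a few lines; the state size, linear observation operator, and noise levels below are illustrative only.

import numpy as np

rng = np.random.default_rng(11)
n_state, n_obs, n_ens = 40, 10, 20
Xf = rng.standard_normal((n_state, n_ens))                    # forecast ensemble (columns)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0           # observe every 4th variable
R = 0.25 * np.eye(n_obs)
y = rng.standard_normal(n_obs)                                # observations

A = Xf - Xf.mean(1, keepdims=True)                            # ensemble anomalies
HA = H @ A
Kgain = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(HA @ HA.T / (n_ens - 1) + R)

Ypert = y[:, None] + 0.5 * rng.standard_normal((n_obs, n_ens))  # perturbed observations
Xa = Xf + Kgain @ (Ypert - H @ Xf)                            # analysis ensemble
print("forecast spread:", A.std(),
      "analysis spread:", (Xa - Xa.mean(1, keepdims=True)).std())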
Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism
NASA Astrophysics Data System (ADS)
Parish, Eric; Duraisamy, Karthk
2017-11-01
The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
Low-Rank Discriminant Embedding for Multiview Learning.
Li, Jingjing; Wu, Yue; Zhao, Jidong; Lu, Ke
2017-11-01
This paper focuses on the specific problem of multiview learning, where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views, assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning that takes full advantage of both. By further considering the duality between data points and features of a multiview scene, i.e., data points can be grouped based on their distribution over features, while features can be grouped based on their distribution over the data points, LRDE not only deploys low-rank constraints at both the sample level and the feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview and pairwise settings demonstrate that LRDE performs much better than previous approaches proposed in the recent literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grosso, Marcos; Kalstein, Adrian; Parisi, Gustavo
The native state of a protein consists of an equilibrium of conformational states on an energy landscape rather than existing as a single static state. The co-existence of conformers with different ligand-affinities in a dynamical equilibrium is the basis for the conformational selection model of ligand binding. In this context, the development of theoretical methods that allow us to analyze not only the structural changes but also changes in the fluctuation patterns between conformers will contribute to elucidating the differential properties acquired upon ligand binding. Molecular dynamics simulations can provide the required information to explore these features. Their use in combination with subsequent essential dynamics analysis allows separating large concerted conformational rearrangements from irrelevant fluctuations. We present a novel procedure to define the size and composition of essential dynamics subspaces associated with ligand-bound and ligand-free conformations. These definitions allow us to compare essential dynamics subspaces between different conformers. Our procedure attempts to emphasize the main similarities and differences between the different essential dynamics in an unbiased way. Essential dynamics subspaces associated with conformational transitions can also be analyzed. As a test case, we study the glutaminase interacting protein (GIP), composed of a single PDZ domain. Both the GIP ligand-free state and the glutaminase L peptide-bound state are analyzed. Our findings concerning the relative changes in the flexibility pattern upon binding are in good agreement with experimental Nuclear Magnetic Resonance data.
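One common way to quantify such subspace comparisons is the root-mean-square inner product (RMSIP) between the leading principal components of two trajectories; the sketch below applies it to synthetic stand-ins for ligand-free and ligand-bound MD data, whereas the procedure in the abstract goes further by also choosing the subspace sizes.

import numpy as np

rng = np.random.default_rng(12)
n_frames, n_coords, k = 500, 90, 10       # frames, 3N Cartesian coordinates, modes kept

def essential_modes(traj, k):
    # PCA of coordinate fluctuations: eigenvectors of the covariance matrix.
    c = traj - traj.mean(0)
    w, V = np.linalg.eigh(c.T @ c / (len(traj) - 1))
    return V[:, -k:]                       # top-k modes span the essential subspace

traj_free = rng.standard_normal((n_frames, n_coords))    # stand-in: ligand-free run
traj_bound = rng.standard_normal((n_frames, n_coords))   # stand-in: ligand-bound run

Vf = essential_modes(traj_free, k)
Vb = essential_modes(traj_bound, k)
rmsip = np.sqrt(((Vf.T @ Vb) ** 2).sum() / k)
print("RMSIP between essential subspaces:", rmsip)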
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
The solution of inverse problems is a meaningful task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains relevant. ANNIT contributes to this effort as a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also maintains an archive of already evaluated models; archived models are re-used so that the number of forward evaluations is minimized. ANNIT is implemented in both MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and the documentation are available on the Internet for anybody to download. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in solving inverse problems.
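The following is a minimal sketch of one ANNIT-style prediction cycle, using only the simplest of the three approximations named above (linear regression) together with an archive of evaluated models; all function and variable names are hypothetical, and the actual code handles population generation and subspace selection in far more detail:

```python
import numpy as np

def annit_like_invert(forward, d_obs, p_pop, n_iter=20):
    """Sketch of a prediction-based inversion loop (not the ANNIT code itself).

    forward: callable p -> d implementing the (expensive) forward mapping F
    d_obs:   observed data vector
    p_pop:   (n, n_params) initial population of models covering the model space
    """
    archive_p = [np.asarray(p, float) for p in p_pop]
    archive_d = [np.asarray(forward(p), float) for p in archive_p]  # model archive
    for _ in range(n_iter):
        D = np.array(archive_d)
        P = np.array(archive_p)
        # Local linear approximation of the inverse mapping G: d -> p.
        A = np.hstack([D, np.ones((len(D), 1))])
        coef, *_ = np.linalg.lstsq(A, P, rcond=None)
        p_new = np.append(d_obs, 1.0) @ coef        # predict a candidate solution
        archive_p.append(p_new)                     # archive it so that forward()
        archive_d.append(np.asarray(forward(p_new), float))  # is never re-evaluated
    misfit = [np.linalg.norm(d - d_obs) for d in archive_d]
    return archive_p[int(np.argmin(misfit))]
```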
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C. L.; Funk, L. L.; Riedel, R. A.
3He-gas-based neutron linear-position-sensitive detectors (LPSDs) have been applied in many neutron scattering instruments. Traditional Pulse-Height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) has yielded neutron-gamma efficiency ratios on the order of 10^5-10^6. The NGD ratios of 3He detectors need to be improved for even better scientific results from neutron scattering. Digital Signal Processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher linear discriminant analysis (FLDA) and three multivariate analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10^2-10^3 times compared with the traditional PHA method. Finally, our results indicate that the NGD capabilities of 3He tube detectors can be significantly improved with subspace-learning-based methods, which may result in reduced data-collection time and better data quality for further data reduction.
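A compact sketch of the Fisher linear discriminant step named above, assuming each event has already been reduced to a feature vector (rise time, amplitude, integrated charge, filter and correlation scores); the arrays here are hypothetical placeholders:

```python
import numpy as np

def fisher_lda_direction(X_neutron, X_gamma):
    """Fisher linear discriminant for neutron vs. gamma waveform features.

    X_neutron, X_gamma: (n_events, n_features) arrays of extracted features.
    """
    mu_n, mu_g = X_neutron.mean(axis=0), X_gamma.mean(axis=0)
    Sw = np.cov(X_neutron, rowvar=False) + np.cov(X_gamma, rowvar=False)
    w = np.linalg.solve(Sw, mu_n - mu_g)   # within-class-whitened mean difference
    return w / np.linalg.norm(w)

# Each event is scored by projecting its feature vector onto w;
# a threshold on this scalar score separates neutron from gamma pulses.
```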
Liu, Ya; Pan, Xianzhang; Wang, Changkun; Li, Yanli; Shi, Rongjie
2015-01-01
Robust models for predicting soil salinity that use visible and near-infrared (vis–NIR) reflectance spectroscopy are needed to better quantify soil salinity in agricultural fields. Currently available models are not sufficiently robust for variable soil moisture contents. Thus, we used external parameter orthogonalization (EPO), which effectively projects spectra onto the subspace orthogonal to unwanted variation, to remove the variations caused by an external factor, e.g., the influences of soil moisture on spectral reflectance. In this study, 570 spectra between 380 and 2400 nm were obtained from soils with various soil moisture contents and salt concentrations in the laboratory; 3 soil types × 10 salt concentrations × 19 soil moisture levels were used. To examine the effectiveness of EPO, we compared the partial least squares regression (PLSR) results established from spectra with and without EPO correction. The EPO method effectively removed the effects of moisture, and the accuracy and robustness of the soil salt contents (SSCs) prediction model, which was built using the EPO-corrected spectra under various soil moisture conditions, were significantly improved relative to the spectra without EPO correction. This study contributes to the removal of soil moisture effects from soil salinity estimations when using vis–NIR reflectance spectroscopy and can assist others in quantifying soil salinity in the future. PMID:26468645
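A minimal numpy sketch of the EPO correction described above; how the difference matrix D is assembled and how many components g to remove are assumptions about common EPO practice, not details stated in this abstract:

```python
import numpy as np

def epo_projection(D, g):
    """Build the EPO projector from moisture-induced spectral differences.

    D: (n, p) matrix of difference spectra (e.g., each soil scanned at several
       moisture levels, minus that soil's reference spectrum).
    g: number of unwanted-variation directions to remove.
    """
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    Q = Vt[:g].T                          # dominant moisture directions
    return np.eye(D.shape[1]) - Q @ Q.T   # projector onto their orthogonal complement

# Corrected spectra: X_epo = X @ P; the PLSR model is then fit on X_epo.
```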
Liu, Hesen; Zhu, Lin; Pan, Zhuohong; ...
2015-09-14
One of the main drawbacks of existing oscillation damping controllers designed from offline dynamic models is their lack of adaptivity to the power system operating condition. With the increasing availability of wide-area measurements and the rapid development of system identification techniques, it is possible to identify a measurement-based transfer function model online that can be used to tune the oscillation damping controller. Such a model could capture all dominant oscillation modes for adaptive and coordinated oscillation damping control. Our paper describes a comprehensive approach to identify a low-order transfer function model of a power system using a multi-input multi-output (MIMO) autoregressive moving average exogenous (ARMAX) model. The methodology consists of five steps: 1) input selection; 2) output selection; 3) identification trigger; 4) model estimation; and 5) model validation. The proposed method is validated using ambient data and ring-down data in the 16-machine 68-bus Northeast Power Coordinating Council system. Our results demonstrate that the measurement-based model using MIMO ARMAX can capture all the dominant oscillation modes. Compared with the MIMO subspace state-space model, the MIMO ARMAX model has equivalent accuracy but lower order and improved computational efficiency. The proposed model can be applied for adaptive and coordinated oscillation damping control.
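For flavor, a least-squares fit of a single-output ARX model is sketched below; this is a deliberate simplification, since the ARMAX model above adds a moving-average noise term that requires iterative estimation, and the paper treats the MIMO case:

```python
import numpy as np

def fit_arx(u, y, na, nb):
    """Least-squares ARX fit: y[t] = -a1*y[t-1]-...-a_na*y[t-na]
                                     + b1*u[t-1]+...+b_nb*u[t-nb] + e[t].

    u, y: 1-D numpy arrays of input/output measurements (e.g., PMU signals).
    """
    n0 = max(na, nb)
    rows = [np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(n0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n0:], rcond=None)
    return theta[:na], theta[na:]   # AR coefficients, exogenous coefficients
```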
Velikina, Julia V; Samsonov, Alexey A
2015-11-01
To accelerate dynamic MR imaging through the development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is done in related techniques. This is achieved by using a data-driven model to design a transform for compressed-sensing-type regularization. Enforcing general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high-quality image restoration from highly undersampled CE-MRA and cardiac CINE data.
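A linear-algebra sketch of the regularization idea as read from this abstract (not the published MOCCO implementation): penalize, rather than forbid, the component of the solution lying outside the pre-estimated temporal subspace Phi, so a full-rank solution remains reachable.

```python
import numpy as np

def mocco_like_recon(E, d, Phi, lam):
    """Solve  min_x ||E x - d||^2 + lam * ||(I - Phi Phi^H) x||^2.

    E:   (m, n) encoding matrix (e.g., undersampled sampling operator)
    d:   (m,) measured data
    Phi: (n, r) orthonormal basis of the pre-estimated temporal model
    """
    n = Phi.shape[0]
    P = np.eye(n) - Phi @ Phi.conj().T     # projector onto the model residual
    lhs = E.conj().T @ E + lam * P         # normal equations of the penalized fit
    return np.linalg.solve(lhs, E.conj().T @ d)
```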
Multiscale deep features learning for land-use scene recognition
NASA Astrophysics Data System (ADS)
Yuan, Baohua; Li, Shijin; Li, Ning
2018-01-01
The features extracted from deep convolutional neural networks (CNNs) have shown their promise as generic descriptors for land-use scene recognition. However, most existing work directly adopts the deep features for the classification of remote sensing images without encoding them to improve their discriminative power, which limits the performance of the deep feature representations. To address this issue, we propose an effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank. LASC-CNN obtains deep features that are more discriminative than those extracted directly from CNNs. Furthermore, LASC-CNN builds on the top convolutional layers of CNNs, which allows it to incorporate multiscale information and regions of arbitrary resolutions and sizes. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance compared to other state-of-the-art methods.
Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki
2011-03-28
Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object, however, may appear only as subtle spectral features. Conventional trichromatic systems cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace that highlights an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundations on the human face. Results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.
NASA Astrophysics Data System (ADS)
Haneishi, Hideaki; Sakuda, Yasunori; Honda, Toshio
2002-06-01
The spectral reflectance of most reflective objects, such as natural objects and color hardcopy, is relatively smooth and can be approximated by a small number of principal components with high accuracy. Though the subspace spanned by those principal components represents a space in which reflective objects can exist, it does not provide the bounds within which the samples actually distribute. In this paper we propose to represent the gamut of reflective objects in a more distinct form, i.e., as a polyhedron in the subspace spanned by several principal components. The concept of the polyhedral gamut representation and its application to the calculation of metamer ensembles are described. The color-mismatch volume caused by a different illuminant and/or observer for a metamer ensemble is also calculated and compared with the theoretical one.
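A short sketch of the construction, assuming the gamut polyhedron is taken as the convex hull of the sample coordinates in the principal-component subspace (a natural reading of the abstract, not necessarily the authors' exact procedure):

```python
import numpy as np
from scipy.spatial import ConvexHull

def polyhedral_gamut(R, k=3):
    """Polyhedral gamut of reflectances in a k-dimensional PC subspace.

    R: (n_samples, n_wavelengths) measured spectral reflectances.
    """
    mu = R.mean(axis=0)
    V = np.linalg.svd(R - mu, full_matrices=False)[2][:k]   # top-k PC loadings
    coords = (R - mu) @ V.T                                 # subspace coordinates
    return ConvexHull(coords), mu, V   # hull facets bound where samples distribute
```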
Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun
2015-09-01
This paper proposes a different method for calculating the sound field diffracted by a rigid barrier based on the integral equation method, where a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined from the source inside it and the boundary conditions on its surface, and the diffracted sound field is then obtained by using the continuation conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for the whole space and is also much easier to understand.
Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Stead, R. J.; Begnaud, M. L.
2013-12-01
Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the subspace dimensionality, or principal components, of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node-traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis). Here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect to identify bad array elements through a jackknifing process that isolates the anomalous channels, so that an automated analysis system might discard them prior to FK analysis and beamforming on events of interest.
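An illustrative sketch of the two ingredients described above, using the participation ratio as one common dimensionality estimate (an assumption; the abstract does not commit to a specific estimator) and a delete-one-channel jackknife:

```python
import numpy as np

def effective_dimension(X):
    """Participation-ratio estimate of signal dimensionality.

    X: (n_channels, n_samples) window of array-wide time series.
    """
    s = np.linalg.svd(X - X.mean(axis=1, keepdims=True), compute_uv=False)
    lam = s ** 2
    return lam.sum() ** 2 / (lam ** 2).sum()

def jackknife_dimension(X):
    """Change in dimension when each channel is deleted in turn.

    A channel whose removal shifts the dimension anomalously is a
    candidate bad sensor to drop before FK analysis and beamforming.
    """
    base = effective_dimension(X)
    return np.array([effective_dimension(np.delete(X, i, axis=0)) - base
                     for i in range(X.shape[0])])
```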
Analysis of Deep Long-Period Subglacial Seismicity in Marie Byrd Land, Antarctica
NASA Astrophysics Data System (ADS)
McMahon, N. D.; Aster, R. C.; Myers, E. K.; Lough, A. C.
2017-12-01
We utilize subspace detection methodology to extend the detection and analysis of deep, long-period seismic activity associated with the subglacial and lower-crust magmatic complex beneath the Executive Committee Range volcanoes of Marie Byrd Land (Lough et al., 2013). The Marie Byrd Land (MBL) volcanic province is a remote continental region that is almost completely covered by the West Antarctic Ice Sheet (WAIS). The southern extent of Marie Byrd Land lies within the West Antarctic Rift System (WARS), which includes the volcanic Executive Committee Range. Lough et al. noted that seismic stations in the POLENET/ANET seismic network detected two swarms of seismic activity during 2010 and 2011. These events have been interpreted as deep, long-period (DLP) earthquakes based on their depth (25-40 km), tectonic context, and low-frequency spectra. The DLP events in MBL lie beneath an inferred volcanic edifice that is visible in ice-penetrating radar images via subglacial topography and intraglacial ash deposits, and they have been interpreted as a present location of Moho-proximal magmatic activity. The magmatic swarm activity in MBL provides a promising target for advanced subspace detection and for the temporal, spatial, and event-size analysis of an extensive DLP earthquake swarm using a remote and sparse seismographic network. We used a catalog of 1370 traditionally identified DLP events to construct subspace detectors for the nine nearest stations using two years of data spanning 2010-2011. Via subspace detection we increase the number of detections by more than a factor of 70 at the highest signal-to-noise station while decreasing the overall minimum magnitude of completeness. In addition to the two previously identified swarms during early 2010 and early 2011, we find sustained activity throughout the two years of study, including several previously unidentified periods of heightened activity. These events have a very high Gutenberg-Richter b-value (>2.0). We also note evidence of continuing seismicity through 2015 by examining data from the small number of longer-running POLENET stations in the region.
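A bare-bones sketch of a single-station subspace detector of the kind used here: an orthonormal basis is built from catalog waveforms, and a sliding window is flagged when enough of its energy falls in that subspace. The statistic and threshold convention are standard choices, not specifics from this abstract:

```python
import numpy as np

def subspace_detector(stream, templates, d, threshold):
    """Sliding-window subspace detection statistic.

    stream:    1-D continuous data from one station
    templates: (n_templates, n_samples) aligned catalog event waveforms
    d:         subspace dimension retained from the template set
    """
    T = templates - templates.mean(axis=1, keepdims=True)
    U = np.linalg.svd(T.T, full_matrices=False)[0][:, :d]   # orthonormal basis
    n = templates.shape[1]
    detections = []
    for t in range(len(stream) - n):
        x = stream[t:t + n] - stream[t:t + n].mean()
        c = U.T @ x
        stat = (c @ c) / (x @ x)    # fraction of window energy in the subspace
        if stat > threshold:
            detections.append((t, stat))
    return detections
```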
NASA Astrophysics Data System (ADS)
Pires, Carlos A. L.; Ribeiro, Andreia F. S.
2017-02-01
We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low dimension that exhibit enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (the projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for Gaussian mixed signals, and of independent component analysis (ICA), which is optimized to split non-Gaussian scalar sources. The proposed method, supported by information-theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, the method provides nonlinear variable changes 'unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear data-exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian, non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model of the Northern Hemispheric winter. We find that quite enhanced non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as concerns the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model, but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of dyadic and triadic nonlinear teleconnections.
Ponnapalli, Sri Priya; Saunders, Michael A.; Van Loan, Charles F.; Alter, Orly
2011-01-01
The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define a higher-order GSVD (HO GSVD) for N ≥ 2 matrices D_i, each with full column rank. Each matrix is exactly factored as D_i = U_i Σ_i V^T, where V, identical in all factorizations, is obtained from the eigensystem S V = V Λ of the arithmetic mean S of all pairwise quotients A_i A_j^{-1} of the matrices A_i = D_i^T D_i, i ≠ j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The matrix S is nondefective, with V and Λ real. Its eigenvalues satisfy λ_k ≥ 1. Equality holds if and only if the corresponding eigenvector v_k is a right basis vector of equal significance in all matrices D_i and D_j, that is σ_{i,k}/σ_{j,k} = 1 for all i and j, and the corresponding left basis vector u_{i,k} is orthogonal to all other vectors in U_i for all i. The eigenvalues λ_k = 1, therefore, define the "common HO GSVD subspace." We illustrate the HO GSVD with a comparison of genome-scale cell-cycle mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell-cycle peak times are correctly classified. PMID:22216090
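A direct numpy transcription of the construction stated above (a sketch under the assumption that S is formed from the pairwise quotients A_i A_j^{-1} of A_i = D_i^T D_i; the published algorithm is more careful numerically):

```python
import numpy as np

def ho_gsvd_common_subspace(Ds, tol=1e-6):
    """Eigensystem of S and the common HO GSVD subspace for matrices Ds.

    Ds: list of N >= 2 full-column-rank matrices sharing the same column count.
    """
    A = [D.T @ D for D in Ds]
    N = len(A)
    S = np.zeros_like(A[0], dtype=float)
    for i in range(N):
        for j in range(i + 1, N):
            S += A[i] @ np.linalg.inv(A[j]) + A[j] @ np.linalg.inv(A[i])
    S /= N * (N - 1)                  # arithmetic mean of the pairwise quotients
    lam, V = np.linalg.eig(S)         # theory: S nondefective, eigensystem real
    common = np.abs(lam - 1.0) < tol  # eigenvalues equal to 1
    return V[:, common].real, lam
```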
NASA Astrophysics Data System (ADS)
Huynh, B. H.; Tjahjowidodo, T.; Zhong, Z.-W.; Wang, Y.; Srikanth, N.
2018-01-01
Vortex-induced vibration (VIV) based energy harvesting systems have gained interest in recent years due to their potential as a low-speed water current energy source. However, the effectiveness of such a system is limited to a narrow range of water current speeds because of the resonance principle that governs the concept. In order to extend the working range, a bistable spring is introduced to support the structure. The improvement in performance depends essentially on the bistable gap, one of the main parameters of the nonlinear spring. A sufficiently large bistable gap will result in a significant performance improvement. Unfortunately, a large bistable gap may also increase the chance of chaotic responses, which in turn result in diminutive harvested power. To mitigate the problem, an appropriate control structure is required to stabilize the chaotic vibrations of a VIV energy converter with the bistable supporting structure. Based on the nature of the double-well potential energy in a bistable spring, the ideal control structure will attempt to drive the responses to inter-well periodic vibrations in order to maximize the harvested power. In this paper, the OGY control algorithm is designed and applied to the system. This control strategy is selected since it requires only a small perturbation of a structural parameter to execute the control effort; thus, minimal power is needed to drive the control input. Facilitated by a wake oscillator model, the bistable VIV system is modelled as a four-dimensional autonomous continuous-time dynamical system. To implement the controller strategy, the system is discretized at a period estimated from the subspace hyperplane intersecting the chaotic trajectory, whereas the fixed points that correspond to the desired periodic orbits are estimated by the recurrence method. Simultaneously, the Jacobian and sensitivity matrices are estimated by the least-squares regression method. Based on the identified fixed point and the linearized model, the control gain matrix is calculated using the pole placement technique. The results show that the OGY controller is capable of stabilizing the chaotic responses by driving them to the desired inter-well period-one periodic vibrations, and it is also shown that the harvested power is successfully improved. For validation purposes, a real-time experiment was carried out on a computer-based forced-feedback testing platform to confirm the applicability of the controller in real-time applications. The experimental results confirm the feasibility of the controller in stabilizing the responses.
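The pole-placement step can be sketched in a few lines; the Jacobian A and sensitivity vector B below are placeholders (in the paper they are estimated from the discretized trajectory by least-squares regression, and their values are not given in this abstract):

```python
import numpy as np
from scipy.signal import place_poles

# Placeholder linearization of the Poincare-sampled map about the target
# fixed point: dx_{k+1} ~ A dx_k + B dp_k, with dp the small perturbation
# of the accessible structural parameter. Companion form keeps (A, B)
# controllable; the numbers are illustrative only.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-0.1, 0.2, 0.4, 1.3]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])

K = place_poles(A, B, [0.2, 0.3, 0.4, 0.5]).gain_matrix  # stabilizing gains

def ogy_perturbation(dx, max_dp):
    """Feedback dp = -K dx, applied only near the fixed point as OGY prescribes."""
    dp = -(K @ dx).item()
    return dp if abs(dp) <= max_dp else 0.0  # outside the neighbourhood: no control
```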
Improved analysis of SP and CoSaMP under total perturbations
NASA Astrophysics Data System (ADS)
Li, Haifeng
2016-12-01
In practice, in the underdetermined model y = Ax, where x is a K-sparse vector (i.e., it has no more than K nonzero entries), both y and A can be totally perturbed. A more relaxed condition means that fewer measurements are needed to ensure sparse recovery from a theoretical standpoint. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions are presented for subspace pursuit (SP) and compressed sampling matching pursuit (CoSaMP) under total perturbations to guarantee that the sparse vector x is recovered. Taking a random matrix as the measurement matrix, we also discuss the advantage of our condition. Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.
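For reference, a plain numpy implementation of the (unperturbed) SP iteration analyzed above; stopping rules vary across the literature, and the fixed iteration cap here is a simplification:

```python
import numpy as np

def subspace_pursuit(A, y, K, n_iter=30):
    """Subspace pursuit for y = A x with a K-sparse x."""
    m, n = A.shape
    support = np.argsort(np.abs(A.T @ y))[-K:]          # initial support estimate
    for _ in range(n_iter):
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                     # current residual
        if np.linalg.norm(r) < 1e-12:
            break
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
        x_c, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        support = cand[np.argsort(np.abs(x_c))[-K:]]    # prune to the K largest
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```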
Locking of electron spin coherence above 20 ms in natural silicon carbide
NASA Astrophysics Data System (ADS)
Simin, D.; Kraus, H.; Sperlich, A.; Ohshima, T.; Astakhov, G. V.; Dyakonov, V.
2017-04-01
We demonstrate that silicon carbide (SiC) with a natural isotope abundance can preserve a coherent spin superposition in silicon vacancies over an unexpectedly long time exceeding 20 ms. The spin-locked subspace with a drastically reduced decoherence rate is attained through the suppression of heteronuclear spin cross-talk by applying a moderate magnetic field in combination with dynamic decoupling from the nuclear spin baths. Furthermore, we identify several phonon-assisted mechanisms of spin-lattice relaxation and find that the relaxation time can be extremely long at cryogenic temperatures, equal to or even longer than 10 s. Our approach may be extended to other polyatomic compounds and opens a path towards improved qubit memory for wafer-scale quantum technologies.
On optimal improvements of classical iterative schemes for Z-matrices
NASA Astrophysics Data System (ADS)
Noutsos, D.; Tzoumas, M.
2006-04-01
Many researchers have considered preconditioners, applied to linear systems whose coefficient matrix is a Z- or an M-matrix, that make the associated Jacobi and Gauss-Seidel methods converge asymptotically faster than the unpreconditioned ones. Such preconditioners are chosen so that they eliminate the off-diagonal elements of the same column or the elements of the first upper diagonal (Milaszewicz, LAA 93 (1987) 161-170; Gunawardena et al., LAA 154-156 (1991) 123-143). In this work we generalize the previous preconditioners to obtain optimal methods. "Good" Jacobi and Gauss-Seidel algorithms are given, and preconditioners that eliminate more than one entry per row are also proposed and analyzed. Moreover, the behavior of the above preconditioners applied to Krylov subspace methods is studied.
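A small numerical illustration of the Gunawardena-type preconditioner mentioned above, which eliminates the first upper diagonal; the 3x3 Z-matrix is a made-up example, chosen only so the drop in the Jacobi spectral radius is visible:

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix I - D^{-1} A."""
    D_inv = np.diag(1.0 / np.diag(A))
    return max(abs(np.linalg.eigvals(np.eye(len(A)) - D_inv @ A)))

# A Z-matrix with unit diagonal (the usual normalization).
A = np.array([[ 1.0, -0.3, -0.2],
              [-0.2,  1.0, -0.4],
              [-0.3, -0.1,  1.0]])

# Preconditioner I + S, where S eliminates the first upper diagonal of A.
S = np.zeros_like(A)
for i in range(len(A) - 1):
    S[i, i + 1] = -A[i, i + 1]
PA = (np.eye(len(A)) + S) @ A

print(jacobi_spectral_radius(A), jacobi_spectral_radius(PA))  # radius decreases
```

For this example the radius drops from roughly 0.49 to roughly 0.37, the kind of asymptotic speed-up such preconditioners are designed to deliver.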
Chaos motion in robot manipulators
NASA Technical Reports Server (NTRS)
Lokshin, A.; Zak, M.
1987-01-01
It is shown that a simple two-link planar manipulator exhibits a phenomenon of global instability in a subspace of its configuration space. A numerical example, as well as results of a graphic simulation, is given.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
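The core computational saving can be sketched as follows, assuming (as a simplified reading of the abstract) that data and projection images are expressed in a common low-dimensional orthonormal basis so that image comparisons reduce to short dot products:

```python
import numpy as np

def subspace_compress(images, r):
    """Express a stack of images in an r-dimensional orthonormal basis.

    images: (n_images, height, width) array; returns (coefficients, basis).
    """
    X = images.reshape(len(images), -1)
    basis = np.linalg.svd(X, full_matrices=False)[2][:r]   # orthonormal rows
    return X @ basis.T, basis

# If data images and projection templates share the basis, the inner product
# <x, t> is approximated by the r-term dot product of their coefficients,
# replacing O(pixels) work per comparison with O(r).
```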
Relaxation and decoherence of qubits encoded in collective states of engineered magnetic structures
NASA Astrophysics Data System (ADS)
Shakirov, Alexey M.; Rubtsov, Alexey N.; Lichtenstein, Alexander I.; Ribeiro, Pedro
2017-09-01
The quantum nature of a microscopic system can only be revealed when it is sufficiently decoupled from its surroundings. Interactions with the environment induce relaxation and decoherence that turn the quantum state into a classical mixture. Here, we study the timescales of these processes for a qubit encoded in the collective state of a set of magnetic atoms deposited on a metallic surface. To that end, we provide a generalization of the commonly used definitions of T1 and T2 characterizing the relaxation and decoherence rates. We calculate these quantities for several atomic structures, including a collective spin, a setup implementing a decoherence-free subspace, and two examples of spin chains. Our work contributes to the comprehensive understanding of relaxation and decoherence processes and shows the advantages of implementing a decoherence-free subspace in these setups.
SU(p,q) coherent states and a Gaussian de Finetti theorem
NASA Astrophysics Data System (ADS)
Leverrier, Anthony
2018-04-01
We prove a generalization of the quantum de Finetti theorem when the local space is an infinite-dimensional Fock space. In particular, instead of considering the action of the permutation group on n copies of that space, we consider the action of the unitary group U(n) on the creation operators of the n modes and define a natural generalization of the symmetric subspace as the space of states invariant under unitaries in U(n). Our first result is a complete characterization of this subspace, which turns out to be spanned by a family of generalized coherent states related to the special unitary group SU(p, q) of signature (p, q). More precisely, this construction yields a unitary representation of the noncompact simple real Lie group SU(p, q). We therefore find a dual unitary representation of the pair of groups U(n) and SU(p, q) on an n(p + q)-mode Fock space. The (Gaussian) SU(p, q) coherent states resolve the identity on the symmetric subspace, which implies a Gaussian de Finetti theorem stating that tracing over a few modes of a unitary-invariant state yields a state close to a mixture of Gaussian states. As an application of this de Finetti theorem, we show that the n × n upper-left submatrix of an m × m Haar-invariant unitary matrix is close in total variation distance to a matrix of independent normal variables if n^3 = O(m).
Block-localized wavefunction (BLW) method at the density functional theory (DFT) level.
Mo, Yirong; Song, Lingchun; Lin, Yuchun
2007-08-30
The block-localized wavefunction (BLW) approach is an ab initio valence bond (VB) method that incorporates the efficiency of molecular orbital (MO) theory. It can generate the wavefunction for a resonance structure or diabatic state self-consistently by partitioning the overall electrons and primitive orbitals into several subgroups and expanding each block-localized molecular orbital in only one subspace. Although block-localized molecular orbitals in the same subspace are constrained to be orthogonal (a feature of MO theory), orbitals between different subspaces are generally nonorthogonal (a feature of VB theory). The BLW method is particularly useful in quantifying the electron delocalization (resonance) effect within a molecule and the charge-transfer effect between molecules. In this paper, we extend the BLW method to the density functional theory (DFT) level and implement the BLW-DFT method in the quantum-chemistry software GAMESS. Test applications to the pi conjugation in the planar allyl radical and ions with the 6-31G(d), 6-31+G(d), 6-311+G(d,p), and cc-pVTZ basis sets show that the basis-set dependency is insignificant. In addition, the BLW-DFT method can also be used to elucidate the nature of intermolecular interactions. Examples of pi-cation and solute-solvent interactions are presented and discussed. By expressing each diabatic state with one BLW, the BLW method can further be used to study chemical reactions and electron-transfer processes whose potential energy surfaces are typically described by two or more diabatic states.
Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping
2018-05-01
Automated biomedical image classification must confront the challenges of high levels of noise, image blur, illumination variation, and complicated geometric correspondence among various categorical biomedical patterns in practice. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence-score-based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1, while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, i.e., SIFT and SURF. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments on three benchmark real-world biomedical image datasets show that the proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy.
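A sketch of the two-stage decision rule with a reject option, written for binary classification for simplicity (the paper's confidence score and the ERE transform are only emulated by their interfaces here):

```python
import numpy as np

def cascade_predict(svm1, svm2, transform, X, tau):
    """Two-stage cascade: decide confidently at stage 1, else defer to stage 2.

    svm1:      stage-1 classifier with a decision_function (e.g., sklearn SVC)
    svm2:      stage-2 classifier trained in the transformed subspace
    transform: callable emulating the ERE subspace mapping
    tau:       confidence threshold for accepting a stage-1 decision
    """
    margin = np.abs(svm1.decision_function(X))  # distance-to-boundary confidence
    accept = margin >= tau
    y = np.empty(len(X), dtype=int)
    y[accept] = svm1.predict(X[accept])                # high confidence: stage 1
    y[~accept] = svm2.predict(transform(X[~accept]))   # rejected: stage 2
    return y
```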
Grosso, Marcos; Kalstein, Adrian; Parisi, Gustavo; Roitberg, Adrian E; Fernandez-Alberti, Sebastian
2015-06-28
The native state of a protein consists of an equilibrium of conformational states on an energy landscape rather than existing as a single static state. The co-existence of conformers with different ligand-affinities in a dynamical equilibrium is the basis for the conformational selection model for ligand binding. In this context, the development of theoretical methods that allow us to analyze not only the structural changes but also changes in the fluctuation patterns between conformers will contribute to elucidate the differential properties acquired upon ligand binding. Molecular dynamics simulations can provide the required information to explore these features. Their use in combination with subsequent essential dynamics analysis allows separating large concerted conformational rearrangements from irrelevant fluctuations. We present a novel procedure to define the size and composition of essential dynamics subspaces associated with ligand-bound and ligand-free conformations. These definitions allow us to compare essential dynamics subspaces between different conformers. Our procedure attempts to emphasize the main similarities and differences between the different essential dynamics in an unbiased way. Essential dynamics subspaces associated with conformational transitions can also be analyzed. As a test case, we study the glutaminase interacting protein (GIP), composed of a single PDZ domain. Both the GIP ligand-free state and the glutaminase L peptide-bound state are analyzed. Our findings concerning the relative changes in the flexibility pattern upon binding are in good agreement with experimental Nuclear Magnetic Resonance data.
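A minimal sketch of comparing essential-dynamics subspaces between two conformers; the covariance/eigenvector construction is standard essential dynamics, while the RMSIP overlap used at the end is a common similarity measure and an assumption here, not necessarily the authors' exact criterion:

```python
import numpy as np

def essential_subspace(traj, k):
    """Top-k essential-dynamics modes of an aligned MD trajectory.

    traj: (n_frames, 3 * n_atoms) Cartesian coordinates after superposition.
    """
    X = traj - traj.mean(axis=0)              # fluctuations about the mean
    cov = X.T @ X / (len(X) - 1)              # positional covariance matrix
    _, V = np.linalg.eigh(cov)                # eigenvalues in ascending order
    return V[:, ::-1][:, :k]                  # k largest-variance modes

def rmsip(A, B):
    """Root mean square inner product between two k-dimensional subspaces."""
    return np.sqrt(np.sum((A.T @ B) ** 2) / A.shape[1])

# Hypothetical usage with ligand-free and peptide-bound GIP trajectories:
# V_free, V_bound = essential_subspace(traj_free, 10), essential_subspace(traj_bound, 10)
# rmsip(V_free, V_bound)  # 1.0 = identical subspaces, ~0 = orthogonal
```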