Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper, a new framework called Compressive Kernelized Reinforcement Learning (CKRL) is proposed for computing near-optimal policies in sequential decision making under uncertainty, by combining non-adaptive, data-independent Random Projections with nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction technique in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
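The random-projection step described above is standard enough to sketch; in this toy example (dimensions and the dense Gaussian construction are illustrative, not the paper's exact scheme) features are projected onto a random lower-dimensional subspace while pairwise distances are approximately preserved:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, k, rng):
    """Project rows of X (n x d) onto a random k-dimensional subspace.
    A dense Gaussian matrix scaled by 1/sqrt(k) approximately preserves
    pairwise distances (Johnson-Lindenstrauss lemma)."""
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ R

X = rng.normal(size=(100, 1000))     # high-dimensional feature vectors
Y = random_projection(X, 50, rng)    # compressed features

orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(Y.shape, round(proj / orig, 2))
```

Sparser projection matrices (e.g. coordinate sampling) behave similarly at lower cost, which is what makes the projection non-adaptive and data-independent.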
Reduced Wiener Chaos representation of random fields via basis adaptation and projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu
2017-07-15
A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
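The basis-rotation idea can be made concrete for a linear quantity of interest; in this toy sketch (not from the paper) a single orthogonal rotation of the uncorrelated Gaussian inputs concentrates the QoI on one coordinate:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 6
a = rng.normal(size=d)                      # QoI: q(x) = a . x with x ~ N(0, I)

# Orthogonal matrix whose first column spans the direction of a,
# obtained by QR on a matrix whose first column is a.
Q = np.linalg.qr(np.column_stack([a, rng.normal(size=(d, d - 1))]))[0]

x = rng.normal(size=(20000, d))             # uncorrelated Gaussian inputs
y = x @ Q                                   # rotated inputs, still N(0, I)
q = x @ a

# After rotation the QoI is (up to sign) a function of y[:, 0] alone.
corr = np.corrcoef(q, y[:, 0])[0, 1]
print(round(abs(corr), 3))
```

Because the rotation is orthogonal, the rotated inputs remain standard Gaussian, yet the induced probability measure of the QoI now lives on a one-dimensional subspace of the rotated basis.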
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of the randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods both in detection performance and in computational time.
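The low-rank plus sparse decomposition at the heart of RPCA can be sketched with a minimal inexact augmented Lagrange multiplier loop; the parameters follow the common defaults of the RPCA literature rather than the exact CWRPCA formulation, and the data are synthetic:

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into low-rank L plus sparse S by inexact ALM."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    mu = 1.25 / norm2
    Y = D / max(norm2, np.abs(D).max() / lam)   # dual variable
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        # Singular value thresholding for the low-rank part.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Soft thresholding for the sparse part.
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y = Y + mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
        mu *= 1.5
    return L, S

rng = np.random.default_rng(2)
L0 = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 60))  # rank-5 background
S0 = np.zeros((60, 60))
S0.flat[rng.choice(3600, 180, replace=False)] = rng.normal(scale=10, size=180)
L, S = rpca_ialm(L0 + S0)
rel = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(round(rel, 4))
```

In the CWRPCA variant described above, the sparsity penalty acts on whole columns (anomalous pixels) rather than on individual entries as in this entrywise sketch.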
Adaptive low-rank subspace learning with online optimization for robust visual tracking.
Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan
2017-04-01
In recent years, sparse and low-rank models have been widely used to formulate appearance subspaces for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace-based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients, and column sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard product based regularization to incorporate rich generative/discriminative structure constraints that adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and get rid of the vast number of non-informative SNPs. However, it is too time-consuming and not favorable in GWA for high-dimensional data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, while avoiding the very high computational cost of an exhaustive search for an optimal mtry and maintaining the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate a better random forest with higher accuracy and a lower error bound than those produced by Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
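The stratified subspace selection step can be sketched directly; the informativeness scores, group count, and per-group sample size below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def stratified_subspace(scores, n_groups, per_group, rng):
    """Equal-width discretization of informativeness scores, then an equal
    number of features drawn at random from every group."""
    edges = np.linspace(scores.min(), scores.max(), n_groups + 1)
    groups = np.clip(np.digitize(scores, edges[1:-1]), 0, n_groups - 1)
    picked = []
    for g in range(n_groups):
        members = np.flatnonzero(groups == g)
        take = min(per_group, len(members))
        picked.extend(rng.choice(members, take, replace=False))
    return np.array(picked)

# Illustrative scores for 10 000 "SNPs"; only 100 are informative.
scores = np.concatenate([rng.uniform(0.0, 0.2, 9900), rng.uniform(0.2, 1.0, 100)])
sub = stratified_subspace(scores, n_groups=5, per_group=10, rng=rng)
print(len(sub), int((scores[sub] > 0.2).sum()))
```

A plain uniform draw of 50 features from this pool would contain fewer than one informative SNP on average; the stratified draw guarantees representation from every informativeness band.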
Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...
2016-10-21
The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. In conclusion, employing 55 296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8586 atoms is 90 s, and the time for a graphene sheet containing 11 520 atoms is 75 s.
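The filter-orthonormalize-Rayleigh-Ritz cycle at the core of CheFSI is compact; this dense toy version (numpy, a synthetic symmetric matrix, no DG structure or parallelism) illustrates the mechanism:

```python
import numpy as np

def cheb_filter(H, X, degree, a, b):
    """Apply T_degree((H - c)/e) to the block X, where the affine map sends
    the unwanted spectrum [a, b] into [-1, 1]; wanted eigencomponents below
    a are strongly amplified by the Chebyshev recurrence."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y = (H @ X - c * X) / e
    Xp = X
    for _ in range(2, degree + 1):
        Y, Xp = 2.0 * (H @ Y - c * Y) / e - Xp, Y
    return Y

rng = np.random.default_rng(4)
n, k = 200, 8
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
H = Q @ np.diag(np.arange(n, dtype=float)) @ Q.T   # symmetric, known spectrum
evals = np.linalg.eigvalsh(H)
a, b = evals[k], evals[-1]                         # bounds of unwanted spectrum

X = rng.normal(size=(n, k))
for _ in range(30):                                # outer loop (fixed H here)
    X = cheb_filter(H, X, degree=10, a=a, b=b)
    X = np.linalg.qr(X)[0]                         # orthonormalize
    w, V = np.linalg.eigh(X.T @ H @ X)             # Rayleigh-Ritz
    X = X @ V
print(np.allclose(w, evals[:k], atol=1e-8))
```

In the actual DG setting the filter is applied through sparse block matvecs and the spectral bounds are estimated rather than computed exactly.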
Factor analysis of auto-associative neural networks with application in speaker verification.
Garimella, Sri; Hermansky, Hynek
2013-04-01
An auto-associative neural network (AANN) is a fully connected feed-forward neural network, trained to reconstruct its input at its output through a hidden compression layer, which has fewer nodes than the dimensionality of the input. AANNs are used to model speakers in speaker verification, where a speaker-specific AANN model is obtained by adapting (or retraining) the universal background model (UBM) AANN, an AANN trained on multiple held-out speakers, using the corresponding speaker data. When the amount of speaker data is limited, this adaptation procedure may lead to overfitting as all the parameters of the UBM-AANN are adapted. In this paper, we introduce and develop the factor analysis theory of AANNs to alleviate this problem. We hypothesize that only the weight matrix connecting the last nonlinear hidden layer and the output layer is speaker-specific, and further restrict it to a common low-dimensional subspace during adaptation. The subspace is learned using large amounts of development data, and is held fixed during adaptation. Thus, only the coordinates in the subspace, also known as an i-vector, need to be estimated using speaker-specific data. The update equations are derived for learning both the common low-dimensional subspace and the i-vectors corresponding to speakers in the subspace. The resultant i-vector representation is used as a feature for the probabilistic linear discriminant analysis model. The proposed system shows promising results on the NIST-08 speaker recognition evaluation (SRE), and yields a 23% relative improvement in equal error rate over the previously proposed weighted least squares-based subspace AANN system. The experiments on NIST-10 SRE confirm that these improvements are consistent and generalize across datasets.
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise including random noise, spiky noise, blending noise, and coherent noise.
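The incremental gradient step on the Grassmannian can be sketched as follows; this simplified variant (noiseless streaming data, QR re-orthonormalization in place of the exact geodesic update used in GROUSE-style trackers) shows how a subspace estimate locks onto the signal subspace one observation at a time:

```python
import numpy as np

rng = np.random.default_rng(5)

n, d = 50, 3
B = np.linalg.qr(rng.normal(size=(n, d)))[0]      # true signal subspace

U = np.linalg.qr(rng.normal(size=(n, d)))[0]      # tracked estimate
step = 0.5
for _ in range(2000):
    v = B @ rng.normal(size=d)                    # one streaming observation
    w = U.T @ v                                   # best coefficients in span(U)
    r = v - U @ w                                 # residual off the estimate
    # Incremental gradient step on the fit error, then re-orthonormalize;
    # the geodesic update follows the same gradient direction r w^T.
    U = np.linalg.qr(U + step * np.outer(r, w))[0]

err = np.linalg.norm(B @ B.T - U @ U.T)           # projector distance
print(round(err, 6))
```

Because each update costs only O(nd) beyond the thin QR, the tracker processes data online, in contrast to recomputing a TSVD of the full matrix at every step.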
Gravitational instantons, self-duality, and geometric flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourliot, F.; Estes, J.; Petropoulos, P. M.
2010-05-15
We discuss four-dimensional 'spatially homogeneous' gravitational instantons. These are self-dual solutions of the Euclidean vacuum Einstein equations. They are endowed with a product structure R × M₃ leading to a foliation into three-dimensional subspaces evolving in Euclidean time. For a large class of homogeneous subspaces, the dynamics coincides with a geometric flow on the three-dimensional slice, driven by the Ricci tensor plus an so(3) gauge connection. The flowing metric is related to the vielbein of the subspace, while the gauge field is inherited from the anti-self-dual component of the four-dimensional Levi-Civita connection.
Subspace-based interference removal methods for a multichannel biomagnetic sensor array.
Sekihara, Kensuke; Nagarajan, Srikantan S
2017-10-01
Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial-domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights into existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as time-domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently proposed dual signal subspace projection. Our analysis using the notion of the time-domain signal space projection reveals the implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
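Spatial-domain signal space projection reduces to a single orthogonal projection; a minimal sketch with a synthetic sensor array and an assumed-known interference lead-field matrix B:

```python
import numpy as np

rng = np.random.default_rng(6)

n_sensors = 32
B = rng.normal(size=(n_sensors, 2))        # interference lead field vectors (assumed)
signal = rng.normal(size=n_sensors)
interf = B @ np.array([3.0, -2.0])         # interference lies in span(B)

# SSP: project measurements onto the orthogonal complement of span(B).
P = np.eye(n_sensors) - B @ np.linalg.pinv(B)
cleaned = P @ (signal + interf)

print(round(float(np.linalg.norm(P @ interf)), 10))
```

Anything in span(B) is annihilated exactly; the cost is that the component of the true signal lying in span(B) is removed as well, which is the trade-off the time-domain formulation above re-examines.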
Multi-subject subspace alignment for non-stationary EEG-based emotion recognition.
Chai, Xin; Wang, Qisong; Zhao, Yongping; Liu, Xin; Liu, Dan; Bai, Ou
2018-01-01
Emotion recognition based on EEG signals is a critical component in Human-Machine collaborative environments and psychiatric health diagnoses. However, EEG patterns have been found to vary across subjects due to user fatigue, different electrode placements, varying impedances, etc. This problem renders the performance of EEG-based emotion recognition highly subject-specific, requiring time-consuming individual calibration sessions to adapt an emotion recognition system to new subjects. Recently, domain adaptation (DA) strategies have achieved a great deal of success in dealing with inter-subject adaptation. However, most of them can only adapt one subject to another subject, which limits their applicability in real-world scenarios. To alleviate this issue, a novel unsupervised DA strategy called Multi-Subject Subspace Alignment (MSSA) is proposed in this paper, which takes advantage of a subspace alignment solution and multi-subject information in a unified framework to build personalized models without user-specific labeled data. Experiments on a public EEG dataset known as SEED verify the effectiveness and superiority of MSSA over other state-of-the-art methods for dealing with multi-subject scenarios.
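The single-pair subspace alignment building block that MSSA extends can be sketched in a few lines; the data shapes and the rotated-domain construction here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def pca_basis(X, d):
    """Top-d principal directions of the centered data."""
    Xc = X - X.mean(axis=0)
    return np.linalg.svd(Xc, full_matrices=False)[2][:d].T

# Source and target "EEG feature" domains; the target is a rotated copy,
# mimicking a distribution shift between subjects.
Xs = rng.normal(size=(300, 20)) @ rng.normal(size=(20, 20))
Xt = Xs @ np.linalg.qr(rng.normal(size=(20, 20)))[0]

d = 5
S, T = pca_basis(Xs, d), pca_basis(Xt, d)
M = S.T @ T                               # alignment matrix between subspaces
Za = (Xs - Xs.mean(0)) @ S @ M            # source coordinates, aligned to target
Zt = (Xt - Xt.mean(0)) @ T                # target coordinates
print(Za.shape, Zt.shape)
```

A classifier trained on the aligned source coordinates Za can then be applied to Zt without target labels; MSSA's contribution is doing this jointly across many source subjects rather than one pair at a time.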
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets. The signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs. However, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix, which is formed from the data covariance matrix; thus, compared to standard eigendecomposition-based methods, which require O(N³) computations, the proposed method requires only O(N²) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. In order to achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
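The noise-subspace idea behind MUSIC-type estimators can be illustrated with a standard eigendecomposition version (not the paper's QR-updated differential MUSIC); a half-wavelength uniform linear array with two sources:

```python
import numpy as np

rng = np.random.default_rng(8)

n, snap = 10, 2000                        # sensors, snapshots
true_doas = np.deg2rad([-20.0, 30.0])

def steering(theta):
    # Half-wavelength uniform linear array response.
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.normal(size=(2, snap)) + 1j * rng.normal(size=(2, snap))
X = A @ S + 0.1 * (rng.normal(size=(n, snap)) + 1j * rng.normal(size=(n, snap)))

R = X @ X.conj().T / snap                 # sample covariance
En = np.linalg.eigh(R)[1][:, : n - 2]     # noise subspace (smallest eigenpairs)

# MUSIC pseudospectrum: peaks where the steering vector is near-orthogonal
# to the noise subspace.
grid = np.deg2rad(np.linspace(-90, 90, 1801))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
peaks = [i for i in range(1, len(spec) - 1) if spec[i - 1] < spec[i] > spec[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: spec[i])[-2:])
doas = np.rad2deg(grid[top2])
print(doas.round(1))
```

The paper's contribution replaces the O(N³) eigendecomposition of R with an O(N²) QR-based update of the noise subspace as snapshots stream in.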
NASA Astrophysics Data System (ADS)
Ripamonti, Francesco; Resta, Ferruccio; Borroni, Massimo; Cazzulani, Gabriele
2014-04-01
A new method for the real-time identification of mechanical system modal parameters is used in order to design different adaptive control logics aiming to reduce the vibrations in a carbon fiber plate smart structure. The plate is instrumented with three piezoelectric actuators, three accelerometers and three strain gauges. The real-time identification is based on a recursive subspace tracking algorithm whose outputs are elaborated by an ARMA model. A statistical approach is finally applied to choose the correct modal parameter values. These are given as input to model-based control logics such as gain scheduling and adaptive LQR control.
Basis adaptation in homogeneous chaos spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Ghanem, Roger
2014-02-01
We present a new method for the characterization of subspaces associated with low-dimensional quantities of interest (QoI). The probability density function of these QoI is found to be concentrated around one-dimensional subspaces, for which we develop projection operators. Our approach builds on the properties of Gaussian Hilbert spaces and associated tensor product spaces.
The distribution of genetic variance across phenotypic space and the response to selection.
Blows, Mark W; McGuigan, Katrina
2015-05-01
The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.
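The point about nearly null subspaces can be made concrete with the multivariate breeder's equation, response = Gβ; a toy three-trait G matrix with one dominant and one nearly null eigendirection (the numbers are illustrative, not from the study):

```python
import numpy as np

# A 3-trait G matrix whose spectrum is strongly anisotropic.
Q = np.linalg.qr(np.arange(9.0).reshape(3, 3) + np.eye(3))[0]
G = Q @ np.diag([2.0, 0.5, 1e-4]) @ Q.T   # last direction is nearly null

evals, evecs = np.linalg.eigh(G)          # ascending eigenvalues
g_max, g_null = evecs[:, -1], evecs[:, 0] # most / least variable trait combinations

# Multivariate breeder's equation: response to selection = G @ beta.
resp_max = np.linalg.norm(G @ g_max)      # selection along g_max
resp_null = np.linalg.norm(G @ g_null)    # selection in the nearly null subspace
print(round(resp_max / resp_null))
```

Selection gradients of equal strength produce responses differing by orders of magnitude depending on their alignment with the spectrum of G, which is why alignment with the nearly null subspace can stall adaptation during invasion.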
Self-correcting random number generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Pooser, Raphael C.
2016-09-06
A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
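One simple form of such self-correction is monitoring the bit bias of the source and applying a von Neumann extractor; this is a hypothetical sketch of the idea, not the patented design:

```python
import random

def biased_source(p, n, rng):
    """Stand-in for a hardware entropy source whose 1-bias p has drifted."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def von_neumann(bits):
    """Debias pairwise: 10 -> 1, 01 -> 0, discard 00 and 11 pairs."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

rng = random.Random(9)
raw = biased_source(0.7, 20000, rng)      # monitored characteristic: bit bias
corrected = von_neumann(raw)

raw_bias = sum(raw) / len(raw)
corr_bias = sum(corrected) / len(corrected)
print(round(raw_bias, 2), round(corr_bias, 2))
```

The extractor trades throughput (discarded pairs) for output that is unbiased whenever the source bits are independent, regardless of how far the bias has drifted.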
Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie
2014-02-01
Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Several innovative strategies, including subspace projection, the bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness of FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias of less than 1 mm; the method is much faster than mainstream reconstruction methods; and the approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of practical FMT application with the SASP method.
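The backbone of subspace pursuit, without the sparsity-adaptive and backtracking refinements that SASP adds, fits in a short loop; the problem dimensions below are illustrative:

```python
import numpy as np

def subspace_pursuit(A, y, K, n_iter=10):
    """Recover a K-sparse x from y = A x by iterative support refinement."""
    n = A.shape[1]
    support = np.argsort(np.abs(A.T @ y))[-K:]
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - A @ x
        # Expand: merge the support with the K atoms best matching the residual.
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
        z = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        # Prune back to the K largest coefficients, then re-fit on that support.
        support = cand[np.argsort(np.abs(z))[-K:]]
        x = np.zeros(n)
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

rng = np.random.default_rng(10)
m, n, K = 25, 50, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, K, replace=False)] = rng.uniform(1.0, 2.0, K) * rng.choice([-1, 1], K)
x = subspace_pursuit(A, A @ x0, K)
print(round(float(np.linalg.norm(x - x0)), 6))
```

SASP's sparsity-adaptive variant removes the need to know K in advance by growing the support bottom-up until the residual stops improving.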
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization to iteratively improve the classification accuracy through adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: the memory-efficient sequential processing framework for sequential data processing and the parallelized sequential processing framework for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
Adaptive classifier for steel strip surface defects
NASA Astrophysics Data System (ADS)
Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li
2017-01-01
Surface defect detection systems have been receiving increased attention for their precision, speed, and low cost. One of the greatest challenges is reacting to accuracy deterioration over time caused by aging equipment and changed processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) which updates the model with small samples, making it adaptive to accuracy deterioration. Firstly, abundant features are introduced to cover a wealth of information about the defects. Secondly, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; experimental results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.
Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan
2017-09-01
It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem, which is solved by alternately implementing a hard thresholding operation and a singular value decomposition. Secondly, the noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitive indexes, latent fault feature subspaces can be adaptively recognized and multiple faults are diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified by performing multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates the physical prior with the data-driven strategy, and multiple fault features with similar oscillation morphology can naturally be adaptively decoupled. Secondly, the tight frame dictionary learned directly from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal space description property is guaranteed, and thus the weak feature leakage problem is avoided compared to typical learning methods.
Reboredo, Fernando A; Kim, Jeongnim
2014-02-21
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
NASA Astrophysics Data System (ADS)
Reboredo, Fernando A.; Kim, Jeongnim
2014-02-01
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
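The convergence claim above (a small basis evolved in imaginary time aligns with the subspace of the lowest-energy eigenstates) can be illustrated with deterministic subspace iteration on a toy Hamiltonian; the matrix below is just a random-rotated diagonal operator with a known spectrum, not a many-body Hamiltonian, and the dimensions are arbitrary.

```python
import numpy as np
from scipy.linalg import expm, subspace_angles

rng = np.random.default_rng(0)
n, k, tau = 40, 4, 0.5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
energies = np.arange(n, dtype=float)       # assumed spectrum with unit gaps
H = Q @ np.diag(energies) @ Q.T            # toy Hamiltonian with known eigenstates
target = Q[:, :k]                          # lowest-energy eigen-subspace

B = rng.standard_normal((n, k))            # small random starting basis
P = expm(-tau * H)                         # imaginary-time propagator exp(-tau H)
angles = []
for _ in range(50):
    B, _ = np.linalg.qr(P @ B)             # propagate, then re-orthonormalize
    angles.append(subspace_angles(B, target).max())
```

Each application of exp(-tau H) damps the component along eigenstate i by exp(-tau e_i), so the largest principal angle to the target subspace decays geometrically with the spectral gap.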
Target detection using the background model from the topological anomaly detection algorithm
NASA Astrophysics Data System (ADS)
Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.
2013-05-01
The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is a graph-theory-based algorithm that constructs a topological model of the background in a scene and computes an anomalousness ranking for all of the pixels in the image with respect to the background, in order to identify pixels with uncommon or strange spectral signatures. The pixels modeled as background are clustered into groups or connected components, which can be representative of the spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD for target detection is explored in this paper. These connected components are characterized using three different approaches, in which the mean signature and endmembers of each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and the Adaptive Subspace Detector (ASD). Likewise, the covariance matrix of those connected components is estimated and used in the detectors Constrained Energy Minimization (CEM) and the Adaptive Coherence Estimator (ACE). The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind test target detection project are shown.
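The OSP detector referenced above admits a compact sketch: project each pixel onto the orthogonal complement of the background basis, then correlate the residual with the target signature. The random spectra, band count, and implant strength below are invented for illustration.

```python
import numpy as np

def osp(X, B, t):
    # Orthogonal Subspace Projection detector: annihilate the background
    # subspace spanned by the columns of B, then match the residual
    # against the target signature t.
    P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)   # projector onto span(B)'s complement
    return (t @ P @ X) / (t @ P @ t)

rng = np.random.default_rng(0)
bands, n_pix = 30, 200
B = rng.random((bands, 3))                    # background basis (e.g., component means)
t = rng.random(bands)                         # target signature
X = B @ rng.random((3, n_pix)) + 0.01 * rng.standard_normal((bands, n_pix))
X[:, :20] += 0.5 * t[:, None]                 # implant the target in 20 pixels
scores = osp(X, B, t)
```

Background-only pixels score near zero because their spectra lie (up to noise) inside span(B), which the projector removes.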
Minimal Krylov Subspaces for Dimension Reduction
2013-01-01
these applications realized a maximal compute time improvement with minimal Krylov subspaces. More recently, Halko et al. [36] have investigated... Halko et al. proposed a variety of them in [36], but we focus on the "direct eigenvalue approximation for Hermitian matrices with random...result due to Halko et al. Theorem 5 (Halko, Martinsson and Tropp [36]). Let A ∈ R^{n×m} be the input matrix with partitioned singular value
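The setting of the cited Halko, Martinsson and Tropp result, approximating the dominant singular subspace from a random sketch, can be illustrated with the basic randomized range finder; the rank and oversampling values below are arbitrary choices for the sketch.

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    # Randomized range finder: sample the column space of A with a Gaussian
    # test matrix, orthonormalize the sample, then take the exact SVD of the
    # small projected matrix Q.T @ A (cf. Halko, Martinsson and Tropp).
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k + p))   # sketch with oversampling p
    Q, _ = np.linalg.qr(Y)                             # orthonormal range estimate
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Uh[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 5)) @ rng.standard_normal((5, 200))  # rank-5 matrix
U, s, Vt = randomized_svd(A, k=5)
s_exact = np.linalg.svd(A, compute_uv=False)[:5]
```

For an exactly low-rank input the sketch captures the range almost surely, so the leading singular values are recovered to machine precision.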
Highly Entangled, Non-random Subspaces of Tensor Products from Quantum Groups
NASA Astrophysics Data System (ADS)
Brannan, Michael; Collins, Benoît
2018-03-01
In this paper we describe a class of highly entangled subspaces of a tensor product of finite-dimensional Hilbert spaces arising from the representation theory of free orthogonal quantum groups. We determine their largest singular values and obtain lower bounds for the minimum output entropy of the corresponding quantum channels. An application to the construction of d-positive maps on matrix algebras is also presented.
Subspace Clustering via Learning an Adaptive Low-Rank Graph.
Yin, Ming; Xie, Shengli; Wu, Zongze; Zhang, Yun; Gao, Junbin
2018-08-01
By using a sparse or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built from the representation coefficients are not the exact ones that the traditional deterministic definition prescribes. The two steps of representation and clustering are conducted independently, so an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using such a graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified, while it is very difficult to choose the optimum. To this end, this paper proposes a novel subspace clustering method that learns an adaptive low-rank graph affinity matrix, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several well-known databases demonstrate that the proposed method outperforms state-of-the-art approaches in clustering.
Generation of skeletal mechanism by means of projected entropy participation indices
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica
2017-11-01
When the dynamics of reactive systems develop very slow and very fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant-pressure, adiabatic batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.
Parameter identifiability and regional calibration for reservoir inflow prediction
NASA Astrophysics Data System (ADS)
Kolberg, Sjur; Engeland, Kolbjørn; Tøfte, Lena S.; Bruland, Oddbjørn
2013-04-01
The large hydropower producer Statkraft is currently testing regional, distributed models for operational reservoir inflow prediction. The need for simultaneous forecasts and consistent updating in a large number of catchments supports the shift from catchment-oriented to regional models. Low-quality naturalized inflow series in the reservoir catchments further encourage the use of donor catchments and regional simulation for calibration purposes. MCMC-based parameter estimation (the DREAM algorithm; Vrugt et al., 2009) is adapted to regional parameter estimation and implemented within the open-source ENKI framework. The likelihood is based on the concept of an effectively independent number of observations, spatially as well as in time. Marginal and conditional (around an optimum) parameter distributions for each catchment may be extracted, even though the MCMC algorithm itself is guided only by the regional likelihood surface. Early results indicate that the average performance loss associated with regional calibration (the difference in Nash-Sutcliffe R2 between regionally and locally optimal parameters) is in the range of 0.06. The importance of seasonal snow storage and melt in Norwegian mountain catchments probably contributes to the high degree of similarity among catchments. The evaluation continues for several regions, focusing on posterior parameter uncertainty and identifiability. Vrugt, J. A., C. J. F. ter Braak, C. G. H. Diks, B. A. Robinson, J. M. Hyman and D. Higdon: Accelerating Markov Chain Monte Carlo Simulation by Differential Evolution with Self-Adaptive Randomized Subspace Sampling. Int. J. of Nonlinear Sciences and Numerical Simulation 10, 3, 273-290, 2009.
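The performance loss quoted in this abstract is a difference in Nash-Sutcliffe R2, which can be computed as below; the toy inflow series and the "regional" run are invented purely to exercise the formula.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    # NSE (often written R2): one minus the ratio of the model error sum of
    # squares to the variance of the observations about their mean.
    # 1.0 is a perfect fit; 0.0 matches the skill of predicting the mean.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3.0, 5.0, 9.0, 6.0, 2.0])        # invented inflow observations
regional = nash_sutcliffe(obs, obs * 0.9)        # hypothetical regional calibration
local = nash_sutcliffe(obs, obs)                 # locally optimal (perfect) run
loss = local - regional                          # the kind of gap the study reports
```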
Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers
NASA Astrophysics Data System (ADS)
Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.
2018-04-01
Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
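A toy single-interferer, single-subarray sketch of the subspace projection step discussed above (not the SP-SAP algorithm itself): estimate the dominant eigenvector of the sample covariance, project it out, and check how much interferer power survives. The array size, interferer power, and steering model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 4000
# Narrowband interferer steering vector for an assumed 8-element array.
a_rfi = np.exp(1j * np.pi * np.arange(N) * np.sin(0.5))
x = (10.0 * a_rfi[:, None] * rng.standard_normal(T)            # strong RFI
     + (rng.standard_normal((N, T))
        + 1j * rng.standard_normal((N, T))) / np.sqrt(2))      # unit noise
R = x @ x.conj().T / T                                         # sample covariance
w, V = np.linalg.eigh(R)
u = V[:, -1:]                                                  # interference eigenvector
P = np.eye(N) - u @ u.conj().T                                 # subspace projection
R_clean = P @ R @ P.conj().T
before = np.real(a_rfi.conj() @ R @ a_rfi)                     # RFI power pre-projection
after = np.real(a_rfi.conj() @ R_clean @ a_rfi)                # residual after projection
```

When the interferer stays correlated across the elements, as SP-SAP arranges within each subarray, the dominant eigenvector aligns with its steering vector and the projection removes nearly all of its power; decorrelation across long baselines is exactly what breaks this alignment.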
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine-angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction, computed on a one-third-holdout validation set, was used to evaluate the predictive performance of the subspace and global models. The effect of pretreating the spectra was tested for 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models. We therefore conclude that global models are more accurate than local models, except in a few cases. For instance, sand and clay root mean square error values from local models built with the archetypal analysis method were 50% poorer than the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
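Method (a), cosine-angle spectral matching, reduces to ranking library spectra by the cosine of their angle to a query spectrum and keeping the closest ones as the local calibration subspace. A minimal sketch, with invented spectra and an arbitrary neighbourhood size:

```python
import numpy as np

def cosine_subspace(library, query, k=10):
    # Cosine-angle spectral matching: score each library spectrum by the
    # cosine of its angle to the query; the k best matches define the
    # local subspace used to calibrate that sample.
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = L @ (query / np.linalg.norm(query))
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
library = rng.random((100, 60))               # 100 toy spectra, 60 wavebands
query = library[42] + 0.01 * rng.standard_normal(60)
neighbours = cosine_subspace(library, query, k=5)
```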
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jeongnim; Reboredo, Fernando A.
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest-energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Oswald, Tasha M; Winder-Patel, Breanna; Ruder, Steven; Xing, Guibo; Stahmer, Aubyn; Solomon, Marjorie
2018-05-01
The purpose of this pilot randomized controlled trial was to investigate the acceptability and efficacy of the Acquiring Career, Coping, Executive control, Social Skills (ACCESS) Program, a group intervention tailored for young adults with autism spectrum disorder (ASD) to enhance critical skills and beliefs that promote adult functioning, including social and adaptive skills, self-determination skills, and coping self-efficacy. Forty-four adults with ASD (ages 18-38; 13 females) and their caregivers were randomly assigned to treatment or waitlist control. Compared to controls, adults in treatment significantly improved in adaptive and self-determination skills, per caregiver report, and self-reported greater belief in their ability to access social support to cope with stressors. Results provide evidence for the acceptability and efficacy of the ACCESS Program.
Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang
2018-05-08
When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.
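The INCM reconstruction itself is beyond a short sketch, but once an interference-plus-noise covariance R and a steering vector are in hand, the beamformer weights follow the standard minimum-variance distortionless-response formula. The array geometry, angles, and interference power below are arbitrary assumptions for illustration.

```python
import numpy as np

def mvdr_weights(R, a):
    # Distortionless response toward a, minimum output power otherwise:
    # w = R^{-1} a / (a^H R^{-1} a).
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def steering(n, theta_deg):
    # Half-wavelength-spaced uniform linear array steering vector.
    return np.exp(1j * np.pi * np.arange(n) * np.sin(np.radians(theta_deg)))

N = 8
a_sig = steering(N, 0.0)                                 # desired signal direction
a_int = steering(N, 20.0)                                # interferer direction
R = 100.0 * np.outer(a_int, a_int.conj()) + np.eye(N)    # toy reconstructed INCM
w = mvdr_weights(R, a_sig)
```

The point of reconstructing the INCM, and of correcting the steering vector for sensor position errors, is precisely that both R and a_sig feeding this formula are then close to their true values.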
Bayesian estimation of Karhunen–Loève expansions; A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
Bayesian estimation of Karhunen–Loève expansions; A random subspace approach
Chowdhary, Kenny; Najm, Habib N.
2016-04-13
One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
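The plain (non-Bayesian) empirical KLE that this paper builds on amounts to an SVD of the centered data matrix. The Brownian-motion-style ensemble below echoes the paper's illustration but is otherwise invented, as are the sample counts and mode truncation.

```python
import numpy as np

def empirical_kle(samples, n_modes):
    # Empirical Karhunen-Loève expansion / PCA: SVD of the centered data
    # matrix yields the orthonormal basis whose truncation is optimal in
    # the L2 (mean-square error) sense.
    mean = samples.mean(axis=0)
    _, s, Vt = np.linalg.svd(samples - mean, full_matrices=False)
    modes = Vt[:n_modes]                       # orthonormal KLE basis functions
    coeffs = (samples - mean) @ modes.T        # projection of each sample
    return mean, modes, coeffs

rng = np.random.default_rng(0)
paths = np.cumsum(rng.standard_normal((500, 64)), axis=1)  # Brownian-like paths
mean, modes, coeffs = empirical_kle(paths, n_modes=10)
recon = mean + coeffs @ modes
rel_err = np.linalg.norm(paths - recon) / np.linalg.norm(paths)
```

The Bayesian procedure of the paper replaces the single SVD basis with a matrix-Bingham posterior over such orthonormal bases; the deterministic truncation here is the point estimate it generalizes.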
Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model
NASA Astrophysics Data System (ADS)
Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.
2018-03-01
Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. One necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify the directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a polydomain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
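The standard active subspace construction the abstract relies on eigendecomposes the average outer product of model gradients; the leading eigenvectors are the directions of dominant variation. A minimal sketch with an invented test function f(x) = (a·x)^2 whose single active direction is known:

```python
import numpy as np

def active_subspace(grads, k):
    # Eigendecompose C = E[grad f grad f^T]; the k leading eigenvectors
    # span the active subspace (directions of dominant average variation).
    C = grads.T @ grads / grads.shape[0]
    w, V = np.linalg.eigh(C)                   # ascending eigenvalues
    return V[:, ::-1][:, :k], w[::-1]

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 0.5, -1.0])            # assumed true active direction
X = rng.standard_normal((200, 4))
grads = 2.0 * (X @ a)[:, None] * a[None, :]    # exact gradients of f(x) = (a.x)^2
W, eigvals = active_subspace(grads, k=1)
align = abs(W[:, 0] @ a) / np.linalg.norm(a)   # |cosine| with the true direction
```

For a surrogate-plus-Bayesian-inference workflow such as the paper's, the inferred posterior lives on the coordinates X @ W and is mapped back to the physical parameters through W.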
Self-duality in higher dimensions
NASA Astrophysics Data System (ADS)
Bilge, A. H.; Dereli, T.; Kocak, S.
2017-01-01
Let ω be a 2-form on a 2n-dimensional manifold. In previous work, we called ω "strong self-dual" if the eigenvalues of its matrix with respect to an orthonormal frame are equal in absolute value. In a series of papers, we showed that strong self-duality agrees with previous definitions; in particular, if ω is strong self-dual, then, in 2n dimensions, ω^(n-1) is proportional to its Hodge dual ∗ω and, in 4n dimensions, ω^n is Hodge self-dual. We also obtained a local expression of the Bonan 4-form on 8-manifolds with Spin(7) holonomy as the sum of the squares of any orthonormal basis of a maximal linear subspace of strong self-dual 2-forms. In the present work we generalize the notion of strong self-duality to odd-dimensional manifolds and we express the dual of the fundamental 3-form of 7-manifolds with G2 holonomy as a sum of the squares of an orthonormal basis of a maximal linear subspace of strong self-dual 2-forms.
Random Deep Belief Networks for Recognizing Emotions from Speech Signals.
Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang
2017-01-01
Human emotions can now be recognized from speech signals using machine learning methods; however, these methods are challenged by low recognition accuracy in real applications because they lack rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of this advantage, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts the low-level features of the input speech signal and then applies them to construct numerous random subspaces. Each random subspace is then provided to a DBN to yield higher-level features, which are fed to a classifier to output an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN has better accuracy than the compared methods for speech emotion recognition.
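The random-subspace-plus-majority-voting skeleton described above can be sketched independently of the DBN base learner; here a nearest-centroid rule stands in for the DBNs, and the class geometry, member count, and subspace size are invented.

```python
import numpy as np

class RandomSubspaceEnsemble:
    # Random-subspace ensemble: each member sees a random subset of the
    # features and votes; labels are fused by majority voting. A simple
    # nearest-centroid rule stands in for the paper's DBN base learners.
    def __init__(self, n_members=15, n_feats=5, seed=0):
        self.n_members, self.n_feats = n_members, n_feats
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.members = []
        for _ in range(self.n_members):
            feats = self.rng.choice(X.shape[1], self.n_feats, replace=False)
            centroids = np.array([X[y == c][:, feats].mean(axis=0)
                                  for c in self.classes])
            self.members.append((feats, centroids))
        return self

    def predict(self, X):
        votes = np.zeros((X.shape[0], len(self.classes)), dtype=int)
        for feats, centroids in self.members:
            d = ((X[:, feats][:, None, :] - centroids[None]) ** 2).sum(-1)
            votes[np.arange(X.shape[0]), d.argmin(1)] += 1   # one vote per member
        return self.classes[votes.argmax(1)]                 # majority fusion

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(3, 1, (100, 10))])
y = np.repeat([0, 1], 100)
model = RandomSubspaceEnsemble().fit(X, y)
acc = (model.predict(X) == y).mean()
```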
Random Deep Belief Networks for Recognizing Emotions from Speech Signals
Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang
2017-01-01
Human emotions can now be recognized from speech signals using machine learning methods; however, these methods are challenged by low recognition accuracy in real applications because they lack rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of this advantage, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts the low-level features of the input speech signal and then applies them to construct numerous random subspaces. Each random subspace is then provided to a DBN to yield higher-level features, which are fed to a classifier to output an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN has better accuracy than the compared methods for speech emotion recognition. PMID:28356908
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of engineering to reduce the processing time of complex computational simulations. A usual approach is to apply Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times during the simulation, it can be very inefficient. This work proposes a new adaptive strategy that ensures low computational cost and small error for such problems. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while preserving the quality of the model, without the use of Proper Orthogonal Decomposition (POD).
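The POD baseline that PSS is positioned against builds the reduced space from an SVD of a snapshot matrix; a minimal sketch, with an invented snapshot ensemble confined by construction to a known 3-dimensional subspace:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    # Proper Orthogonal Decomposition: SVD of the snapshot matrix; keep the
    # leading modes needed to capture (1 - tol) of the snapshot energy.
    # Galerkin projection then evolves reduced coordinates q with x = V q.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 3))
snapshots = B @ rng.standard_normal((3, 40))   # snapshots confined to a 3D subspace
V = pod_basis(snapshots)
recon = V @ (V.T @ snapshots)                  # Galerkin projection of the snapshots
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
```

Snapshot selection matters because the SVD can only reproduce dynamics represented in the snapshot matrix, which is the gap PSS aims to close online.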
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboredo, Fernando A.; Kim, Jeongnim
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
NASA Astrophysics Data System (ADS)
Thallmair, Sebastian; Roos, Matthias K.; de Vivie-Riedle, Regina
2016-06-01
Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, as its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows one to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.
Thallmair, Sebastian; Roos, Matthias K; de Vivie-Riedle, Regina
2016-06-21
Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, as its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows one to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.
A modified Lax-Phillips scattering theory for quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, Y., E-mail: ystrauss@cs.bgu.ac.il
The Lax-Phillips scattering theory is an appealing abstract framework for the analysis of scattering resonances. Quantum mechanical adaptations of the theory have been proposed. However, since these quantum adaptations essentially retain the original structure of the theory, assuming the existence of incoming and outgoing subspaces for the evolution and requiring the spectrum of the generator of evolution to be unbounded from below, their range of applications is rather limited. In this paper, it is shown that if we replace the assumption regarding the existence of incoming and outgoing subspaces by the assumption of the existence of Lyapunov operators for the quantum evolution (the existence of which has been proved for certain classes of quantum mechanical scattering problems), then it is possible to construct a structure analogous to the Lax-Phillips structure for scattering problems for which the spectrum of the generator of evolution is bounded from below.
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 result. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method realizes computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while requiring only 0.3% of its computation time.
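The core idea, partition the patch space and fit one linear regression per subspace, can be sketched with nearest-anchor assignment standing in for the random-forest classifier. The patch dimension, anchors, and per-subspace linear maps are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
anchors = np.array([[-3.0] * d, [3.0] * d])          # two toy patch subspaces
maps = [rng.standard_normal((d, d)), rng.standard_normal((d, d))]

# Synthetic patches whose LR -> HR mapping differs per subspace.
X = np.vstack([anchors[0] + rng.standard_normal((200, d)),
               anchors[1] + rng.standard_normal((200, d))])
Y = np.vstack([X[:200] @ maps[0].T, X[200:] @ maps[1].T])

# Assign each patch to its nearest anchor (the forest's role in FIRF),
# then fit one least-squares regression per subspace.
assign = np.linalg.norm(X[:, None] - anchors[None], axis=2).argmin(1)
pred = np.empty_like(Y)
for c in range(2):
    m = assign == c
    W, *_ = np.linalg.lstsq(X[m], Y[m], rcond=None)
    pred[m] = X[m] @ W

W_global, *_ = np.linalg.lstsq(X, Y, rcond=None)     # single-map baseline
mse_local = np.mean((Y - pred) ** 2)
mse_global = np.mean((Y - X @ W_global) ** 2)
```

A single global linear map cannot represent two different patch-to-patch mappings, which is why the per-subspace regressions win.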
Colliandre, Lionel; Le Guilloux, Vincent; Bourg, Stephane; Morin-Allory, Luc
2012-02-27
High Throughput Screening (HTS) is a standard technique widely used to find hit compounds in drug discovery projects. The high costs associated with such experiments have highlighted the need to carefully design screening libraries in order to avoid wasting resources. Molecular diversity is an established concept that has been used to this end for many years. In this article, a new approach to quantify the molecular diversity of screening libraries is presented. The approach is based on the Delimited Reference Chemical Subspace (DRCS) methodology, a new method that can be used to delimit the densest subspace spanned by a reference library in a reduced 2D continuous space. A total of 22 diversity indices were implemented or adapted to this methodology, which is used here to remove outliers and obtain a relevant cell-based partition of the subspace. The behavior of these indices was assessed and compared in various extreme situations and with respect to a set of theoretical rules that a diversity function should satisfy when libraries of different sizes have to be compared. Some gold-standard indices prove inappropriate in such a context, while none of the tested indices behaves perfectly in all cases. Five DRCS-based indices accounting for different aspects of diversity were finally selected, and a simple framework is proposed to use them effectively. Various libraries have been profiled with respect to more specific subspaces, which further illustrates the utility of the method.
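A minimal cell-based diversity index in the spirit of the DRCS partition can be sketched as follows; the 2D projection, grid size, and coverage score here are illustrative assumptions, not the actual 22 DRCS indices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy cell-based diversity: project compounds into a bounded 2D subspace,
# partition it into a grid, and score a library by the fraction of cells
# it occupies (one of many possible cell-based indices).
def cell_coverage(points, bounds=(-1.0, 1.0), n_cells=10):
    lo, hi = bounds
    ix = np.clip(((points - lo) / (hi - lo) * n_cells).astype(int),
                 0, n_cells - 1)
    occupied = {tuple(c) for c in ix}
    return len(occupied) / n_cells**2

diverse = rng.uniform(-1, 1, size=(500, 2))     # spread-out library
clustered = rng.normal(0, 0.02, size=(500, 2))  # tightly clustered library
cov_div = cell_coverage(diverse)
cov_clu = cell_coverage(clustered)
```

A spread-out library occupies most cells while a clustered one occupies very few, which is the behavior a size-comparable diversity index should reward.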
Random ensemble learning for EEG classification.
Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid
2018-01-01
Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors (k-NN), called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG of eight patients with temporal and extratemporal epilepsy was implemented in a distributed computing framework as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and both false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. Application of the solution to cases under investigation with ECoG has also been effected to demonstrate its utility. Copyright © 2017 Elsevier B.V. All rights reserved.
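The random-subspace-plus-majority-voting scheme described above can be sketched as follows; a nearest-centroid rule stands in for the SVM/MLP/ENN base learners, and the EEG features are replaced by synthetic data.

```python
import random
from collections import Counter

random.seed(0)

# Random subspace ensemble: each base classifier sees only a random subset
# of feature indices; predictions are combined by majority vote.
def make_data(n=100, d=8):
    data = []
    for label in (0, 1):
        for _ in range(n):
            x = [label * 2.0 + random.gauss(0, 0.5) for _ in range(d)]
            data.append((x, label))
    return data

def train_member(data, subspace):
    # Nearest-centroid rule restricted to the member's feature subspace.
    cent = {}
    for label in (0, 1):
        pts = [[x[j] for j in subspace] for x, y in data if y == label]
        cent[label] = [sum(col) / len(col) for col in zip(*pts)]
    return subspace, cent

def predict(member, x):
    subspace, cent = member
    xs = [x[j] for j in subspace]
    dist = {c: sum((a - b) ** 2 for a, b in zip(xs, cent[c])) for c in cent}
    return min(dist, key=dist.get)

data = make_data()
members = [train_member(data, random.sample(range(8), 3)) for _ in range(9)]

def vote(x):
    return Counter(predict(m, x) for m in members).most_common(1)[0][0]

acc = sum(vote(x) == y for x, y in data) / len(data)
```

An odd number of members avoids voting ties; in the paper the vote combines heterogeneous classifiers rather than copies of one rule.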
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
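For contrast, the classic Pulay DIIS extrapolation that LCIIS is compared against can be sketched as follows: find coefficients minimizing the norm of the combined error vector subject to the coefficients summing to one. The two-iterate example is contrived so the optimal coefficients are known in advance.

```python
import numpy as np

# Pulay DIIS: minimize ||sum_i c_i e_i||^2 subject to sum_i c_i = 1,
# via the bordered linear system with a Lagrange multiplier, then
# combine past iterates with the resulting coefficients.
def diis_combine(iterates, errors):
    m = len(errors)
    B = np.empty((m + 1, m + 1))
    B[:m, :m] = [[np.dot(ei.ravel(), ej.ravel()) for ej in errors]
                 for ei in errors]
    B[m, :m] = B[:m, m] = -1.0       # constraint row/column
    B[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0                    # enforces sum_i c_i = 1
    c = np.linalg.solve(B, rhs)[:m]
    return sum(ci * xi for ci, xi in zip(c, iterates)), c

# Two iterates with opposite error vectors: the symmetric combination
# cancels the error exactly.
e1, e2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
x1, x2 = np.array([2.0, 1.0]), np.array([0.0, 1.0])
x_new, c = diis_combine([x1, x2], [e1, e2])
```

LCIIS replaces this quadratic objective with the quartic Frobenius-norm commutator objective, which is why it needs the constrained Newton iteration described in the abstract.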
Polarimetric subspace target detector for SAR data based on the Huynen dihedral model
NASA Astrophysics Data System (ADS)
Larson, Victor J.; Novak, Leslie M.
1995-06-01
Two new polarimetric subspace target detectors are developed based on a dihedral signal model for bright peaks within a spatially extended target signature. The first is a coherent dihedral target detector based on the exact Huynen model for a dihedral. The second is a noncoherent dihedral target detector based on the Huynen model with an extra unknown phase term. Expressions for these polarimetric subspace target detectors are developed for both additive Gaussian clutter and more general additive spherically invariant random vector clutter including the K-distribution. For the case of Gaussian clutter with unknown clutter parameters, constant false alarm rate implementations of these polarimetric subspace target detectors are developed. The performance of these dihedral detectors is demonstrated with real millimeter-wave fully polarimetric SAR data. The coherent dihedral detector which is developed with a more accurate description of a dihedral offers no performance advantage over the noncoherent dihedral detector which is computationally more attractive. The dihedral detectors do a better job of separating a set of tactical military targets from natural clutter compared to a detector that assumes no knowledge about the polarimetric structure of the target signal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thallmair, Sebastian; Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians-Universität München, D-80538 München; Roos, Matthias K.
Quantum dynamics simulations require prior knowledge of the potential energy surface as well as the kinetic energy operator. Typically, they are evaluated in a low-dimensional subspace of the full configuration space of the molecule, since its dimensionality increases in proportion to the number of atoms. This entails the challenge of finding the most suitable subspace. We present an approach to design specially adapted reactive coordinates spanning this subspace. In addition to the essential geometric changes, these coordinates take into account the relaxation of the non-reactive coordinates without the necessity of performing geometry optimizations at each grid point. The method is demonstrated for an ultrafast photoinduced bond cleavage in a commonly used organic precursor for the generation of electrophiles. The potential energy surfaces for the reaction as well as the Wilson G-matrix as part of the kinetic energy operator are shown for a complex chemical reaction, both including the relaxation of the non-reactive coordinates on equal footing. A microscopic interpretation of the shape of the G-matrix elements allows one to analyze the impact of the non-reactive coordinates on the kinetic energy operator. Additionally, we compare quantum dynamics simulations with and without the relaxation of the non-reactive coordinates included in the kinetic energy operator to demonstrate its influence.
Extended Lagrangian Excited State Molecular Dynamics
Bjorgaard, Josiah August; Sheppard, Daniel Glen; Tretiak, Sergei; ...
2018-01-09
In this work, an extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born–Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. In conclusion, the XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree–Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).
Extended Lagrangian Excited State Molecular Dynamics.
Bjorgaard, J A; Sheppard, D; Tretiak, S; Niklasson, A M N
2018-02-13
An extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born-Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. The XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree-Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).
Pipeline Processing with an Iterative, Context-based Detection Model
2014-04-19
Abstract (fragment): stripping the incoming data stream of repeating and irrelevant signals prior to running primary detectors; adaptive beamforming and matched field processing. Keywords: pattern detectors, correlation detectors, subspace detectors, matched field detectors, nuclear explosion monitoring.
Ren, Fulong; Cao, Peng; Li, Wei; Zhao, Dazhe; Zaiane, Osmar
2017-01-01
Diabetic retinopathy (DR) is a progressive disease, and its detection at an early stage is crucial for saving a patient's vision. An automated screening system for DR can help reduce the chance of complete blindness due to DR while lowering the workload on ophthalmologists. Among the earliest signs of DR are microaneurysms (MAs). However, current schemes for MA detection tend to report many false positives because the detection algorithms have high sensitivity; inevitably, some non-MA structures are labeled as MAs in the initial MA identification step. This is a typical "class imbalance problem", and class-imbalanced data have detrimental effects on the performance of conventional classifiers. In this work, we propose an ensemble-based adaptive over-sampling algorithm for overcoming the class imbalance problem in false positive reduction, and we use Boosting, Bagging, and Random Subspace as the ensemble frameworks to improve microaneurysm detection. The ensemble-based over-sampling methods proposed here combine the strengths of adaptive over-sampling and ensembles. The objective of this amalgamation is to reduce the induction biases introduced by imbalanced data and to enhance the generalization performance of extreme learning machines (ELM). Experimental results show that our ASOBoost method has higher area under the ROC curve (AUC) and G-mean values than many existing class imbalance learning methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
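The combination of bagging with per-round minority over-sampling can be sketched as follows; the 1-D threshold rule stands in for the ELM base classifier, and the data are synthetic rather than MA candidates.

```python
import random
from collections import Counter

random.seed(2)

# Ensemble plus over-sampling for class imbalance: each bagging round
# bootstrap-resamples the data, randomly over-samples the minority class
# to parity, and trains a base learner; predictions are majority-voted.
majority = [(random.gauss(0.0, 1.0), 0) for _ in range(200)]
minority = [(random.gauss(3.0, 1.0), 1) for _ in range(20)]
data = majority + minority

def oversample(rows):
    neg = [r for r in rows if r[1] == 0]
    pos = [r for r in rows if r[1] == 1]
    pos = [random.choice(pos) for _ in range(len(neg))]  # minority to parity
    return neg + pos

def train_stump(rows):
    # Threshold halfway between class means on the balanced sample.
    m0 = sum(x for x, y in rows if y == 0) / sum(1 for _, y in rows if y == 0)
    m1 = sum(x for x, y in rows if y == 1) / sum(1 for _, y in rows if y == 1)
    t = (m0 + m1) / 2
    return lambda x: int(x > t)

ensemble = [train_stump(oversample(random.choices(data, k=len(data))))
            for _ in range(11)]

def vote(x):
    return Counter(clf(x) for clf in ensemble).most_common(1)[0][0]

recall = sum(vote(x) == 1 for x, y in minority) / len(minority)
```

Balancing each round keeps the learned threshold between the class means instead of being pulled toward the majority class, which is the point of over-sampling in false positive reduction.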
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2015-02-01
Multi-frequency subspace migration imaging techniques are usually adopted for the non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. This technique is very fast, effective, and robust, and can be applied not only to full-view but also to limited-view inverse problems if a suitable number of incident and corresponding scattered fields are applied and collected. However, in many works, the application of such techniques is heuristic. Motivated by this heuristic application, this study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrary-shaped, arc-like perfectly conducting cracks located in two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed as a linear combination of Bessel functions of integer order of the first kind. This follows from the structure of the Multi-Static Response (MSR) matrix collected in the far-field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of these expressions reveals certain properties of subspace migration and explains why multiple frequencies enhance imaging resolution. In particular, we carefully analyze the subspace migration functional and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode. Various results of numerical simulations performed on far-field data affected by large amounts of random noise are consistent with the analytical results derived in this study, and they provide a direction for future studies.
Herrero-Hahn, Raquel; Rojas, Juan Guillermo; Ospina-Díaz, Juan Manuel; Montoya-Juárez, Rafael; Restrepo-Medrano, Juan Carlos; Hueso-Montoro, César
2017-03-01
The level of cultural self-efficacy indicates the degree of confidence nursing professionals have in their ability to provide culturally competent care. A scale validation study was conducted to culturally adapt and validate the Cultural Self-Efficacy Scale for nursing professionals in Colombia, using a sample of 190 nurses recruited between September 2013 and April 2014. This sample was chosen via systematic random sampling from a finite population. The scale was culturally adapted. Cronbach's alpha for the revised scale was .978. Factor analysis revealed the existence of six factors grouped in three dimensions that explained 68% of the variance. The results demonstrated that the version of the Cultural Self-Efficacy Scale adapted to the Colombian context is a valid and reliable instrument for determining the level of cultural self-efficacy of nursing professionals.
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
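The diagonal-averaging step described above (the Toeplitz constraint, without the maximum-entropy extrapolation) can be sketched as follows; averaging along subdiagonals is the Frobenius-norm projection onto Toeplitz matrices, so it can only move the few-snapshot sample covariance closer to a Toeplitz clairvoyant covariance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Average the sample covariance along its subdiagonals, exploiting the
# shift invariance of a uniform line array in isotropic noise.
def toeplitz_average(R):
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        avg = np.mean(np.diagonal(R, k))
        T += avg * np.eye(n, k=k)
    return T

# Few-snapshot sample covariance of white noise: noisy, far from identity.
snapshots = rng.normal(size=(8, 12))            # 8 sensors, 12 snapshots
R = snapshots @ snapshots.T / snapshots.shape[1]
T = toeplitz_average(R)

# The true covariance (identity) is Toeplitz, so the projection reduces error.
err_sample = np.linalg.norm(R - np.eye(8))
err_toep = np.linalg.norm(T - np.eye(8))
```

Eigenvectors of `T` rather than of `R` would then feed the signal-subspace projectors used in the beamformer.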
NASA Astrophysics Data System (ADS)
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Zi, Yanyang; Yan, Ruqiang
2017-09-01
The gearbox of a wind turbine (WT) has the highest failure rate and downtime loss among all WT subsystems. Thus, gearbox health assessment for maintenance cost reduction is of paramount importance. The concurrence of multiple faults in gearbox components is a common phenomenon due to fault induction mechanisms. This problem should be considered before planning to replace the components of the WT gearbox. Therefore, the key fault patterns should be reliably identified from noisy observation data for the development of an effective maintenance strategy. However, most existing studies focusing on multiple fault diagnosis suffer from inappropriate division of fault information imposed by various rigorous decomposition principles or statistical assumptions, such as the smooth envelope principle of ensemble empirical mode decomposition and the mutual independence assumption of independent component analysis. Thus, this paper presents a joint subspace learning-based multiple fault detection (JSL-MFD) technique to construct different subspaces adaptively for different fault patterns. Its main advantage is its capability to learn multiple fault subspaces directly from the observation signal itself. It can also sparsely concentrate the feature information into a few dominant subspace coefficients. Furthermore, it can eliminate noise by simply performing coefficient shrinkage operations. Consequently, multiple fault patterns are reliably identified by utilizing the maximum fault information criterion. The superiority of JSL-MFD in multiple fault separation and detection is comprehensively investigated and verified by the analysis of a data set from a 750 kW WT gearbox. Results show that JSL-MFD is superior to a state-of-the-art technique in detecting hidden fault patterns and enhancing detection accuracy.
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of the techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the estimated system parameters by the presented identification method at low and high-levels of excitation amplitudes.
Blended particle filters for large-dimensional chaotic dynamical systems
Majda, Andrew J.; Qi, Di; Sapsis, Themistoklis P.
2014-01-01
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ at comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $$O(n^{-1/2})$$, the corresponding IRUQ converges at $$O(n^{-1})$$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
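A minimal sliced-inverse-regression estimate of an SDR direction can be sketched as follows; the slicing scheme, slice count, and synthetic model are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sliced inverse regression (SIR): standardize X, average it within slices
# of sorted y, and eigen-decompose the covariance of the slice means to
# estimate the sufficient dimension reduction direction.
n, d = 2000, 6
w = np.zeros(d); w[0] = 1.0                  # true SDR direction
X = rng.normal(size=(n, d))
y = (X @ w) ** 3 + 0.1 * rng.normal(size=n)  # y depends on x only via w.x

Z = (X - X.mean(0)) / X.std(0)               # standardized predictors
order = np.argsort(y)
slices = np.array_split(order, 10)
M = np.zeros((d, d))
for s in slices:
    m = Z[s].mean(0)
    M += (len(s) / n) * np.outer(m, m)

vals, vecs = np.linalg.eigh(M)
w_hat = vecs[:, -1]                          # top eigenvector estimates w
cosine = abs(w_hat @ w)
```

With the SDR direction in hand, the QoI can be regressed on the one-dimensional projection, which is the dimension-reduction step IRUQ builds its response surface on.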
Protecting the entanglement of twisted photons by adaptive optics
NASA Astrophysics Data System (ADS)
Leonhard, Nina; Sorelli, Giacomo; Shatokhin, Vyacheslav N.; Reinlein, Claudia; Buchleitner, Andreas
2018-01-01
We study the efficiency of adaptive optics (AO) correction for the free-space propagation of entangled photonic orbital-angular-momentum (OAM) qubit states to reverse moderate atmospheric turbulence distortions. We show that AO can significantly reduce crosstalk to modes within and outside the encoding subspace and thereby stabilize entanglement against turbulence. This method establishes a reliable quantum channel for OAM photons in turbulence, and it enhances the threshold turbulence strength for secure quantum communication by at least a factor 2.
Human Guidance Behavior Decomposition and Modeling
NASA Astrophysics Data System (ADS)
Feit, Andrew James
Trained humans are capable of high performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data is decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
Kerfriden, P.; Schmidt, K.M.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose to identify process zones in heterogeneous materials by tailored statistical tools. The process zone is redefined as the part of the structure where the random process cannot be correctly approximated in a low-dimensional deterministic space. Such a low-dimensional space is obtained by a spectral analysis performed on pre-computed solution samples. A greedy algorithm is proposed to identify both process zone and low-dimensional representative subspace for the solution in the complementary region. In addition to the novelty of the tools proposed in this paper for the analysis of localised phenomena, we show that the reduced space generated by the method is a valid basis for the construction of a reduced order model. PMID:27069423
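The spectral-analysis step, extracting a low-dimensional deterministic subspace from pre-computed solution samples via the SVD, can be sketched as follows on synthetic snapshots; a process zone would then be flagged wherever the projection residual stays large.

```python
import numpy as np

rng = np.random.default_rng(5)

# Solution samples that truly live in a 3-dimensional space are captured
# exactly by the three leading left singular vectors of the snapshot matrix.
basis = rng.normal(size=(100, 3))              # 3 latent modes, 100 dofs
samples = basis @ rng.normal(size=(3, 40))     # 40 solution snapshots

U, s, _ = np.linalg.svd(samples, full_matrices=False)
V = U[:, :3]                                   # low-dimensional subspace
residual = samples - V @ (V.T @ samples)       # projection error
rel_err = np.linalg.norm(residual) / np.linalg.norm(samples)
```

In the paper's setting the residual is examined locally: regions where no low-rank basis approximates the random process well are retained as the process zone, while the complement is reduced.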
NASA Astrophysics Data System (ADS)
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
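The ACE statistic shared by SACE, MACE, and RACE can be sketched in its standard single-target form as follows; the background statistics here are synthetic, and the MACE subspace extension is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Adaptive Cosine Estimator: squared cosine between the target signature s
# and a pixel x after whitening by the background covariance.
def ace(x, s, cov_inv):
    num = (s @ cov_inv @ x) ** 2
    den = (s @ cov_inv @ s) * (x @ cov_inv @ x)
    return num / den

d = 5
background = rng.normal(size=(500, d))
cov_inv = np.linalg.inv(np.cov(background.T))
s = rng.normal(size=d)

score_target = ace(3.0 * s, s, cov_inv)   # scaled target: perfect match
score_noise = ace(background[0], s, cov_inv)
```

The score is invariant to the pixel's overall scale, which is why ACE is a natural detector in both the radiance and reflectance domains: a pure (scaled) target scores exactly 1, and Cauchy-Schwarz bounds every score in [0, 1].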
Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E
2018-06-12
We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
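The inner Chebyshev-filter idea, amplifying the occupied part of the spectrum with a three-term recurrence instead of diagonalizing, can be sketched as follows on a synthetic Hamiltonian with a known gap; the two-level complementary subspace machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def chebyshev_filter(H, Y, degree, a, b):
    """Apply T_degree((H - c*I)/e) to the block Y via the 3-term recurrence.

    Eigenvalues inside [a, b] are damped to magnitude <= 1, while those
    below a are amplified exponentially in the degree.
    """
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y0 = Y
    Y1 = (H @ Y0 - c * Y0) / e
    for _ in range(2, degree + 1):
        Y0, Y1 = Y1, 2.0 * (H @ Y1 - c * Y1) / e - Y0
    return Y1

# Synthetic Hamiltonian: 4 "occupied" eigenvalues in [-2, -1], the rest in [0, 5].
n, n_occ = 50, 4
evals_true = np.concatenate([np.linspace(-2.0, -1.0, n_occ),
                             np.linspace(0.0, 5.0, n - n_occ)])
Qr, _ = np.linalg.qr(rng.normal(size=(n, n)))
H = Qr @ np.diag(evals_true) @ Qr.T
evecs = np.linalg.eigh(H)[1]

# Filter a random block on the unoccupied interval [0, 5].
Y = chebyshev_filter(H, rng.normal(size=(n, n_occ)), 20, 0.0, 5.0)
Q, _ = np.linalg.qr(Y)

# Overlap of the filtered subspace with the true occupied subspace (1 = exact).
overlap = np.linalg.norm(evecs[:, :n_occ].T @ Q) / np.sqrt(n_occ)
```

A degree-20 filter already aligns the random block with the occupied subspace to high accuracy, which is what lets the method avoid explicit diagonalization on every SCF iteration.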
Kacaroglu Vicdan, Ayse; Gulseven Karabacak, Bilgi
2016-01-01
The Roy Adaptation Model examines the individual in 4 modes: physiological mode, self-concept mode, role function mode, and interdependence mode. Hemodialysis treatment is associated with the Roy Adaptation Model as it involves fields that might be needed by the individual with chronic renal disease. This research was conducted as a randomized controlled experiment with the aim of determining the effect of education given in accordance with the Roy Adaptation Model on the physiological, psychological, and social adaptation of individuals undergoing hemodialysis treatment. The study was conducted at a dialysis center in Konya-Aksehir in Turkey between July 1 and December 31, 2012. The sample was composed of 82 individuals (41 experimental and 41 control). In the second interview, there was a decrease in the systolic blood pressures and body weights of the experimental group, an increase in the scores of functional performance and self-respect, and a decrease in the scores of psychosocial adaptation. In the control group, on the other hand, there was a decrease in the scores of self-respect and an increase in the scores of psychosocial adaptation. When the 2 groups were compared in terms of adaptation variables, a difference was found in favor of the experimental group. The training that was provided and evaluated for individuals receiving hemodialysis according to the 4 modes of the Roy Adaptation Model increased physical, psychological, and social adaptation.
Ameling, Jessica M.; Ephraim, Patti L.; Bone, Lee R.; Levine, David M.; Roter, Debra L.; Wolff, Jennifer L.; Hill-Briggs, Felicia; Fitzpatrick, Stephanie L.; Noronha, Gary J.; Fagan, Peter J.; Lewis-Boyer, LaPricia; Hickman, Debra; Simmons, Michelle; Purnell, Leon; Fisher, Annette; Cooper, Lisa A.; Aboumatar, Hanan J.; Albert, Michael C.; Flynn, Sarah J.; Boulware, L. Ebony
2014-01-01
African Americans suffer disproportionately poor hypertension control despite the availability of efficacious interventions. Using principles of community-based participatory research and implementation science, we adapted established hypertension self-management interventions to enhance interventions’ cultural relevance and potential for sustained effectiveness among urban African Americans. We obtained input from patients and their family members, their health care providers, and community members. The process required substantial time and resources, and the adapted interventions will be tested in a randomized controlled trial. PMID:24569158
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets ...
2012-03-01
with each SVM discriminating between a pair of the N total speakers in the data set. The N(N-1)/2 classifiers then vote on the final...classification of a test sample. The Random Forest classifier is an ensemble classifier that votes amongst decision trees generated with each node using...Forest vote, and the effects of overtraining will be mitigated by the fact that each decision tree is overtrained differently (due to the random
ERIC Educational Resources Information Center
Schopp, Laura H.; Clark, Mary J.; Lamberson, William R.; Uhr, David J.; Minor, Marian A.
2017-01-01
The purpose of this study was to determine and compare outcomes of two voluntary workplace health management methods: an adapted worksite self-management (WSM) approach and an intensive health monitoring (IM) approach. Research participants were randomly assigned to either the WSM group or the IM group by a computer-generated list (n = 180; 92 WSM…
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
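The recycling trick exploits the shift invariance of Krylov spaces: the Krylov space of J^T J with seed J^T r is the same for every damping parameter lambda, so one basis serves all damped solves. A minimal NumPy sketch of this idea (a Lanczos basis built once, reused for several lambdas; the paper's parallel Julia/MADS implementation is not reproduced here):

```python
import numpy as np

def lanczos(A, b, k):
    """k-step Lanczos: orthonormal V (n x k) and tridiagonal T (k x k)
    with V.T @ A @ V = T and V[:, 0] = b / ||b||."""
    n = b.size
    V = np.zeros((n, k)); alpha = np.zeros(k); beta = np.zeros(k - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# One Levenberg-Marquardt linearization: solve (J^T J + lam*I) d = -J^T r
# for several damping parameters lam, recycling ONE Krylov basis.
rng = np.random.default_rng(2)
J = rng.normal(size=(30, 12)); r = rng.normal(size=30)
A, g = J.T @ J, J.T @ r
V, T = lanczos(A, g, k=8)                  # built once, for the first lam
beta0 = np.linalg.norm(g)
steps = {}
for lam in (1.0, 0.1, 0.01):               # reuse V, T for every lam
    rhs = np.zeros(T.shape[0]); rhs[0] = beta0
    y = np.linalg.solve(T + lam * np.eye(T.shape[0]), rhs)
    steps[lam] = -V @ y                     # projected LM step
```

Each damped solve now costs only a small tridiagonal solve instead of a fresh large linear system, which is where the reported speed-up comes from.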
Matched signal detection on graphs: Theory and application to brain imaging data classification.
Hu, Chenhui; Sepulcre, Jorge; Johnson, Keith A; Fakhri, Georges E; Lu, Yue M; Li, Quanzheng
2016-01-15
Motivated by recent progress in signal processing on graphs, we have developed a matched signal detection (MSD) theory for signals with intrinsic structures described by weighted graphs. First, we regard graph Laplacian eigenvalues as frequencies of graph-signals and assume that the signal is in a subspace spanned by the first few graph Laplacian eigenvectors associated with lower eigenvalues. The conventional matched subspace detector can be applied to this case. Furthermore, we study signals that may not merely live in a subspace. Concretely, we consider signals with bounded variation on graphs and more general signals that are randomly drawn from a prior distribution. For bounded variation signals, the test is a weighted energy detector. For the random signals, the test statistic is the difference of signal variations on associated graphs, if a degenerate Gaussian distribution specified by the graph Laplacian is adopted. We evaluate the effectiveness of the MSD on graphs both with simulated and real data sets. Specifically, we apply MSD to the brain imaging data classification problem of Alzheimer's disease (AD) based on two independent data sets: 1) positron emission tomography data with Pittsburgh compound-B tracer of 30 AD and 40 normal control (NC) subjects, and 2) resting-state functional magnetic resonance imaging (R-fMRI) data of 30 early mild cognitive impairment and 20 NC subjects. Our results demonstrate that the MSD approach is able to outperform the traditional methods and help detect AD at an early stage, probably due to the success of exploiting the manifold structure of the data. Copyright © 2015. Published by Elsevier Inc.
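The first detector described above reduces to an energy-ratio test in the span of the leading Laplacian eigenvectors. A minimal sketch on a path graph (illustrative toy data, not the paper's brain-imaging pipeline):

```python
import numpy as np

# Path-graph Laplacian; its first eigenvectors are the smooth
# "low-frequency" graph modes.
n, k = 20, 4
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                  # unnormalized path Laplacian
evals, evecs = np.linalg.eigh(L)           # eigh returns ascending eigenvalues
U = evecs[:, :k]                           # first k Laplacian eigenvectors

def subspace_energy_ratio(x, U):
    """Fraction of signal energy captured by the candidate subspace."""
    return np.linalg.norm(U.T @ x) ** 2 / np.linalg.norm(x) ** 2

rng = np.random.default_rng(3)
smooth = U @ rng.normal(size=k)            # lies exactly in the subspace
noisy = rng.normal(size=n)                 # generic signal
```

A signal that really lives in the low-frequency subspace scores near 1, while a generic signal scores near k/n, so thresholding the ratio gives the matched subspace test.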
Gradient-based adaptation of general gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimization of gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
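The exponential-map parameterization can be sketched as follows: a general gaussian kernel k(x, z) = exp(-(x-z)^T M (x-z)) stays positive definite for any symmetric parameter matrix S when M = expm(S), and fixing trace(S) constrains the overall kernel width. This is an illustrative reconstruction, not the authors' code; the example matrix and points are invented:

```python
import numpy as np

def general_gaussian_kernel(X, Z, S):
    """k(x, z) = exp(-(x-z)^T M (x-z)) with M = expm(S), S symmetric.
    The matrix exponential keeps M positive definite for any S."""
    w, V = np.linalg.eigh(S)
    M = (V * np.exp(w)) @ V.T               # expm of a symmetric matrix
    d = X[:, None, :] - Z[None, :, :]       # pairwise differences
    return np.exp(-np.einsum('ijk,kl,ijl->ij', d, M, d))

# Scaling AND rotation of the input space are encoded in S; here S is
# chosen with trace zero, a constant-trace constraint on the parameters.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = R @ np.diag([1.0, -1.0]) @ R.T          # symmetric, trace(S) = 0
X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = general_gaussian_kernel(X, X, S)
```

Because M is built through eigh/exp, gradients with respect to S can be taken without ever leaving the cone of positive definite matrices, which is the point of the exponential map.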
The minimal SUSY B - L model: from the unification scale to the LHC
Ovrut, Burt A.; Purves, Austin; Spinner, Sogee
2015-06-26
Here, this paper introduces a random statistical scan over the high-energy initial parameter space of the minimal SUSY B - L model — denoted as the B - L MSSM. Each initial set of points is renormalization group evolved to the electroweak scale — being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of initial parameters that satisfies all such constraints is presented, shown to be robust, and shown to contain a wide range of different configurations of soft supersymmetry breaking masses. The low-energy predictions of each such “valid” point — such as the sparticle mass spectrum and, in particular, the LSP — are computed and then statistically analyzed over the full subspace of valid points. Finally, the amount of fine-tuning required is quantified and compared to the MSSM computed using an identical random scan. The B - L MSSM is shown to generically require less fine-tuning.
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
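The APSM machinery of metric, orthogonal, and oblique projections is not reproduced here, but the sliding-window sparsification idea itself is simple: cap the kernel expansion at a fixed number of terms and discard the oldest center when the window is full. A minimal perceptron-style kernel learner illustrating only that memory-bounding mechanism (illustrative sketch, with invented parameters):

```python
import numpy as np

def gauss(x, z, sigma=1.0):
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))

class SlidingWindowKernelClassifier:
    """Online kernel classifier whose expansion is capped at max_dict
    terms: the oldest center is dropped when the window is full, which
    bounds memory and lets the model track non-stationary data."""
    def __init__(self, max_dict=10, eta=0.5):
        self.centers, self.coeffs = [], []
        self.max_dict, self.eta = max_dict, eta

    def predict(self, x):
        return sum(a * gauss(x, c) for a, c in zip(self.coeffs, self.centers))

    def update(self, x, y):                   # y in {-1, +1}
        if y * self.predict(x) <= 0:          # perceptron-style margin check
            self.centers.append(x)
            self.coeffs.append(self.eta * y)
            if len(self.centers) > self.max_dict:
                self.centers.pop(0)           # slide the window
                self.coeffs.pop(0)

rng = np.random.default_rng(4)
clf = SlidingWindowKernelClassifier(max_dict=15)
for _ in range(200):
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    clf.update(x, y)
```

However long the stream runs, the expansion never exceeds the imposed dictionary bound, which is the property the paper's subspace-based design guarantees in a more principled way.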
Wavelet subspace decomposition of thermal infrared images for defect detection in artworks
NASA Astrophysics Data System (ADS)
Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.
2016-07-01
Health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. The classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks: the artwork is heated and its thermal response is recorded with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed which favors identification of the otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the subspace level selection. A novel criterion based on regional mutual information is proposed for the latter. The approach is successfully tested on a laboratory-based sample as well as real artworks. A new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm. The algorithm is successfully deployed for both laboratory-based and real artworks.
Domain adaptation via transfer component analysis.
Pan, Sinno Jialin; Tsang, Ivor W; Kwok, James T; Yang, Qiang
2011-02-01
Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
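Unsupervised TCA reduces to a generalized eigenproblem: with a kernel matrix K over both domains, an MMD coefficient matrix L, and a centering matrix H, the transfer components are the leading eigenvectors of (KLK + mu*I)^{-1} KHK. A compact sketch of that computation (illustrative reconstruction with a Gaussian kernel and toy data, not the authors' code):

```python
import numpy as np

def tca(X_src, X_tgt, dim=2, mu=1.0, sigma=1.0):
    """Minimal transfer component analysis sketch: directions in kernel
    feature space that shrink the MMD between domains while preserving
    variance (eigenproblem of (K L K + mu I)^{-1} K H K)."""
    X = np.vstack([X_src, X_tgt])
    ns, nt, n = len(X_src), len(X_tgt), len(X_src) + len(X_tgt)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))                 # Gaussian kernel
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    L = np.outer(e, e)                                 # MMD coefficient matrix
    H = np.eye(n) - np.full((n, n), 1.0 / n)           # centering matrix
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    w, V = np.linalg.eig(A)
    W = np.real(V[:, np.argsort(-np.real(w))[:dim]])   # top transfer components
    Z = K @ W                                          # embedded samples
    return Z[:ns], Z[ns:]

rng = np.random.default_rng(5)
Zs, Zt = tca(rng.normal(size=(20, 3)), rng.normal(size=(20, 3)) + 2.0)
```

A classifier trained on Zs can then be applied to Zt, which is the cross-domain use the abstract describes.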
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
In many military and homeland security persistent surveillance applications, accurate detection of different skin colors in varying observability and illumination conditions is a valuable capability for video analytics. One of those applications is In-Vehicle Group Activity (IVGA) recognition, in which significant changes in observability and illumination may occur during the course of a specific human group activity of interest. Most of the existing skin color detection algorithms, however, are unable to perform satisfactorily in confined operational spaces with partial observability and occultation, as well as under diverse and changing levels of illumination intensity, reflection, and diffraction. In this paper, we investigate the salient features of ten popular color spaces for skin subspace color modeling. More specifically, we examine the advantages and disadvantages of each of these color spaces, as well as the stability and suitability of their features in differentiating skin colors under various illumination conditions. The salient features of different color subspaces are methodically discussed and graphically presented. Furthermore, we present robust and adaptive algorithms for skin color detection based on this analysis. Through examples, we demonstrate the efficiency and effectiveness of these new color skin detection algorithms and discuss their applicability for skin detection in IVGA recognition applications.
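As a concrete baseline for the color-space analysis discussed above, the classic fixed-range chrominance rule in YCbCr is easy to state in code. The thresholds below are the widely used Chai-Ngan ranges, shown only as a non-adaptive baseline; the paper's point is precisely that such fixed ranges break down under changing illumination:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """RGB (0-255) to YCbCr using the ITU-R BT.601 full-range constants."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb):
    """Fixed-range chrominance rule (77<=Cb<=127, 133<=Cr<=173)."""
    ycc = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycc[..., 1], ycc[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

pixels = np.array([[224, 172, 138],    # light skin tone
                   [30, 80, 200]])     # blue, clearly not skin
mask = skin_mask(pixels)
```

Luminance Y is deliberately ignored by the rule, which gives some robustness to brightness changes but none to the reflection and diffraction effects the abstract targets with adaptive models.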
Indoor Subspacing to Implement Indoorgml for Indoor Navigation
NASA Astrophysics Data System (ADS)
Jung, H.; Lee, J.
2015-10-01
With the increasing demand for indoor navigation, there have been many attempts to develop applicable indoor networks. Representing a room as a single node is not sufficient for complex and large buildings. As OGC established IndoorGML, subspacing to partition space for constructing a logical network was introduced. Concerning subspacing for indoor networks, transition spaces such as halls or corridors also have to be considered. This study presents the subspacing process for creating an indoor network in a shopping mall. Furthermore, categorization of transition spaces is performed and subspacing of these spaces is considered. Halls and squares in the mall are specifically defined for subspacing. Finally, an implementation of the subspacing process for an indoor network is presented.
Optimizing Cubature for Efficient Integration of Subspace Deformations
An, Steven S.; Kim, Theodore; James, Doug L.
2009-01-01
We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r^2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St. Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation—Animation; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically based modeling; G.1.4 [Mathematics of Computing]: Numerical Analysis—Quadrature and Numerical Differentiation. PMID:19956777
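The cubature-fitting idea can be shown in one dimension: choose weights at a few sample points so that the weighted point evaluations reproduce the full integral across a set of training parameters ("poses"). The sketch below uses a plain least-squares fit over fixed points with an invented integrand; the paper selects points greedily and fits the weights with non-negative least squares:

```python
import numpy as np

rng = np.random.default_rng(6)
poses = rng.uniform(0.5, 2.0, size=12)              # training parameters q

def f(x, q):                                        # force-density stand-in
    return np.exp(-q * x)

# Exact integral of f over [0, 1] for each training pose:
targets = np.array([(1 - np.exp(-q)) / q for q in poses])

x_cub = np.array([0.1, 0.5, 0.9])                   # fixed cubature points
A = np.array([[f(x, q) for x in x_cub] for q in poses])
w, *_ = np.linalg.lstsq(A, targets, rcond=None)     # fit cubature weights

q_test = 1.3                                        # an unseen pose
approx = f(x_cub, q_test) @ w                       # 3-point cubature rule
exact = (1 - np.exp(-q_test)) / q_test
```

Three fitted points reproduce the integral to high accuracy across the training range, which is the same economy the paper exploits: O(r) points in place of a dense volume integral.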
Robust uncertainty evaluation for system identification on distributed wireless platforms
NASA Astrophysics Data System (ADS)
Crinière, Antoine; Döhler, Michael; Le Cam, Vincent; Mevel, Laurent
2016-04-01
Health monitoring of civil structures by system identification procedures from automatic control is now accepted as a valid approach. These methods provide frequencies and modeshapes from the structure over time. For continuous monitoring, the excitation of a structure is usually ambient, thus unknown and assumed to be noise. Hence, all estimates from the vibration measurements are realizations of random variables with inherent uncertainty due to (unknown) process and measurement noise and finite data length. The underlying algorithms usually run under Matlab under the assumption of a large memory pool and considerable computational power. Even under these premises, computation and memory usage are heavy and not realistic for embedding in on-site sensor platforms such as the PEGASE platform. Moreover, the current push for distributed wireless systems calls for algorithmic adaptation to lower data exchanges and maximize local processing. Finally, a recent breakthrough in system identification allows us to process both frequency information and its related uncertainty together from one and only one data sequence, at the expense of a computational and memory explosion that requires even more careful attention than before. The present approach focuses on a system identification procedure called multi-setup subspace identification that allows both frequencies and their related variances to be processed from a set of interconnected wireless systems, with all computation running locally within the limited memory pool of each system before being merged on a host supervisor. Careful attention is given to data exchanges and I/O satisfying OGC standards, as well as to minimizing memory footprints and maximizing computational efficiency. These systems are designed for autonomous operation in the field and could later be included in a wide distributed architecture such as the Cloud2SM project.
The usefulness of these strategies is illustrated on data from a progressive damage action on a prestressed concrete bridge.
Multiple site receptor modeling with a minimal spanning tree combined with a Kohonen neural network
NASA Astrophysics Data System (ADS)
Hopke, Philip K.
1999-12-01
A combination of two pattern recognition methods has been developed that allows the generation of geographical emission maps from multivariate environmental data. In such a projection into a visually interpretable subspace by a Kohonen Self-Organizing Feature Map, the topology of the higher dimensional variable space can be preserved, but part of the information about the correct neighborhood among the sample vectors will be lost. This can partly be compensated for by an additional projection of Prim's Minimal Spanning Tree into the trained neural network. This new environmental receptor modeling technique has been adapted for multiple sampling sites. The behavior of the method has been studied using simulated data. Subsequently, the method has been applied to mapping data sets from the Southern California Air Quality Study. The projection of 17 chemical variables measured at up to 8 sampling sites provided a 2D, visually interpretable, geometrically reasonable arrangement of air pollution sources in the South Coast Air Basin.
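A minimal Kohonen Self-Organizing Feature Map, the first ingredient of the combined method, can be sketched in a few lines. This is a generic textbook SOM on synthetic data, not the receptor-modeling pipeline itself; grid size, schedules, and data are invented:

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=2000, eta0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOM: each grid node holds a weight vector; the
    best-matching unit and its grid neighbours are pulled toward each
    sample, so the grid topology comes to mirror the data topology."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    W = rng.normal(size=(gx * gy, data.shape[1]))
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(1))        # best-matching unit
        frac = t / iters
        eta = eta0 * (1 - frac)                       # shrinking learning rate
        sigma = sigma0 * (1 - frac) + 0.5             # shrinking neighbourhood
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        W += eta * h[:, None] * (x - W)               # pull toward sample
    return W

rng = np.random.default_rng(7)
data = rng.normal(size=(200, 3))
W = train_som(data)
```

Overlaying Prim's minimal spanning tree on the trained grid, as the abstract describes, restores some of the neighborhood information that the 2D projection discards.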
Computing Arm Movements with a Monkey Brainet.
Ramakrishnan, Arjun; Ifft, Peter J; Pais-Vieira, Miguel; Byun, Yoon Woo; Zhuang, Katie Z; Lebedev, Mikhail A; Nicolelis, Miguel A L
2015-07-09
Traditionally, brain-machine interfaces (BMIs) extract motor commands from a single brain to control the movements of artificial devices. Here, we introduce a Brainet that utilizes very-large-scale brain activity (VLSBA) from two (B2) or three (B3) nonhuman primates to engage in a common motor behaviour. A B2 generated 2D movements of an avatar arm where each monkey contributed equally to X and Y coordinates; or one monkey fully controlled the X-coordinate and the other controlled the Y-coordinate. A B3 produced arm movements in 3D space, while each monkey generated movements in 2D subspaces (X-Y, Y-Z, or X-Z). With long-term training we observed increased coordination of behavior, increased correlations in neuronal activity between different brains, and modifications to neuronal representation of the motor plan. Overall, performance of the Brainet improved owing to collective monkey behaviour. These results suggest that primate brains can be integrated into a Brainet, which self-adapts to achieve a common motor goal.
Moss, Aleezé S; Reibel, Diane K; Greeson, Jeffrey M; Thapar, Anjali; Bubb, Rebecca; Salmon, Jacqueline; Newberg, Andrew B
2015-06-01
The purpose of this study was to test the feasibility and effectiveness of an adapted 8-week Mindfulness-Based Stress Reduction (MBSR) program for elders in a continuing care community. This mixed-methods study used both quantitative and qualitative measures. A randomized waitlist control design was used for the quantitative aspect of the study. Thirty-nine elders were randomized to MBSR (n = 20) or a waitlist control group (n = 19); mean age was 82 years. Both groups completed pre-post measures of health-related quality of life, acceptance and psychological flexibility, facets of mindfulness, self-compassion, and psychological distress. A subset of MBSR participants completed qualitative interviews. MBSR participants showed significantly greater improvement in acceptance and psychological flexibility and in role limitations due to physical health. In the qualitative interviews, MBSR participants reported increased awareness, less judgment, and greater self-compassion. Study results demonstrate the feasibility and potential effectiveness of an adapted MBSR program in promoting mind-body health for elders.
Chai, Xin; Wang, Qisong; Zhao, Yongping; Li, Yongqiang; Liu, Dan; Liu, Xin; Bai, Ou
2017-01-01
Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of the classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution which is similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. 
The subject-to-subject offline experimental results demonstrate that our method achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which achieves values of 77.88% and 7.33% on average, respectively. For the online analysis, the average classification accuracy and standard deviation of ASFM in the subject-to-subject evaluation for all 15 subjects in the dataset were 75.11% and 7.65%, respectively, a significant performance improvement over the best baseline, LR, which achieves 56.38% and 7.48%, respectively. The experimental results confirm the effectiveness of the proposed method relative to state-of-the-art methods. Moreover, the computational efficiency of the proposed ASFM method is much better than that of standard domain adaptation; if the numbers of training samples and test samples are kept within a certain range, it is suitable for real-time classification. It can be concluded that ASFM is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the field of EEG-based emotion recognition. PMID:28467371
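The closed-form subspace matching that ASFM builds on can be illustrated with a short NumPy sketch: PCA bases for source and target, and the regularizer-free alignment matrix M = PsᵀPt applied to the source coordinates. This is a generic subspace-alignment sketch, not the authors' exact ASFM procedure (which additionally reduces the conditional-distribution discrepancy).

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions (columns) of centered X via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                        # n_features x d

def align_source_to_target(Xs, Xt, d=10):
    """Map source PCA coordinates onto the target subspace with the
    closed-form matrix M = Ps^T Pt (no regularization term needed).
    A generic subspace-alignment sketch, not the exact ASFM method."""
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                          # closed-form alignment transform
    Zs = (Xs - Xs.mean(axis=0)) @ Ps @ M   # aligned source features
    Zt = (Xt - Xt.mean(axis=0)) @ Pt       # target features
    return Zs, Zt
```

A logistic-regression classifier would then be trained on the aligned source features Zs and applied directly to the unlabeled target features Zt.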
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all of the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities, and correlations obtained using DRAM, DREAM, and direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model; the energy statistics test assesses the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
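The adaptation component of samplers like DRAM can be sketched in a few lines: a random-walk Metropolis chain whose proposal covariance is periodically re-estimated from the chain history. This is a minimal illustrative sketch of adaptive Metropolis only; DRAM additionally uses delayed rejection, which is omitted here, and all names are ours.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, adapt_start=500, seed=0):
    """Minimal adaptive Metropolis sampler: the proposal covariance is
    rescaled every 100 iterations from the chain history (Haario-style
    adaptation; a sketch, not the DRAM algorithm itself)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    d = len(x)
    lp = log_post(x)
    cov = np.eye(d)                        # initial proposal covariance
    sd = 2.4**2 / d                        # standard adaptive-Metropolis scaling
    chain = np.empty((n_iter, d))
    for i in range(n_iter):
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
        if i >= adapt_start and i % 100 == 0:
            # Adapt: empirical covariance of the chain so far, regularized.
            cov = sd * np.cov(chain[:i].T) + 1e-8 * np.eye(d)
    return chain
```

Run on a log-posterior of interest, the returned chain (after burn-in) approximates draws from the target density.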
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
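The active subspace construction mentioned above admits a compact sketch: eigendecompose the average outer product of sampled gradients, C = E[∇f ∇fᵀ], and keep the dominant eigenvectors as the directions that most strongly affect the response. This is the standard construction from the cited active subspace literature, written generically.

```python
import numpy as np

def active_subspace(grads, k=1):
    """Estimate a k-dimensional active subspace from sampled gradients:
    eigendecompose C = (1/N) sum_i g_i g_i^T and return eigenvalues
    (descending) and the k dominant eigenvectors."""
    C = grads.T @ grads / len(grads)      # Monte Carlo estimate of E[g g^T]
    w, V = np.linalg.eigh(C)              # eigh returns ascending order
    return w[::-1], V[:, ::-1][:, :k]     # dominant directions first
```

For a ridge function f(x) = h(aᵀx), all gradients are parallel to a, so the leading eigenvector recovers the single influential direction.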
Stability and diversity in collective adaptation
NASA Astrophysics Data System (ADS)
Sato, Yuzuru; Akiyama, Eizo; Crutchfield, James P.
2005-10-01
We derive a class of macroscopic differential equations that describe collective adaptation, starting from a discrete-time stochastic microscopic model. The behavior of each agent is a dynamic balance between adaptation that locally achieves the best action and memory loss that leads to randomized behavior. We show that, although individual agents interact with their environment and other agents in a purely self-interested way, macroscopic behavior can be interpreted as game dynamics. Application to several familiar, explicit game interactions shows that the adaptation dynamics exhibits a diversity of collective behaviors. The simplicity of the assumptions underlying the macroscopic equations suggests that these behaviors should be expected broadly in collective adaptation. We also analyze the adaptation dynamics from an information-theoretic viewpoint and discuss self-organization induced by the dynamics of uncertainty, giving a novel view of collective adaptation.
Effects of a Culture-Adaptive Forgiveness Intervention for Chinese College Students
ERIC Educational Resources Information Center
Ji, Mingxia; Hui, Eadaoin; Fu, Hong; Watkins, David; Tao, Linjin; Lo, Sing Kai
2016-01-01
The understanding and application of forgiveness varies across cultures. The current study aimed to examine the effect of a culture-adaptive Forgiveness Intervention on forgiveness attitude, self-esteem, empathy and anxiety of Mainland Chinese college students. Thirty-six participants were randomly allocated to either experimental groups or a…
Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.
Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal
2011-06-01
This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.
Yu, Hualong; Hong, Shufang; Yang, Xibei; Ni, Jun; Dan, Yuanyuan; Qin, Bin
2013-01-01
DNA microarray technology can measure the activities of tens of thousands of genes simultaneously, which provides an efficient way to diagnose cancer at the molecular level. Although this strategy has attracted significant research attention, most studies neglect an important problem, namely, that most DNA microarray datasets are skewed, which causes traditional learning algorithms to produce inaccurate results. Some studies have considered this problem, yet they merely focus on the binary-class problem. In this paper, we deal with the multiclass imbalanced classification problem, as encountered in cancer DNA microarrays, by using ensemble learning. We utilize the one-against-all coding strategy to transform the multiclass problem into multiple binary-class problems, each of which applies the feature subspace technique, an evolving version of random subspace, to generate multiple diverse training subsets. Next, we introduce one of two different correction technologies, namely, decision threshold adjustment or random undersampling, into each training subset to alleviate the damage of class imbalance. Specifically, a support vector machine is used as the base classifier, and a novel voting rule called counter voting is presented for making the final decision. Experimental results on eight skewed multiclass cancer microarray datasets indicate that, unlike many traditional classification approaches, our methods are insensitive to class imbalance.
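The data-preparation step described above, diverse training subsets built from random feature subspaces plus random undersampling of the majority class, can be sketched as follows. The function name and parameters are illustrative; the full counter-voting ensemble and SVM training are omitted.

```python
import numpy as np

def make_balanced_subspace_sets(X, y, n_sets=5, frac_feat=0.5, seed=0):
    """Build diverse binary training subsets: each subset pairs a random
    feature subspace with a random undersampling of the majority class
    (a generic sketch of the data preparation, not the full ensemble)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    if len(pos) > len(neg):
        pos, neg = neg, pos               # ensure pos is the minority class
    sets = []
    for _ in range(n_sets):
        feats = rng.choice(p, size=max(1, int(frac_feat * p)), replace=False)
        keep_neg = rng.choice(neg, size=len(pos), replace=False)
        idx = np.concatenate([pos, keep_neg])   # balanced sample indices
        sets.append((idx, feats))
    return sets
```

Each (idx, feats) pair would then train one base classifier on X[idx][:, feats], and the per-subset decisions would be combined by voting.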
Learning Robust and Discriminative Subspace With Low-Rank Constraints.
Li, Sheng; Fu, Yun
2016-11-01
In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.
2017-04-12
In this paper, we generalize the well-known index coding problem to exploit structure in the source data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding), as opposed to the traditional index coding problem, which is subspace-unaware. We also propose an efficient algorithm based on the alternating minimization approach to obtain near-optimal index codes for both the subspace-aware and -unaware cases. Our simulations indicate that, under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Finally, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
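The periodic-Pulay scheme described above can be sketched on a generic fixed-point problem x = g(x): linear mixing on most iterations, with a Pulay (DIIS) extrapolation over the stored residual history every few iterations. This is an illustrative sketch on a toy problem, not the authors' electronic-structure implementation; all names are ours.

```python
import numpy as np

def periodic_pulay(g, x0, alpha=0.5, period=4, m=5, tol=1e-10, maxit=500):
    """Periodic Pulay mixing for the fixed point x = g(x): plain linear
    mixing on most iterations, Pulay (DIIS) extrapolation over the last
    m residuals every `period`-th iteration (a sketch of the scheme)."""
    x = np.asarray(x0, float)
    X, R = [], []                          # iterate / residual histories
    for it in range(1, maxit + 1):
        r = g(x) - x                       # residual of the fixed point
        if np.linalg.norm(r) < tol:
            return x, it
        X.append(x.copy()); R.append(r.copy())
        X, R = X[-m:], R[-m:]              # keep a window of m entries
        if it % period == 0 and len(R) > 1:
            # Pulay step: minimize ||sum_i c_i r_i|| subject to sum_i c_i = 1,
            # via the bordered normal equations (solved robustly with lstsq).
            Rm = np.array(R); k = len(R)
            A = np.ones((k + 1, k + 1)); A[:k, :k] = Rm @ Rm.T; A[-1, -1] = 0.0
            rhs = np.zeros(k + 1); rhs[-1] = 1.0
            c = np.linalg.lstsq(A, rhs, rcond=None)[0][:k]
            x = c @ (np.array(X) + alpha * Rm)
        else:
            x = x + alpha * r              # plain linear mixing step
    return x, maxit
```

Setting period=1 recovers ordinary DIIS on every iteration; larger periods trade extrapolation frequency for robustness, which is the point of the generalization.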
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and to seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
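The projection idea summarized above can be shown concretely: build an Arnoldi basis V of the Krylov subspace K_m(A, b), then impose a Galerkin condition by solving the small projected system (VᵀAV)y = Vᵀb and taking x = Vy. This generic sketch (in the spirit of the full-orthogonalization method) is ours, not code from the report.

```python
import numpy as np

def arnoldi_solve(A, b, m=20):
    """Krylov projection sketch: Arnoldi basis of K_m(A, b), then solve
    the m-dimensional Galerkin system (V^T A V) y = V^T b, x = V y."""
    n = len(b)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt orthogonalization
        w -= V[:, :j] @ (V[:, :j].T @ w)   # second pass for stability
        nw = np.linalg.norm(w)
        if nw < 1e-12:                     # Krylov subspace exhausted early
            V = V[:, :j]
            break
        V[:, j] = w / nw
    y = np.linalg.solve(V.T @ A @ V, V.T @ b)   # small projected problem
    return V @ y
```

When m reaches the dimension of the invariant subspace containing the solution, the projected solve reproduces the exact solution; for m much smaller than N it gives the cheap approximation the overview describes.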
Domínguez Fernández, Julián Manuel; Padilla Segura, Inés; Domínguez Fernández, Javier; Domínguez Padilla, María
2013-04-01
To define the different patterns of behavior among health care workers in Ceuta. Cross-sectional and descriptive. SITES AND PARTICIPANTS: 200 randomly selected workers in the Ceuta Health Care Area, using stratified sampling by workplace, job and sex. The instruments used were the MBI, the LIPT by Leymann, a reduced version of the Pinillos CEP, and Musitu self-concept and adaptation behavior, all adapted in the context of occupational health examinations. Principal components analysis allowed us to define 5 components: one strictly related to the mobbing scale, with 85% weight; another for burnout, with 70% weight; a third for adaptation and family satisfaction, with 64% weight; a fourth with adaptation, control, emotional self-concept, professional achievement and occupational self-concept, with 52% weight; and a fifth, defined by social evaluations in the levels of extraversion and social adjustment, with 73% weight. Five distinct behavioral characteristics of particular interest for clinical work are highlighted: burnout, mobbing, family and work satisfaction, individual occupational satisfaction, and sociability. Copyright © 2012 Elsevier España, S.L. All rights reserved.
Predictors of Career Adaptability Skill among Higher Education Students in Nigeria
ERIC Educational Resources Information Center
Ebenehi, Amos Shaibu; Rashid, Abdullah Mat; Bakar, Ab Rahim
2016-01-01
This paper examined predictors of career adaptability skill among higher education students in Nigeria. A sample of 603 higher education students randomly selected from six colleges of education in Nigeria participated in this study. A set of self-reported questionnaire was used for data collection, and multiple linear regression analysis was used…
Sinclair, Ka'imi A; Makahi, Emily K; Shea-Solatorio, Cappy; Yoshimura, Sheryl R; Townsend, Claire K M; Kaholokula, J Keawe'aimoku
2013-02-01
Culturally adapted interventions are needed to reduce diabetes-related morbidity and mortality among Native Hawaiian and Pacific Islander people. The purpose of this study was to pilot test the effectiveness of a culturally adapted diabetes self-management intervention. Participants were randomly assigned in an unbalanced design to the Partners in Care intervention (n = 48) or a wait-list control group (n = 34). Assessments of hemoglobin A1c, understanding of diabetes self-management, performance of self-care activities, and diabetes-related distress were measured at baseline and 3 months (post-intervention). Analysis of covariance was used to test between-group differences. The community steering committee and focus group data informed the cultural adaptation of the intervention. There were significant baseline-adjusted differences at 3 months between the Partners in Care and wait-list control groups in intent-to-treat (p < 0.001) and complete-case analyses (p < 0.0001) for A1c, understanding (p < 0.0001), and performing diabetes self-management (p < 0.0001). A culturally adapted diabetes self-management intervention of short duration was an effective approach to improving glycemic control among Native Hawaiians and Pacific Islanders.
Adapting Pipeline Architectures to Track Developing Aftershock Sequences and Recurrent Explosions
2014-02-14
Sumatra earthquake was used to study the performance of subspace detectors to detect and classify events from within a very large (Area = ~250,000 km2... detectors to identify and organize repeating waveforms discovered in multichannel seismic data streams. The framework has been tested and evaluated on...a variety of different test cases from mining blasts in Central Asia to moderate and large earthquake aftershock sequences. The framework performs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dul, F.A.; Arczewski, K.
1994-03-01
Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It is able to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can easily be found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as large weakly nonsymmetric eigenproblems, is also considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A - σB)x = By are solved by various iterative methods; in particular, the conjugate gradient method can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
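The classical building block that the two-phase method above accelerates, subspace iteration with a Rayleigh-Ritz projection, can be sketched compactly for a standard symmetric eigenproblem (B = I). This is a textbook sketch for illustration, not the paper's two-phase algorithm.

```python
import numpy as np

def subspace_iteration(A, k, n_iter=200, seed=0):
    """Basic subspace iteration with Rayleigh-Ritz for the k dominant
    eigenpairs of a symmetric matrix A (illustrative building block,
    without the Rayleigh-quotient/CG acceleration of the paper)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A @ Q)        # block power step + orthonormalization
    H = Q.T @ A @ Q                       # Rayleigh-Ritz projection (k x k)
    w, S = np.linalg.eigh(H)
    return w[::-1], Q @ S[:, ::-1]        # dominant eigenpairs first
```

For a generalized problem Ax = λBx, each power step would instead solve a shifted system, which is exactly where the paper's iterative (A - σB) solves and preconditioners enter.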
All-in-One Shape-Adaptive Self-Charging Power Package for Wearable Electronics.
Guo, Hengyu; Yeh, Min-Hsin; Lai, Ying-Chih; Zi, Yunlong; Wu, Changsheng; Wen, Zhen; Hu, Chenguo; Wang, Zhong Lin
2016-11-22
Recently, self-charging power units consisting of an energy harvesting device and an energy storage device have set the foundation for building self-powered wearable systems. However, the flexibility of the power unit working under extremely complex deformations (e.g., stretching, twisting, and bending) remains a key issue. Here, we present a prototype of an all-in-one shape-adaptive self-charging power unit that can scavenge random body motion energy under complex mechanical deformations and then directly store it in a supercapacitor unit to build up a self-powered system for wearable electronics. A kirigami-paper-based supercapacitor (KP-SC) was designed to work as the flexible energy storage device (stretchability up to 215%). An ultrastretchable and shape-adaptive silicone rubber triboelectric nanogenerator (SR-TENG) was utilized as the flexible energy harvesting device. By combining them with a rectifier, a stretchable, twistable, and bendable self-charging power package was achieved for sustainably driving wearable electronics. This work provides a potential platform for flexible self-powered systems.
Unsupervised spike sorting based on discriminative subspace learning.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-01-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies that rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering; it uses the histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering, learning a discriminative 1-dimensional subspace for clustering at each level of the hierarchy until an almost unimodal distribution in that subspace is achieved. The algorithms are tested on synthetic and in vivo data and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in a lower-dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.
The trust-region self-consistent field method in Kohn-Sham density-functional theory.
Thøgersen, Lea; Olsen, Jeppe; Köhn, Andreas; Jørgensen, Poul; Sałek, Paweł; Helgaker, Trygve
2005-08-15
The trust-region self-consistent field (TRSCF) method is extended to the optimization of the Kohn-Sham energy. In the TRSCF method, both the Roothaan-Hall step and the density-subspace minimization step are replaced by trust-region optimizations of local approximations to the Kohn-Sham energy, leading to a controlled, monotonic convergence towards the optimized energy. Previously the TRSCF method has been developed for optimization of the Hartree-Fock energy, which is a simple quadratic function in the density matrix. However, since the Kohn-Sham energy is a nonquadratic function of the density matrix, the local energy functions must be generalized for use with the Kohn-Sham model. Such a generalization, which contains the Hartree-Fock model as a special case, is presented here. For comparison, a rederivation of the popular direct inversion in the iterative subspace (DIIS) algorithm is performed, demonstrating that the DIIS method may be viewed as a quasi-Newton method, explaining its fast local convergence. In the global region the convergence behavior of DIIS is less predictable. The related energy DIIS technique is also discussed and shown to be inappropriate for the optimization of the Kohn-Sham energy.
Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł
2007-04-21
A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.
van Aggelen, Helen; Verstichel, Brecht; Bultinck, Patrick; Van Neck, Dimitri; Ayers, Paul W; Cooper, David L
2011-02-07
Variational second order density matrix theory under "two-positivity" constraints tends to dissociate molecules into unphysical fractionally charged products with too low energies. We aim to construct a qualitatively correct potential energy surface for F₃⁻ by applying subspace energy constraints on mono- and diatomic subspaces of the molecular basis space. Monoatomic subspace constraints do not guarantee correct dissociation: the constraints are thus geometry dependent. Furthermore, the number of subspace constraints needed for correct dissociation does not grow linearly with the number of atoms. The subspace constraints do impose correct chemical properties in the dissociation limit and size-consistency, but the structure of the resulting second order density matrix method does not exactly correspond to a system of noninteracting units.
Reverse time migration by Krylov subspace reduced order modeling
NASA Astrophysics Data System (ADS)
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main performance costs of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we proposed an algorithm that addresses all of the aforementioned factors. Our proposed method benefits from the Krylov subspace method, which computes certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling is amenable to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimension. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
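A minimal numeric sketch of the combination the abstract analyzes: an inexact Newton iteration whose direction comes from a few steps of a Krylov method (here CG on the normal equations J^T J d = -J^T F), globalized with a backtracking linesearch. The test problem and all tolerances are illustrative, not from the paper.

```python
import math

def F(x):                                  # toy nonlinear system, root at (sqrt(2), sqrt(2))
    return [x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]]

def J(x):                                  # its Jacobian
    return [[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]]

def mv(A, v): return [sum(a * b for a, b in zip(row, v)) for row in A]
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(v): return math.sqrt(dot(v, v))

def cg(A, b, k):
    """k conjugate-gradient steps for A d = b (A symmetric positive definite)."""
    d, r = [0.0, 0.0], b[:]
    p = r[:]
    for _ in range(k):
        if dot(r, r) < 1e-30:
            break
        Ap = mv(A, p)
        alpha = dot(r, r) / dot(p, Ap)
        d = [di + alpha * pi for di, pi in zip(d, p)]
        rn = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        beta = dot(rn, rn) / dot(r, r)
        p = [ri + beta * pi for ri, pi in zip(rn, p)]
        r = rn
    return d

x = [3.0, 0.0]
for _ in range(20):
    Fx, Jx = F(x), J(x)
    if norm(Fx) < 1e-10:
        break
    JT = [[Jx[0][0], Jx[1][0]], [Jx[0][1], Jx[1][1]]]
    JTJ = [[dot(JT[i], JT[j]) for j in range(2)] for i in range(2)]
    d = cg(JTJ, mv(JT, [-f for f in Fx]), 2)   # approximate Newton direction
    t = 1.0                                     # backtracking linesearch (globalization)
    while norm(F([x[0] + t * d[0], x[1] + t * d[1]])) > (1 - 1e-4 * t) * norm(Fx) and t > 1e-8:
        t *= 0.5
    x = [x[0] + t * d[0], x[1] + t * d[1]]
print(x)
```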
Fluid intelligence and psychosocial outcome: from logical problem solving to social adaptation.
Huepe, David; Roca, María; Salas, Natalia; Canales-Johnson, Andrés; Rivera-Rei, Álvaro A; Zamorano, Leandro; Concepción, Aimée; Manes, Facundo; Ibañez, Agustín
2011-01-01
While fluid intelligence has proved to be central to executive functioning, logical reasoning and other frontal functions, the role of this ability in psychosocial adaptation has not been well characterized. A random-probabilistic sample of 2370 secondary school students completed measures of fluid intelligence (Raven's Progressive Matrices, RPM) and several measures of psychological adaptation: bullying (Delaware Bullying Questionnaire), domestic abuse of adolescents (Conflict Tactic Scale), drug intake (ONUDD), self-esteem (Rosenberg's Self Esteem Scale) and the Perceived Mental Health Scale (Spanish adaptation). Lower fluid intelligence scores were associated with physical violence, both in the role of victim and victimizer. Drug intake (especially of cannabis, cocaine and inhalants) and lower self-esteem were also associated with lower fluid intelligence. Finally, scores on the perceived mental health assessment were better when fluid intelligence scores were higher. Our results show evidence of a strong association between psychosocial adaptation and fluid intelligence, suggesting that the latter is not only central to executive functioning but also forms part of a more general capacity for adaptation to social contexts.
Self-aggregation in scaled principal component space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Chris H.Q.; He, Xiaofeng; Zha, Hongyuan
2001-10-05
Automatic grouping of voluminous data into meaningful structures is a challenging task frequently encountered in broad areas of science, engineering and information processing. These data clustering tasks are frequently performed in Euclidean space or a subspace chosen from principal component analysis (PCA). Here we describe a space obtained by a nonlinear scaling of PCA in which data objects self-aggregate automatically into clusters. Projection into this space gives sharp distinctions among clusters. Gene expression profiles of cancer tissue subtypes, Web hyperlink structure and Internet newsgroups are analyzed to illustrate interesting properties of the space.
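The pipeline described starts from a projection onto a PCA subspace. A minimal power-iteration sketch of that first step follows; the toy 2-D data and the single-component projection are illustrative, and the paper's nonlinear scaling of PCA is not reproduced here.

```python
import math

# two well-separated toy groups of 2-D points
data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
X = [(p[0] - mx, p[1] - my) for p in data]        # centered data

# 2x2 sample covariance matrix
C = [[sum(a[i] * a[j] for a in X) / n for j in range(2)] for i in range(2)]

v = [1.0, 0.0]
for _ in range(50):                                # power iteration -> first PC
    w = [C[0][0] * v[0] + C[0][1] * v[1], C[1][0] * v[0] + C[1][1] * v[1]]
    s = math.sqrt(w[0] ** 2 + w[1] ** 2)
    v = [w[0] / s, w[1] / s]

# 1-D projection onto the principal component; here the sign of the
# coordinate already separates the two groups
proj = [x[0] * v[0] + x[1] * v[1] for x in X]
print(proj)
```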
The causal perturbation expansion revisited: Rescaling the interacting Dirac sea
NASA Astrophysics Data System (ADS)
Finster, Felix; Grotz, Andreas
2010-07-01
The causal perturbation expansion defines the Dirac sea in the presence of a time-dependent external field. It yields an operator whose image generalizes the vacuum solutions of negative energy and thus gives a canonical splitting of the solution space into two subspaces. After giving a self-contained introduction to the ideas and techniques, we show that this operator is, in general, not idempotent. We modify the standard construction by a rescaling procedure giving a projector on the generalized negative-energy subspace. The resulting rescaled causal perturbation expansion uniquely defines the fermionic projector in terms of a series of distributional solutions of the Dirac equation. The technical core of the paper is to work out the combinatorics of the expansion in detail. It is also shown that the fermionic projector with interaction can be obtained from the free projector by a unitary transformation. We finally analyze the consequences of the rescaling procedure on the light-cone expansion.
Nguyen, Phuong H
2007-05-15
Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from e.g. molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly due to the fact that the principal components are not independent of each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors into their objective functions to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors should be known a priori and a complex convex problem must be solved. In this paper, we present a novel method to eliminate the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called L2-graph. The subspace clustering and subspace learning algorithms are developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including L1-graph, low rank representation (LRR), latent LRR, least square regression, sparse subspace clustering, and locally linear representation.
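The intrasubspace projection dominance property can be checked numerically on a toy example: representing a sample over a dictionary with the minimum-l2-norm solution c = D^T (D D^T)^(-1) x gives larger coefficients on same-subspace atoms. The dictionary and query below are illustrative, not the paper's data.

```python
# dictionary atoms: two spanning the x-axis subspace, two the y-axis subspace
D = [(1.0, 0.0), (2.0, 0.0), (0.0, 1.0), (0.0, 3.0)]
x = (1.5, 0.0)                       # query lies in the x-axis subspace

# G = D D^T (2x2, since the ambient dimension is 2)
G = [[sum(d[i] * d[j] for d in D) for j in range(2)] for i in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]
y = [Ginv[0][0] * x[0] + Ginv[0][1] * x[1],
     Ginv[1][0] * x[0] + Ginv[1][1] * x[1]]

# minimum-norm coefficients: large over intrasubspace atoms, zero elsewhere
c = [d[0] * y[0] + d[1] * y[1] for d in D]
print(c)
```

The two coefficients over the x-axis atoms are nonzero while those over the y-axis atoms vanish, which is exactly the dominance pattern the L2-graph exploits to build its similarity graph.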
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) becomes increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, using the spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of the spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have proved the stability of the detection result.
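The orthogonal subspace projection score that LOSP and 3D-LOSP build on can be sketched as follows: project a pixel onto the orthogonal complement of a background subspace and use the residual norm as the anomaly score. For brevity a single unit-norm background basis vector is assumed; the vectors are illustrative, not real hyperspectral data.

```python
import math

def osp_score(basis, x):
    """||(I - b b^T) x|| for a single unit-norm background basis vector b."""
    coef = sum(b * xi for b, xi in zip(basis, x))       # b^T x
    resid = [xi - coef * b for b, xi in zip(basis, x)]  # orthogonal residual
    return math.sqrt(sum(r * r for r in resid))

s = 1.0 / math.sqrt(3.0)
background = [s, s, s]               # unit vector spanning the background subspace

print(osp_score(background, [2.0, 2.0, 2.0]))   # background-like pixel: score ~0
print(osp_score(background, [1.0, 0.0, 3.0]))   # anomalous pixel: large score
```

3D-LOSP computes such residuals along the height, width and spectral directions of a local cube and combines the three scores.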
General subspace learning with corrupted training data via graph embedding.
Bao, Bing-Kun; Liu, Guangcan; Hong, Richang; Yan, Shuicheng; Xu, Changsheng
2013-11-01
We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu
2018-02-01
A hybrid machine learning approach of Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model is a combination of the RSS method, which is known as an efficient ensemble technique, and CART, which is a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide-prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models, namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In the development of the model, important landslide-affecting factors related to geomorphology, geology and geo-environment were considered, namely slope angle, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) is the best compared with the other popular landslide models, namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that the RSSCART is a promising method for spatial landslide prediction.
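The Random Subspace idea can be sketched generically: each base learner is trained on a random subset of the features, and predictions are combined by majority vote. A 1-nearest-neighbour learner stands in for CART here, and the toy data (separable on every feature) is purely illustrative.

```python
import random

random.seed(7)

# toy training set: 4 features, class separable on each feature
train = [([0.0, 0.3, 0.1, 0.2], 0), ([0.4, 0.1, 0.3, 0.0], 0),
         ([5.0, 5.3, 5.1, 5.2], 1), ([5.4, 5.1, 5.3, 5.0], 1)]

def nn_predict(feats, query):
    """1-nearest-neighbour base learner using only the feature indices in feats."""
    def dist(x):
        return sum((x[i] - query[i]) ** 2 for i in feats)
    return min(train, key=lambda t: dist(t[0]))[1]

def rss_predict(query, n_models=11, subspace_size=2):
    """Random Subspace ensemble: each model sees a random feature subset."""
    votes = [nn_predict(random.sample(range(4), subspace_size), query)
             for _ in range(n_models)]
    return max(set(votes), key=votes.count)     # majority vote

print(rss_predict([0.2, 0.2, 0.2, 0.2]))
print(rss_predict([5.2, 5.2, 5.2, 5.2]))
```

In the paper's setting, the feature subsets would range over the landslide-affecting factors and the base learner would be a CART tree.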
Randomized Subspace Learning for Proline Cis-Trans Isomerization Prediction.
Al-Jarrah, Omar Y; Yoo, Paul D; Taha, Kamal; Muhaidat, Sami; Shami, Abdallah; Zaki, Nazar
2015-01-01
Proline residues are a common source of kinetic complications during folding. The X-Pro peptide bond is the only peptide bond for which the stability of the cis and trans conformations is comparable. The cis-trans isomerization (CTI) of X-Pro peptide bonds is a widely recognized rate-limiting factor, which can not only induce additional slow phases in protein folding but also modify the millisecond and sub-millisecond dynamics of the protein. An accurate computational prediction of proline CTI is of great importance for the understanding of protein folding, splicing, cell signaling, and transmembrane active transport in both humans and animals. In our earlier work, we successfully developed a biophysically motivated proline CTI predictor utilizing a novel tree-based consensus model with a powerful metalearning technique and achieved 86.58 percent Q2 accuracy and 0.74 Mcc, a better result than the 70-73 percent Q2 accuracies reported in the literature on the well-referenced benchmark dataset. In this paper, we describe experiments with novel randomized subspace learning and bootstrap seeding techniques as an extension to our earlier work, the consensus models as well as entropy-based learning methods, to obtain better accuracy through a precise and robust learning scheme for proline CTI prediction.
Nonadiabatic holonomic quantum computation in decoherence-free subspaces.
Xu, G F; Zhang, J; Tong, D M; Sjöqvist, Erik; Kwek, L C
2012-10-26
Quantum computation that combines the coherence stabilization virtues of decoherence-free subspaces and the fault tolerance of geometric holonomic control is of great practical importance. Some schemes of adiabatic holonomic quantum computation in decoherence-free subspaces have been proposed in the past few years. However, nonadiabatic holonomic quantum computation in decoherence-free subspaces, which avoids a long run-time requirement but with all the robust advantages, remains an open problem. Here, we demonstrate how to realize nonadiabatic holonomic quantum computation in decoherence-free subspaces. By using only three neighboring physical qubits undergoing collective dephasing to encode one logical qubit, we realize a universal set of quantum gates.
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
2016-01-01
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling, such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
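One core idea in Krylov-subspace recycling can be sketched as follows: a basis W retained from earlier solves supplies a Galerkin initial guess x0 = W (W^T A W)^(-1) W^T b, and plain CG then finishes from that better start. The matrix, right-hand side, and recycled basis below are all assumed for illustration; the paper's POD-based truncation of W is not reproduced.

```python
import math

def mv(A, v): return [sum(a * x for a, x in zip(row, v)) for row in A]
def dot(u, v): return sum(a * b for a, b in zip(u, v))

A = [[4.0, 1.0, 0.0],           # toy symmetric positive definite system
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]    # recycled subspace (assumed given)

# Galerkin projection: solve the small system (W^T A W) y = W^T b
G = [[dot(wi, mv(A, wj)) for wj in W] for wi in W]
rhs = [dot(wi, b) for wi in W]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
y = [(G[1][1] * rhs[0] - G[0][1] * rhs[1]) / det,
     (G[0][0] * rhs[1] - G[1][0] * rhs[0]) / det]
x = [y[0] * W[0][i] + y[1] * W[1][i] for i in range(3)]

r = [bi - ai for bi, ai in zip(b, mv(A, x))]
print(math.sqrt(dot(r, r)), math.sqrt(dot(b, b)))   # projected residual < ||b||

p = r[:]
for _ in range(3):                        # CG finishes from the recycled start
    if dot(r, r) < 1e-24:
        break
    Ap = mv(A, p)
    alpha = dot(r, r) / dot(p, Ap)
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    rn = [ri - alpha * ai for ri, ai in zip(r, Ap)]
    beta = dot(rn, rn) / dot(r, r)
    p = [ri + beta * pi for ri, pi in zip(rn, p)]
    r = rn
print(math.sqrt(dot(r, r)))
```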
Evolutionary branching under multi-dimensional evolutionary constraints.
Ito, Hiroshi; Sasaki, Akira
2016-10-21
The fitness of an existing phenotype and of a potential mutant should generally depend on the frequencies of other existing phenotypes. Adaptive evolution driven by such frequency-dependent fitness functions can be analyzed effectively using adaptive dynamics theory, assuming rare mutation and asexual reproduction. When possible mutations are restricted to certain directions due to developmental, physiological, or physical constraints, the resulting adaptive evolution may be restricted to subspaces (constraint surfaces) with fewer dimensionalities than the original trait spaces. To analyze such dynamics along constraint surfaces efficiently, we develop a Lagrange multiplier method in the framework of adaptive dynamics theory. On constraint surfaces of arbitrary dimensionalities described with equality constraints, our method efficiently finds local evolutionarily stable strategies, convergence stable points, and evolutionary branching points. We also derive the conditions for the existence of evolutionary branching points on constraint surfaces when the shapes of the surfaces can be chosen freely. Copyright © 2016 Elsevier Ltd. All rights reserved.
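The effect of the Lagrange multiplier in constrained adaptive dynamics can be sketched numerically: the selection gradient is projected onto the tangent space of the constraint surface g(x)=0 and the trait is retracted back onto the surface after each step. The fitness function, the unit-circle constraint, and the step size below are all illustrative, not from the paper.

```python
import math

def grad_fitness(x):          # toy (frequency-independent) fitness f(x, y) = x + 2y
    return [1.0, 2.0]

def grad_constraint(x):       # constraint surface g(x, y) = x^2 + y^2 - 1 = 0
    return [2.0 * x[0], 2.0 * x[1]]

x = [1.0, 0.0]                # start on the constraint surface
for _ in range(500):
    gf, gc = grad_fitness(x), grad_constraint(x)
    # Lagrange multiplier removing the normal component of the gradient
    lam = (gf[0] * gc[0] + gf[1] * gc[1]) / (gc[0] ** 2 + gc[1] ** 2)
    step = [gf[i] - lam * gc[i] for i in range(2)]    # tangential component
    x = [x[i] + 0.01 * step[i] for i in range(2)]
    n = math.sqrt(x[0] ** 2 + x[1] ** 2)
    x = [xi / n for xi in x]  # retract back onto the circle

print(x)   # approaches the constrained optimum (1, 2)/sqrt(5)
```

The evolutionary singular points the authors classify are exactly the points where this tangential component vanishes.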
Position, Location, Place and Area: An Indoor Perspective
NASA Astrophysics Data System (ADS)
Sithole, George; Zlatanova, Sisi
2016-06-01
Over the last decade, harnessing the commercial potential of smart mobile devices in indoor environments has spurred interest in indoor mapping and navigation. Users experience indoor environments differently. For this reason navigational models have to be designed to adapt to a user's personality, and to reflect as many cognitive maps as possible. This paper presents an extension of a previously proposed framework. In this extension the notion of placement is accounted for, thereby enabling one aspect of the 'personalised indoor experience'. In the paper, firstly, referential expressions are used as a tool to discuss the different ways of thinking of placement within indoor spaces. Next, placement is expressed in terms of the concepts of Position, Location, Place and Area. Finally, the previously proposed framework is extended to include these concepts of placement. An example is provided of the use of the extended framework. Notable characteristics of the framework are: (1) Sub-spaces, resources and agents can simultaneously possess different types of placement, e.g., a person in a room can have an xyz position and a location defined by the room number. While these entities can simultaneously have different forms of placement, only one is dominant. (2) Sub-spaces, resources and agents are capable of possessing modifiers that alter their access and usage. (3) Sub-spaces inherit the modifiers of the resources or agents contained in them. (4) Unlike conventional navigational models which treat resources and obstacles as different types of entities, in the proposed framework there are only resources, and whether a resource is an obstacle is determined by a modifier that determines whether a user can access the resource. The power of the framework is that it blends the geometry and topology of space and the influence of human activity within sub-spaces together with the different notions of placement in a way that is simple and yet very flexible.
Tørmoen, A J; Grøholt, B; Haga, E; Brager-Larsen, A; Miller, A; Walby, F; Stanley, B; Mehlum, L
2014-01-01
We evaluated the feasibility of DBT training, adherence, and retention in preparation for a randomized controlled trial of Dialectical Behavior Therapy (DBT) adapted for Norwegian adolescents engaging in self-harming behavior and diagnosed with features of borderline personality disorder. Therapists were intensively trained and evaluated for adherence. Adherence scores, treatment retention, and present and previous self-harm were assessed. Twenty-seven patients were included (mean age 15.7 years), all of them with recent self-harming behaviors and at least 3 features of Borderline Personality Disorder. Therapists were adherent and 21 (78%) patients completed the whole treatment. Three subjects reported self-harm at the end of treatment, and urges to self-harm decreased. At follow up, 7 of 10 subjects reported no self-harm. DBT was found to be well accepted and feasible. Randomized controlled trials are required to test the effectiveness of DBT for adolescents.
An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks
Abba, Sani; Lee, Jeong-A
2015-01-01
We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236
Hypercyclic subspaces for Frechet space operators
NASA Astrophysics Data System (ADS)
Petersson, Henrik
2006-07-01
A continuous linear operator T on a space X is hypercyclic if there is an x in X such that the orbit {T^n x} is dense, and such a vector x is said to be hypercyclic for T. Recent progress shows that it is possible to characterize Banach space operators that have a hypercyclic subspace, i.e., an infinite dimensional closed subspace consisting, except for zero, of hypercyclic vectors. The following is known to hold: a Banach space operator T has a hypercyclic subspace if there is a sequence (n_i) and an infinite dimensional closed subspace E such that T is hereditarily hypercyclic for (n_i) and T^(n_i) -> 0 pointwise on E. In this note we extend this result to the setting of Frechet spaces that admit a continuous norm, and study some applications for important function spaces. As an application we also prove that any infinite dimensional separable Frechet space with a continuous norm admits an operator with a hypercyclic subspace.
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key elements of the IDR algorithms consist in the construction of the embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to some fixed subspace. Other independent approaches to the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures that present various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in the Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
Self: an adaptive pressure arising from self-organization, chaotic dynamics, and neural Darwinism.
Bruzzo, Angela Alessia; Vimal, Ram Lakhan Pandey
2007-12-01
In this article, we establish a model to delineate the emergence of "self" in the brain making recourse to the theory of chaos. Self is considered as the subjective experience of a subject. As essential ingredients of subjective experiences, our model includes wakefulness, re-entry, attention, memory, and proto-experiences. The stability as stated by chaos theory can potentially describe the non-linear function of "self" as sensitive to initial conditions and can characterize it as underlying order from apparently random signals. Self-similarity is discussed as a latent menace of a pathological confusion between "self" and "others". Our test hypothesis is that (1) consciousness might have emerged and evolved from a primordial potential or proto-experience in matter, such as the physical attractions and repulsions experienced by electrons, and (2) "self" arises from chaotic dynamics, self-organization and selective mechanisms during ontogenesis, while emerging post-ontogenically as an adaptive pressure driven by both volume and synaptic-neural transmission and influencing the functional connectivity of neural nets (structure).
The effect of self-distancing on adaptive versus maladaptive self-reflection in children.
Kross, Ethan; Duckworth, Angela; Ayduk, Ozlem; Tsukayama, Eli; Mischel, Walter
2011-10-01
Although children and adolescents vary in their chronic tendencies to adaptively versus maladaptively reflect over negative feelings, the psychological mechanisms underlying these different types of self-reflection among youngsters are unknown. We addressed this issue in the present research by examining the role that self-distancing plays in distinguishing adaptive versus maladaptive self-reflection among an ethnically and socioeconomically diverse sample of fifth-grade public schoolchildren. Children were randomly assigned to analyze their feelings surrounding a recent anger-related interpersonal experience from either a self-immersed or self-distanced perspective. They then rated their negative affect and described in writing the stream of thoughts they experienced when they analyzed their feelings. Children's stream-of-thought essays were content-analyzed for the presence of recounting statements, reconstruing statements, and blame attributions. Path analyses indicated that children who analyzed their feelings from a self-distanced perspective focused significantly less on recounting the "hot," emotionally arousing features of their memory (i.e., what happened to me?) and relatively more on reconstruing their experience. This shift in thought content (less recounting and more reconstruing) led children in the self-distanced group to blame the other person involved in their recalled experience significantly less, which in turn led them to display significantly lower levels of emotional reactivity. These findings help delineate the psychological mechanisms that distinguish adaptive versus maladaptive forms of self-reflection over anger experiences in children. The basic findings and their clinical implications are discussed.
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
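As a rough illustration of the series-of-linear-programs idea for the hyperplane case, the sketch below (an assumption-laden toy, not the authors' algorithm) fits d least-absolute-deviations regressions, one per coordinate treated as the response, each solved as a linear program, and keeps the best fit:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least-absolute-deviations regression y ~ X @ beta + c via a linear program."""
    n, p = X.shape
    # Variables: [beta (p), c (1), t (n)]; minimize sum(t), the L1 residual
    cost = np.r_[np.zeros(p + 1), np.ones(n)]
    A = np.c_[X, np.ones(n)]
    # Encode |X beta + c - y| <= t as two one-sided inequalities
    A_ub = np.r_[np.c_[A, -np.eye(n)], np.c_[-A, -np.eye(n)]]
    b_ub = np.r_[y, -y]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (p + 1) + [(0, None)] * n)
    return res.x[:p], res.x[p], res.fun

def l1_hyperplane(X):
    """Pick the best of d LAD fits, each using one coordinate as the response."""
    d = X.shape[1]
    best = None
    for j in range(d):
        idx = [k for k in range(d) if k != j]
        beta, c, e = lad_fit(X[:, idx], X[:, j])
        if best is None or e < best[0]:
            best = (e, j, beta, c)
    return best

rng = np.random.default_rng(0)
# Points near the plane x2 = 2*x0 - x1 + 1, plus one gross outlier
P = rng.normal(size=(40, 2))
X = np.c_[P, 2 * P[:, 0] - P[:, 1] + 1 + 0.01 * rng.normal(size=40)]
X[0] = [0.0, 0.0, 50.0]  # outlier the L1 fit should shrug off
err, j, beta, c = l1_hyperplane(X)
```

The outlier inflates the objective by a bounded amount rather than tilting the fit, which is the robustness property the abstract refers to.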
Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla
2013-12-01
To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good-quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate, then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.
Kinetics of diffusion-controlled annihilation with sparse initial conditions
Ben-Naim, Eli; Krapivsky, Paul
2016-12-16
Here, we study diffusion-controlled single-species annihilation with sparse initial conditions. In this random process, particles undergo Brownian motion, and when two particles meet, both disappear. We focus on sparse initial conditions where particles occupy a subspace of dimension δ that is embedded in a larger space of dimension d. We find that the co-dimension Δ = d - δ governs the behavior. All particles disappear when the co-dimension is sufficiently small, Δ ≤ 2; otherwise, a finite fraction of particles survive indefinitely. We establish the asymptotic behavior of the probability S(t) that a test particle survives until time t. When the subspace is a line, δ = 1, we find inverse logarithmic decay, S ~ (ln t)^{-1}, in three dimensions, and a modified power-law decay, S ~ (ln t) t^{-1/2}, in two dimensions. In general, the survival probability decays algebraically when Δ < 2, and there is an inverse logarithmic decay at the critical co-dimension Δ = 2.
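The setup can be mimicked with a toy lattice simulation (illustrative parameters, not the authors' analysis): particles start on a line (δ = 1) inside a two-dimensional torus, hop randomly, and annihilate pairwise on contact:

```python
import numpy as np

rng = np.random.default_rng(1)
d, L, steps = 2, 200, 2000
# Sparse initial condition: particles on a line (delta = 1) inside a d = 2 torus
pos = np.array([(x, L // 2) for x in range(0, L, 2)])

counts = [len(pos)]
for _ in range(steps):
    # Each particle hops to a random nearest neighbour (lattice Brownian motion)
    axis = rng.integers(0, d, size=len(pos))
    step = rng.choice([-1, 1], size=len(pos))
    pos[np.arange(len(pos)), axis] = (pos[np.arange(len(pos)), axis] + step) % L
    # Pairwise annihilation: particles sharing a site vanish two at a time
    keys = pos[:, 0] * L + pos[:, 1]
    _, inv, cnt = np.unique(keys, return_inverse=True, return_counts=True)
    keep = np.zeros(len(pos), dtype=bool)
    for site in np.flatnonzero(cnt > 0):
        members = np.flatnonzero(inv == site)
        keep[members[:cnt[site] % 2]] = True  # odd occupancy leaves one survivor
    pos = pos[keep]
    counts.append(len(pos))

survival = counts[-1] / counts[0]
```

Tracking `counts` over much longer runs (and averaging over realizations) is how one would probe the predicted S ~ (ln t) t^{-1/2} decay for this Δ = 1 geometry.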
Yu, Hualong; Ni, Jun
2014-01-01
Training classifiers on skewed data can be technically challenging, and the task becomes harder still when the data are simultaneously high-dimensional. Such skewed data often appear in the biomedical field. In this study, we address this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to improve the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria, and thus can be regarded as an effective and efficient tool for high-dimensional and imbalanced biomedical data.
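The general recipe (a balanced bootstrap of the majority class plus a random feature subspace per base SVM) can be sketched as follows; this is a generic asBagging/RS-style baseline on synthetic data, not the paper's FSS strategy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=100, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
train, test = np.arange(400), np.arange(400, 600)

minority = train[y[train] == 1]
majority = train[y[train] == 0]

models = []
for _ in range(25):
    # Asymmetric bagging: subsample the majority class down to the minority size
    maj = rng.choice(majority, size=len(minority), replace=False)
    rows = np.r_[minority, maj]
    # Random subspace: each base SVM sees a random subset of the features
    cols = rng.choice(X.shape[1], size=40, replace=False)
    clf = SVC(kernel="rbf", gamma="scale").fit(X[np.ix_(rows, cols)], y[rows])
    models.append((clf, cols))

# Majority vote of the ensemble
votes = np.mean([m.predict(X[np.ix_(test, cols)]) for m, cols in models], axis=0)
pred = (votes >= 0.5).astype(int)
acc = np.mean(pred == y[test])
```

Each base learner trains on balanced data (addressing the skew) while the differing feature subsets supply the diversity that bagging needs.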
Quantum subsystems: Exploring the complementarity of quantum privacy and error correction
NASA Astrophysics Data System (ADS)
Jochym-O'Connor, Tomas; Kribs, David W.; Laflamme, Raymond; Plosker, Sarah
2014-09-01
This paper addresses and expands on the contents of the recent Letter [Phys. Rev. Lett. 111, 030502 (2013), 10.1103/PhysRevLett.111.030502] discussing private quantum subsystems. Here we prove several previously presented results, including a condition for a given random unitary channel to not have a private subspace (although this does not mean that private communication cannot occur, as was previously demonstrated via private subsystems) and algebraic conditions that characterize when a general quantum subsystem or subspace code is private for a quantum channel. These conditions can be regarded as the private analog of the Knill-Laflamme conditions for quantum error correction, and we explore how the conditions simplify in some special cases. The bridge between quantum cryptography and quantum error correction provided by complementary quantum channels motivates the study of a new, more general definition of quantum error-correcting code, and we initiate this study here. We also consider the concept of complementarity for the general notion of a private quantum subsystem.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interferences that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial- and time-domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated by using the computer simulation, and using two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.
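The core linear algebra of the two projections and the row-span intersection can be sketched with synthetic data (the spatial basis U, dimensions, and thresholds below are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_t, k = 32, 500, 8

# Assumed spatial signal subspace (in practice derived from the source model)
U = np.linalg.qr(rng.normal(size=(n_ch, k)))[0]

signal = U @ rng.normal(size=(k, 3)) @ rng.normal(size=(3, n_t))  # inside span(U)
sines = np.sin(np.outer([1.0, 2.3], np.linspace(0.0, 20.0, n_t)))
interference = rng.normal(size=(n_ch, 2)) @ sines  # leaks into and out of span(U)
B = signal + interference + 0.01 * rng.normal(size=(n_ch, n_t))

P = U @ U.T
B_in, B_out = P @ B, B - P @ B          # dual projections of the measured data

def row_basis(M, rel_tol=0.02):
    """Orthonormal basis for the dominant row span of M."""
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[s > rel_tol * s[0]]

V1, V2 = row_basis(B_in), row_basis(B_out)
# Cosines of principal angles between the two row spans; values near 1
# mark the common time-domain interference subspace
_, s, Wt = np.linalg.svd(V1 @ V2.T, full_matrices=False)
common = Wt[s > 0.9] @ V2
B_clean = B - (B @ common.T) @ common   # project the interference out
```

The key point mirrored here is that the interference, unlike the signal, appears in both the inside and outside projections, so it shows up as near-unit principal cosines between the two row spans.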
Modelling Metrics for Mine Counter Measure Operations
2014-08-01
© Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2014. […] a random search derived by Koopman is widely used, yet it assumes no angular dependence (Ref [10]). […] Node Placement in Sensor Localization by Optimization of Subspace Principal Angles, in Proceedings of the IEEE International Conference on Acoustics…
Complexity and health professions education: a basic glossary.
Mennin, Stewart
2010-08-01
The study of health professions education in the context of complexity science and complex adaptive systems involves different concepts and terminology that are likely to be unfamiliar to many health professions educators. A list of selected key terms and definitions from the literature of complexity science is provided to assist readers to navigate familiar territory from a different perspective. Terms defined include agent, attractor, bifurcation, chaos, co-evolution, collective variable, complex adaptive systems, complexity science, deterministic systems, dynamical system, edge of chaos, emergence, equilibrium, far from equilibrium, fuzzy boundaries, linear system, non-linear system, random, self-organization and self-similarity.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte-Carlo simulations.
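A minimal sketch of such a self-adaptive evolutionary algorithm (a standard (1, λ) evolution strategy on a unimodal sphere function; parameters are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

def self_adaptive_es(f, x0, sigma0=1.0, lam=10, generations=300, tau=0.3):
    """(1, lambda)-ES: each offspring mutates its own step size (control
    parameter) before mutating the fitness parameters; selection acts on
    fitness only, so the step size is selected indirectly."""
    x, sigma = np.asarray(x0, float), sigma0
    history = []
    for _ in range(generations):
        # Control parameters mutate log-normally, fitness parameters normally
        sigmas = sigma * np.exp(tau * rng.normal(size=lam))
        offspring = x + sigmas[:, None] * rng.normal(size=(lam, len(x)))
        best = np.argmin([f(o) for o in offspring])
        x, sigma = offspring[best], sigmas[best]
        history.append(f(x))
    return x, history

sphere = lambda v: float(np.sum(v * v))
x, hist = self_adaptive_es(sphere, x0=[5.0, -3.0, 2.0])
```

Plotting `hist` on a log scale shows the roughly exponential decrease that the abstract's convergence-velocity result describes.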
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of the subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the amount of stored subspace information. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling (T2) statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation of the subspace based on the T2 statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and increase the correct rate of fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and improve the robustness of the principal component model, and establishes the relationship between the state variables and fault detection indicators for fault isolation.
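The two detection statistics are standard for PCA-based monitoring and can be sketched on synthetic data (this is generic PCA monitoring, not the paper's improved MPCA):

```python
import numpy as np

rng = np.random.default_rng(4)

# Training data: normal operating conditions, 5 latent factors in 20 variables
W = rng.normal(size=(5, 20))
X = rng.normal(size=(500, 5)) @ W + 0.1 * rng.normal(size=(500, 20))
mu = X.mean(axis=0)
Xc = X - mu

# Principal component model retaining k components
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
P, lam = Vt[:k].T, (s[:k] ** 2) / (len(X) - 1)   # loadings, PC variances

def t2_spe(x):
    xc = x - mu
    t = xc @ P                       # scores in the model subspace
    t2 = float(np.sum(t * t / lam))  # Hotelling T2 inside the subspace
    resid = xc - t @ P.T
    spe = float(resid @ resid)       # squared prediction error off the subspace
    return t2, spe

t2_ok, spe_ok = t2_spe(X[0])
# A fault that breaks the latent structure inflates SPE
fault = X[0] + 3.0 * rng.normal(size=20)
t2_f, spe_f = t2_spe(fault)
```

In practice both statistics are compared against control limits estimated from the training data, and the per-variable terms of the SPE residual give the contribution analysis used for isolation.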
Parallel iterative methods for sparse linear and nonlinear equations
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable, when solving linear as well as nonlinear systems of equations. There have been several different approaches taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.
Design of fuzzy system by NNs and realization of adaptability
NASA Technical Reports Server (NTRS)
Takagi, Hideyuki
1993-01-01
The issue of designing and tuning fuzzy membership functions by neural networks (NNs) began with NN-driven Fuzzy Reasoning in 1988. NN-driven fuzzy reasoning involves an NN embedded in the fuzzy system which generates membership values. In conventional fuzzy system design, the membership functions are hand-crafted by trial and error for each input variable. In contrast, NN-driven fuzzy reasoning considers several variables simultaneously and can design a multidimensional, nonlinear membership function for the entire subspace.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace K_m = span{b, Ab, …, A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
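A minimal sketch of this projection idea: Arnoldi orthogonalization of the Krylov basis followed by a GMRES-style least-squares solve of the small projected problem (the matrix below is an illustrative random example):

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V of span{b, Ab, ..., A^(m-1) b} and the small
    Hessenberg matrix H satisfying A V[:, :m] = V H."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def gmres_like(A, b, m):
    """Approximate x from the m-dimensional projected least-squares problem."""
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(m + 1)
    e1[0] = np.linalg.norm(b)
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return V[:, :m] @ y

rng = np.random.default_rng(5)
n = 50
A = np.eye(n) + 0.05 * rng.normal(size=(n, n))  # well-conditioned, nonsymmetric
b = rng.normal(size=n)
x = gmres_like(A, b, m=30)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The size-N problem is thus replaced by a size-m least-squares problem, exactly the dimension reduction the abstract describes.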
Dolan, Brigid M; Yialamas, Maria A; McMahon, Graham T
2015-09-01
There is limited research on whether online formative self-assessment and learning can change the behavior of medical professionals. We sought to determine if an adaptive longitudinal online curriculum in bone health would improve resident physicians' knowledge, and change their behavior regarding prevention of fragility fractures in women. We used a randomized control trial design in which 50 internal medicine resident physicians at a large academic practice were randomized to either receive a standard curriculum in bone health care alone, or to receive it augmented with an adaptive, longitudinal, online formative self-assessment curriculum delivered via multiple-choice questions. Outcomes were assessed 10 months after the start of the intervention. Knowledge outcomes were measured by a multiple-choice question examination. Clinical outcomes were measured by chart review, including bone density screening rate, calculation of the fracture risk assessment tool (FRAX) score, and rate of appropriate bisphosphonate prescription. Compared to the control group, residents participating in the intervention had higher scores on the knowledge test at the end of the study. Bone density screening rates and appropriate use of bisphosphonates were significantly higher in the intervention group compared with the control group. FRAX score reporting did not differ between the groups. Residents participating in a novel adaptive online curriculum outperformed peers in knowledge of fragility fracture prevention and care practices to prevent fracture. Online adaptive education can change behavior to improve patient care.
Beyond union of subspaces: Subspace pursuit on Grassmann manifold for data representation
Shen, Xinyue; Krim, Hamid; Gu, Yuantao
2016-03-01
Discovering the underlying structure of a high-dimensional signal or big data has always been a challenging topic, and has become harder to tackle especially when the observations are exposed to arbitrary sparse perturbations. In this paper, built on the model of a union of subspaces (UoS) with sparse outliers and inspired by a basis pursuit strategy, we exploit the fundamental structure of a Grassmann manifold and propose a new technique for pursuing the subspaces systematically by solving a non-convex optimization problem using the alternating direction method of multipliers. The problem is further complicated by non-convex constraints on the Grassmann manifold, as well as by the bilinearity in the penalty caused by the subspace bases and coefficients. Nevertheless, numerical experiments verify that the proposed algorithm, which provides elegant solutions to the sub-problems in each step, is able to de-couple the subspaces and pursue each of them under time-efficient parallel computation.
Holland's SDS Applied to Chinese College Students: A Revisit to Cross-Culture Adaptation
ERIC Educational Resources Information Center
Kong, Jin; Xu, Yonghong Jade; Zhang, Hao
2016-01-01
In this study, data collected from 875 college freshman and sophomore students enrolled in a 4-year university in central China are used to examine the applicability and validity of a Chinese version of Holland's Self-Directed Search (SDS) that was adapted in the 1990s. The total sample was randomly divided into two groups. Data from the first…
Catalytic micromotor generating self-propelled regular motion through random fluctuation.
Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa
2013-07-21
Most of the current studies on nano/microscale motors to generate regular motion have adopted the strategy of fabricating a composite of different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model taking into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E
2017-06-01
The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.
Zeba, Augustin Nawidimbasba; Yaméogo, Marceline Téné; Tougouma, Somnoma Jean-Baptiste; Kassié, Daouda; Fournet, Florence
2017-01-01
Background: Unplanned urbanization plays a key role in chronic disease growth. This population-based cross-sectional study assessed the occurrence of cardiometabolic risk factors in Bobo-Dioulasso and their association with urbanization conditions. Methods: Through spatial sampling, four Bobo-Dioulasso sub-spaces were selected for a population survey to measure the adult health status. Yéguéré, Dogona, Tounouma and Secteur 25 had very different urbanization conditions (position within the city; time of creation and healthcare structure access). The sample size was estimated at 1000 households (250 for each sub-space) in which one adult (35 to 59-year-old) was randomly selected. Finally, 860 adults were surveyed. Anthropometric, socioeconomic and clinical data were collected. Arterial blood pressure was measured and blood samples were collected to assess glycemia. Results: Weight, body mass index and waist circumference (mean values) and serum glycemia (83.4 mg/dL ± 4.62 mmol/L) were significantly higher in Tounouma, Dogona, and Secteur 25 than in Yéguéré; the poorest and most rural-like sub-space (p = 0.001). Overall, 43.2%, 40.5%, 5.3% and 60.9% of participants had overweight, hypertension, hyperglycemia and one or more cardiometabolic risk markers, respectively. Conclusions: Bobo-Dioulasso is unprepared to face this public health issue and urgent responses are needed to reduce the health risks associated with unplanned urbanization. PMID:28375173
Bauer, Robert; Fels, Meike; Royter, Vladislav; Raco, Valerio; Gharabaghi, Alireza
2016-09-01
Considering self-rated mental effort during neurofeedback may improve training of brain self-regulation. Twenty-one healthy, right-handed subjects performed kinesthetic motor imagery of opening their left hand, while threshold-based classification of beta-band desynchronization resulted in proprioceptive robotic feedback. The experiment consisted of two blocks in a cross-over design. The participants rated their perceived mental effort nine times per block. In the adaptive block, the threshold was adjusted on the basis of these ratings whereas adjustments were carried out at random in the other block. Electroencephalography was used to examine the cortical activation patterns during the training sessions. The perceived mental effort was correlated with the difficulty threshold of neurofeedback training. Adaptive threshold-setting reduced mental effort and increased the classification accuracy and positive predictive value. This was paralleled by an inter-hemispheric cortical activation pattern in low frequency bands connecting the right frontal and left parietal areas. Optimal balance of mental effort was achieved at thresholds significantly higher than maximum classification accuracy. Rating of mental effort is a feasible approach for effective threshold-adaptation during neurofeedback training. Closed-loop adaptation of the neurofeedback difficulty level facilitates reinforcement learning of brain self-regulation. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Modulated Hebb-Oja learning rule--a method for principal subspace analysis.
Jankovic, Marko V; Ogawa, Hidemitsu
2006-03-01
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace. The principal component subspace is the method that will be analyzed. Compared to some other well-known methods for yielding the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from the biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to make an individual efficacy modification. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, could be seen as good features of the proposed method.
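For context, the classical Oja subspace rule mentioned above can be sketched in a few lines (this is the baseline rule, not the MHO method itself; data and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Data with a dominant 2-D principal subspace inside 5 dimensions
C = np.diag([5.0, 4.0, 0.1, 0.1, 0.1])
X = rng.normal(size=(20000, 5)) @ np.sqrt(C)

W = 0.1 * rng.normal(size=(5, 2))   # weights: 5 inputs -> 2 output neurons
eta = 0.01
for x in X:
    y = W.T @ x
    # Oja subspace rule: Hebbian term minus a decay that keeps W bounded
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))

# Compare the learned span with the true principal subspace (first two axes)
Q = np.linalg.qr(W)[0]
true = np.eye(5)[:, :2]
overlap = np.linalg.norm(true.T @ Q)  # close to sqrt(2) when the spans coincide
```

Note that this rule converges to the principal *subspace* rather than the individual principal components, which is exactly the distinction the abstract draws.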
Weinberg, Seth H.; Smith, Gregory D.
2012-01-01
Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10⁻¹⁷ liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of the fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume was unrealistically large and/or the kinetics of the calcium binding were sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
Dimension Reduction With Extreme Learning Machine.
Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou
2016-08-01
Data may often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoders (AE), is to reduce the noise or irrelevant information in the data. The features of PCA (eigenvectors) and of a linear AE are not able to represent data as parts (e.g., the nose in a face image). On the other hand, NMF and non-linear AE are hampered by slow learning speed, and RP only represents a subspace of the original data. This paper introduces a dimension reduction framework which, to some extent, represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied-weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., input weights in additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition data set and the CIFAR-10 and NORB object recognition data sets show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
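The random projection (RP) baseline discussed above can be sketched concisely. This is a generic Gaussian RP in the Johnson-Lindenstrauss style (the sizes and the distance check are illustrative choices, not from the paper): a data-independent map that approximately preserves pairwise geometry, which is why it is fast but only represents a subspace of the original data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 1000, 200
X = rng.standard_normal((n, d))

# Entries scaled by 1/sqrt(k) so that E[||R^T x||^2] = ||x||^2.
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R                         # n x k reduced representation

# Pairwise distances survive the projection up to a small distortion.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
distortion = abs(proj / orig - 1)
```

Unlike PCA or an auto-encoder, nothing here is learned from the data, so there is no training cost at all; the price is that the projection is oblivious to class structure.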
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied to a CIR task, although subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative and geometrical information of labeled log images with the weakly similar information of unlabeled images to learn a reliable subspace. We formally formulate this problem as a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
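The structure of the randomized low-rank approximation algorithms discussed above can be sketched compactly. This is a generic range-finder randomized SVD in NumPy rather than the article's MATLAB implementation, with arbitrary illustrative sizes; oversampling and power iterations are the standard devices for robustness.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    """Sketch of a randomized truncated SVD: sample the range of A with a
    Gaussian test matrix, refine it with power iterations, then take an
    exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q = A @ rng.standard_normal((n, k + oversample))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(Q)                 # re-orthonormalize
        Q, _ = np.linalg.qr(A @ (A.T @ Q))     # power iteration step
    Q, _ = np.linalg.qr(Q)
    B = Q.T @ A                                # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Exactly rank-10 test matrix: the approximation should be essentially exact.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 200))
U, s, Vt = randomized_svd(A, 10)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The expensive operations are a handful of matrix-matrix products with A, which is what makes the randomized approach competitive with classical Lanczos-type iterations for large matrices.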
ERIC Educational Resources Information Center
Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.
2011-01-01
This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
Active Subspaces for Wind Plant Surrogate Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Ryan N; Quick, Julian; Dykes, Katherine L
Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
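The active subspaces idea can be demonstrated on a toy function (not the wind plant model): estimate the gradient outer-product matrix C = E[∇f ∇fᵀ] by Monte Carlo and look for a dominant eigenvector, which gives the low-dimensional direction that drives the output's variability. The ridge function below is a stand-in assumption chosen so that the answer is known.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

f = lambda X: np.sin(X @ w)                    # one-dimensional ridge function
grad = lambda X: np.cos(X @ w)[:, None] * w    # analytic gradients

X = rng.standard_normal((5000, d))
G = grad(X)
C = G.T @ G / len(X)                           # Monte Carlo estimate of C
evals, evecs = np.linalg.eigh(C)

# A single active direction dominates; it recovers w up to sign.
alignment = abs(evecs[:, -1] @ w)
```

A surrogate can then be fit in the scalar variable wᵀx instead of all d inputs, which is the dimension reduction exploited in the abstract.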
Interpretation of the MEG-MUSIC scan in biomagnetic source localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1993-09-01
MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks that are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyzed the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.
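The subspace-projection metric behind MUSIC can be illustrated on a generic narrowband uniform linear array rather than an MEG forward model (the array, source angles, and noise level below are all illustrative assumptions): the metric is the squared norm of the unit-norm scan vector's projection onto the signal subspace, and it peaks near unity at the true source parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
m, T = 8, 400
steer = lambda th: np.exp(1j * np.pi * np.arange(m) * np.sin(th))

# Two uncorrelated sources plus a small amount of sensor noise.
thetas_true = [-0.5, 0.4]
A = np.stack([steer(t) for t in thetas_true], axis=1)
S = rng.standard_normal((2, T))
N = 0.05 * (rng.standard_normal((m, T)) + 1j * rng.standard_normal((m, T)))
X = A @ S + N

R = X @ X.conj().T / T                     # spatial covariance matrix
evecs = np.linalg.eigh(R)[1]
Es = evecs[:, -2:]                         # signal subspace (rank = 2 sources)

def music_metric(th):
    """Squared norm of the projection of the scan vector onto Es."""
    a = steer(th) / np.linalg.norm(steer(th))
    return float(np.linalg.norm(Es.conj().T @ a) ** 2)

peak = music_metric(0.4)     # near unity at a true source direction
off = music_metric(0.0)      # well below unity away from the sources
```

With noise in the covariance estimate the peak falls slightly below unity, which is the degradation this abstract analyzes.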
Formulating face verification with semidefinite programming.
Yan, Shuicheng; Liu, Jianzhuang; Tang, Xiaoou; Huang, Thomas S
2007-11-01
This paper presents a unified solution to three unsolved problems existing in face verification with subspace learning techniques: selection of verification threshold, automatic determination of subspace dimension, and deducing feature fusing weights. In contrast to previous algorithms which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). With a certain verification threshold, this matrix is learned by a semidefinite programming approach, along with the constraints of the kindred pairs with similarity larger than the threshold, and inhomogeneous pairs with similarity smaller than the threshold. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, the weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters.
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, we can handle transform-invariance easily because we are able to use tangent vectors to approximate transformations. However, we cannot use tangent vectors in other types of images, such as color images. Hence, kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified through experiments on handwritten digit and color image classification.
2008-07-01
operators in Hilbert spaces. The homogenization procedure through successive multi-resolution projections is presented, followed by a numerical example of … is intended to be essentially self-contained. The mathematical (Greenberg 1978; Gilbert 2006) and signal processing (Strang and Nguyen 1995) … literature listed in the references. The ideas behind multi-resolution analysis unfold from the theory of linear operators in Hilbert spaces (Davis 1975)
Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva
2010-03-01
This paper investigates overall theoretical requirements for reducing the times required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system with and without model reduction to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace to reduce the time for system reconstruction and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was retained for this purpose. The impacts of the convective cooling from blood flow and the presence of sudden increase of perfusion in muscle and tumor were also simulated. By FAT, partial system reconstruction directly conducted in the full space of the physical variables such as phases and magnitudes of the heat sources cannot guarantee reconstructing the optimal system to determine the global optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few maximum eigenvectors of the true system matrix. By MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. 
When more than six sources are present, a nonlinear learning scheme theoretically requires fewer steps than a linear one; however, a finite number of iterative corrections is necessary within each single learning step of a nonlinear algorithm. Thus, the actual computational workload of a nonlinear algorithm is not necessarily less than that required by a linear algorithm. Based on the analysis presented herein, obtaining a unique global optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with the minimum-norm least-squares method with supplemental equations. One way to supplement equations is the inclusion of a method of model reduction.
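The virtual-source reduction described above can be sketched with a toy quadratic model: heating is represented by a Hermitian form uᴴMu in the complex source vector u, where M below is a random positive semi-definite stand-in, not a patient-specific bioheat model. Restricting the search to the span of the top-k eigenvectors of M preserves the global optimum while shrinking the optimization to k unknowns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 3                             # 10 antenna sources, 3 virtual sources
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = B @ B.conj().T                       # Hermitian positive semi-definite

evals, evecs = np.linalg.eigh(M)
V = evecs[:, -k:]                        # virtual-source subspace (top-k modes)
Mr = V.conj().T @ M @ V                  # reduced k x k system
w = np.linalg.eigh(Mr)[1][:, -1]         # optimum in the reduced space
u_reduced = V @ w                        # lift back to the physical sources

# Rayleigh quotient = average "heating" delivered by a unit-power drive.
heat = lambda u: float((u.conj() @ M @ u).real / (u.conj() @ u).real)
```

Because the dominant eigenvector lies inside the retained subspace, the reduced problem recovers the full-space optimum exactly here; in practice the quality of the reduction depends on how well the leading eigenvectors are estimated.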
Block-localized wavefunction (BLW) method at the density functional theory (DFT) level.
Mo, Yirong; Song, Lingchun; Lin, Yuchun
2007-08-30
The block-localized wavefunction (BLW) approach is an ab initio valence bond (VB) method incorporating the efficiency of molecular orbital (MO) theory. It can generate the wavefunction for a resonance structure or diabatic state self-consistently by partitioning the overall electrons and primitive orbitals into several subgroups and expanding each block-localized molecular orbital in only one subspace. Although block-localized molecular orbitals in the same subspace are constrained to be orthogonal (a feature of MO theory), orbitals between different subspaces are generally nonorthogonal (a feature of VB theory). The BLW method is particularly useful in the quantification of the electron delocalization (resonance) effect within a molecule and the charge-transfer effect between molecules. In this paper, we extend the BLW method to the density functional theory (DFT) level and implement the BLW-DFT method in the quantum mechanical software GAMESS. Test applications to the pi conjugation in the planar allyl radical and ions with the basis sets of 6-31G(d), 6-31+G(d), 6-311+G(d,p), and cc-pVTZ show that the basis set dependency is insignificant. In addition, the BLW-DFT method can also be used to elucidate the nature of intermolecular interactions. Examples of pi-cation interactions and solute-solvent interactions will be presented and discussed. By expressing each diabatic state with one BLW, the BLW method can be further used to study chemical reactions and electron-transfer processes whose potential energy surfaces are typically described by two or more diabatic states.
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and its several representative extensions.
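The motivation for the geometric mean can be shown with a small numerical sketch (the class means, identity covariance, and one-dimensional projection search below are illustrative assumptions, not the paper's experiments): for identical-covariance Gaussians, the pairwise KL divergence after projecting onto a unit vector w is ½(wᵀ(μᵢ − μⱼ))², and maximizing the arithmetic mean lets one large divergence mask a merged pair, while the geometric mean penalizes any small pairwise divergence.

```python
import numpy as np

def pairwise_kls(mus, w):
    """Pairwise KL divergences between identical-covariance (identity)
    Gaussians after projection onto the unit vector w."""
    w = w / np.linalg.norm(w)
    proj = mus @ w
    return np.array([0.5 * (proj[i] - proj[j]) ** 2
                     for i in range(len(mus)) for j in range(i + 1, len(mus))])

# Three classes: two far apart along x, two close together along y.
mus = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 2.0]])
thetas = np.linspace(0, np.pi, 361)
dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

arith = np.array([pairwise_kls(mus, w).mean() for w in dirs])
geom = np.array([np.exp(np.log(pairwise_kls(mus, w) + 1e-12).mean())
                 for w in dirs])

w_arith = dirs[arith.argmax()]  # dominated by the large divergences
w_geom = dirs[geom.argmax()]    # tilts toward y to keep classes 2 and 3 apart
```

The arithmetic-mean optimum stays almost on the x-axis and nearly merges the two close classes, whereas the geometric-mean optimum gives the small divergence real weight.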
An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST
NASA Astrophysics Data System (ADS)
Hang, Xu; Jun, Zhao
2018-05-01
Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data itself, thus avoiding the performance degradation caused by inertial-system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the position of the spectral center of the maximum eigenvector of each range cell, avoiding the computational burden of covariance matrix estimation and eigendecomposition; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar clutter, with performance similar to that of eigendecomposition algorithms but with a markedly lower computational load, making it easier to implement in real time.
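The core PAST recursion can be sketched independently of the radar setting (the data model below is a synthetic stationary stream, an illustrative assumption): an RLS-style update tracks the dominant r-dimensional subspace from one sample at a time, with no covariance matrix ever formed or eigendecomposed.

```python
import numpy as np

def past_update(W, P, x, beta=0.99):
    """One iteration of projection approximation subspace tracking (PAST),
    in the RLS form due to Yang (illustrative sketch)."""
    y = W.T @ x
    h = P @ y
    g = h / (beta + y @ h)
    P = (P - np.outer(g, h)) / beta
    P = (P + P.T) / 2              # keep P symmetric for numerical stability
    e = x - W @ y                  # residual outside the tracked subspace
    W = W + np.outer(e, g)
    return W, P

rng = np.random.default_rng(0)
d, r = 6, 2
scales = np.array([4.0, 3.0, 0.3, 0.2, 0.1, 0.1])  # two dominant directions
W = np.linalg.qr(rng.standard_normal((d, r)))[0]
P = np.eye(r)
for _ in range(4000):
    W, P = past_update(W, P, rng.standard_normal(d) * scales)

# Overlap with the true dominant subspace (the first two coordinates).
Q = np.linalg.qr(W)[0]
overlap = np.linalg.svd(Q[:2, :], compute_uv=False).min()
```

Each update costs O(dr) operations, versus O(d³) for a fresh eigendecomposition, which is the computational saving the abstract exploits.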
ERIC Educational Resources Information Center
Oswald, Tasha M.; Winder-Patel, Breanna; Ruder, Steven; Xing, Guibo; Stahmer, Aubyn; Solomon, Marjorie
2018-01-01
The purpose of this pilot randomized controlled trial was to investigate the acceptability and efficacy of the Acquiring Career, Coping, Executive control, Social Skills (ACCESS) Program, a group intervention tailored for young adults with autism spectrum disorder (ASD) to enhance critical skills and beliefs that promote adult functioning,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Ichitaro; Wu, Kesheng; Simon, Horst
2008-10-27
The original software package TRLan, [TRLan User Guide], page 24, implements the thick restart Lanczos method, [Wu and Simon 2001], page 24, for computing eigenvalues λ and their corresponding eigenvectors v of a symmetric matrix A: Av = λv. Its effectiveness in computing the exterior eigenvalues of a large matrix has been demonstrated, [LBNL-42982], page 24. However, its performance strongly depends on the user-specified dimension of a projection subspace. If the dimension is too small, TRLan suffers from slow convergence. If it is too large, the computational and memory costs become expensive. Therefore, to balance the solution convergence and costs, users must select an appropriate subspace dimension for each eigenvalue problem at hand. To free users from this difficult task, nu-TRLan, [LNBL-1059E], page 23, adjusts the subspace dimension at every restart such that optimal performance in solving the eigenvalue problem is automatically obtained. This document provides a user guide to the nu-TRLan software package. The original TRLan software package was implemented in Fortran 90 to solve symmetric eigenvalue problems using static projection subspace dimensions. nu-TRLan was developed in C and extended to solve Hermitian eigenvalue problems. It can be invoked using either a static or an adaptive subspace dimension. In order to simplify its use for TRLan users, nu-TRLan has interfaces and features similar to those of TRLan: (1) Solver parameters are stored in a single data structure called trl-info, Chapter 4 [trl-info structure], page 7. (2) Most of the numerical computations are performed by BLAS, [BLAS], page 23, and LAPACK, [LAPACK], page 23, subroutines, which allow nu-TRLan to achieve optimized performance across a wide range of platforms. (3) To solve eigenvalue problems on distributed memory systems, the message passing interface (MPI), [MPI forum], page 23, is used. The rest of this document is organized as follows.
In Chapter 2 [Installation], page 2, we provide an installation guide for the nu-TRLan software package. In Chapter 3 [Example], page 3, we present a simple nu-TRLan example program. In Chapter 4 [trl-info structure], page 7, and Chapter 5 [trlan subroutine], page 14, we describe the solver parameters and interfaces in detail. In Chapter 6 [Solver parameters], page 21, we discuss the selection of the user-specified parameters. In Chapter 7 [Contact information], page 22, we give the acknowledgements and contact information of the authors. In Chapter 8 [References], page 23, we list references to related works.
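Why the projection subspace dimension matters can be seen from the plain Lanczos process that TRLan/nu-TRLan build on. The sketch below builds a k-dimensional Krylov subspace with full reorthogonalization and returns the Ritz values (the thick-restart bookkeeping itself is omitted, and the test matrix with a known top eigenvalue is an illustrative assumption): larger k means better exterior-eigenvalue approximations but more memory and arithmetic.

```python
import numpy as np

def lanczos_ritz(A, k, seed=0):
    """Plain Lanczos with full reorthogonalization: returns the Ritz values
    of A from a k-dimensional Krylov projection subspace."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, k))
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    alpha, beta = np.zeros(k), np.zeros(k)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:
            k = j + 1
            break
        q = w / beta[j]
    T = (np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1)
         + np.diag(beta[:k - 1], -1))
    return np.linalg.eigvalsh(T)

# Symmetric test matrix with a known largest eigenvalue of 10.
rng = np.random.default_rng(1)
Qr = np.linalg.qr(rng.standard_normal((200, 200)))[0]
d = np.concatenate([[10.0, 9.0], rng.uniform(0.0, 1.0, 198)])
A = (Qr * d) @ Qr.T

ritz = lanczos_ritz(A, 30)       # k = 30 suffices for the exterior eigenvalue
```

With k = 30 the largest Ritz value already matches the true exterior eigenvalue; restarting strategies exist precisely to get this accuracy without letting k (and hence memory) grow.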
Bourzgui, F; Serhier, Z; Sebbar, M; Diouny, S; Bennani Othmani, M; Ngom, P I
2015-10-01
The aims of this study were to translate and culturally adapt the PIDAQ native English version into Moroccan Arabic, and to assess the psychometric characteristics of the version thereby obtained. The PIDAQ original English version was sequentially subjected to translation into Moroccan Arabic, back-translation into English, committee review, and pre-testing in 30 subjects seeking orthodontic treatment. The final Moroccan Arabic version further underwent an analysis of psychometric properties on a random sample of 99 adult subjects (84 females and 15 males, aged 20.97 ± 1.10 years). The intraclass correlation coefficient of the scores of the responses obtained after administration of the questionnaire twice at a 1-month interval to a random sample of 30 subjects ranged from 0.63 for "Self-confidence" to 0.85 for "Social Impact". Cronbach α coefficients ranging from 0.78 for "Aesthetic Concerns" to 0.87 for "Self-confidence" were obtained; the different subscales of the Moroccan Arabic version of the PIDAQ showed good correlation with the perception of aesthetics and orthodontic treatment need. The results of the present study indicate that the Moroccan Arabic version of the PIDAQ obtained following thorough adaptation of the native form is both reliable and valid. It is able to capture self-perception of orthodontic aesthetic and treatment need and is consistent with normative need for orthodontic treatment.
Penedo, Frank J; Antoni, Michael H; Moreno, Patricia I; Traeger, Lara; Perdomo, Dolores; Dahn, Jason; Miller, Gregory E; Cole, Steve; Orjuela, Julian; Pizarro, Edgar; Yanez, Betina
2018-06-14
Almost 2.8 million men in the U.S. are living with prostate cancer (PC), accounting for 40% of all male cancer survivors. Men diagnosed with prostate cancer may experience chronic and debilitating treatment side effects, including sexual and urinary dysfunction, pain and fatigue. Side effects can be stressful and can also lead to poor psychosocial functioning. Prior trials reveal that group-based cognitive behavioral stress and self-management (CBSM) is effective in reducing stress and mitigating some of these symptoms, yet little is known about the effects of culturally-translated CBSM among Spanish-speaking men with PC. This manuscript describes the rationale and study design of a multi-site, randomized controlled trial to determine whether participation in a culturally adapted cognitive behavioral stress management (C-CBSM) intervention leads to significantly greater reductions in symptom burden and improvements in health-related quality of life relative to participation in a non-culturally adapted cognitive behavioral stress management (CBSM) intervention. Participants (N = 260) will be Spanish-speaking Hispanic/Latino men randomized to the standard, non-culturally adapted CBSM intervention (e.g., cognitive behavioral strategies, stress management, and health maintenance) or the culturally adapted C-CBSM intervention (e.g., content adapted to be compatible with Hispanic/Latino cultural patterns and belief systems, meanings, values and social context) for 10 weeks. Primary outcomes (i.e., disease-specific symptom burden and health-related quality of life) will be assessed across time. We hypothesize that a culturally adapted C-CBSM intervention will be more efficacious in reducing symptom burden and improving health-related quality of life among Hispanic/Latino men when compared to a non-culturally adapted CBSM intervention. Copyright © 2017. Published by Elsevier Inc.
Cancelable biometrics realization with multispace random projections.
Teoh, Andrew Beng Jin; Yuang, Chong Tze
2007-10-01
Biometric characteristics cannot be changed; therefore, the loss of privacy is permanent if they are ever compromised. This paper presents a two-factor cancelable formulation, where the biometric data are distorted in a revocable but non-reversible manner by first transforming the raw biometric data into a fixed-length feature vector and then projecting the feature vector onto a sequence of random subspaces that were derived from a user-specific pseudorandom number (PRN). This process is revocable and makes replacing biometrics as easy as replacing PRNs. The formulation has been verified under a number of scenarios (normal, stolen PRN, and compromised biometrics scenarios) using 2400 Facial Recognition Technology face images. The diversity property is also examined.
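The two-factor mechanism described above can be sketched generically (the feature length, projection dimension, and similarity measure below are illustrative assumptions, not the paper's face-recognition pipeline): a user-specific PRN seeds a random projection, so matching only works when probe and template share the same PRN, and a compromised template is revoked by issuing a new seed.

```python
import numpy as np

def cancelable_template(feature, user_seed, k=40):
    """Project a fixed-length feature vector onto a random subspace
    derived from a user-specific PRN seed (illustrative sketch)."""
    rng = np.random.default_rng(user_seed)
    R = rng.standard_normal((k, feature.size)) / np.sqrt(k)
    return R @ feature

rng = np.random.default_rng(0)
probe = rng.standard_normal(128)
recapture = probe + 0.05 * rng.standard_normal(128)   # noisy re-acquisition

t1 = cancelable_template(probe, user_seed=1234)
t2 = cancelable_template(recapture, user_seed=1234)   # same PRN: close match
t3 = cancelable_template(probe, user_seed=9999)       # new PRN: unrelated

cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Replacing the PRN replaces the template without touching the underlying biometric, which is the revocability property; the projection is also non-invertible since it maps 128 dimensions down to 40.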
A Model Comparison for Characterizing Protein Motions from Structure
NASA Astrophysics Data System (ADS)
David, Charles; Jacobs, Donald
2011-10-01
A comparative study is made using three computational models that characterize native state dynamics starting from known protein structures taken from four distinct SCOP classifications. A geometrical simulation is performed, and the results are compared to the elastic network model and molecular dynamics. The essential dynamics is quantified by a direct analysis of a mode subspace constructed from ANM and a principal component analysis on both the FRODA and MD trajectories, using the root mean square inner product and principal angles. Relative subspace sizes and overlaps are visualized using the projection of displacement vectors on the model modes. Additionally, a mode subspace is constructed from PCA on an exemplar set of X-ray crystal structures in order to determine similarity with respect to the generated ensembles. Quantitative analysis reveals that there is significant overlap across the three model subspaces and the model-independent subspace. These results indicate that structure is the key determinant for native state dynamics.
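The two subspace-comparison measures named above, principal angles and the root mean square inner product (RMSIP), have compact generic implementations (the random test subspaces below are illustrative, not protein mode data):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B, via the SVD
    of Qa^T Qb (the Bjorck-Golub construction)."""
    Qa = np.linalg.qr(A)[0]
    Qb = np.linalg.qr(B)[0]
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.arccos(s)

def rmsip(A, B):
    """Root mean square inner product between two mode subspaces."""
    Qa = np.linalg.qr(A)[0]
    Qb = np.linalg.qr(B)[0]
    return float(np.sqrt(np.sum((Qa.T @ Qb) ** 2) / Qa.shape[1]))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
A = X                                # same 3-D subspace, different bases
B = X @ rng.standard_normal((3, 3))
C = rng.standard_normal((50, 3))     # unrelated 3-D subspace

angles_same = principal_angles(A, B)   # all ~0: identical subspaces
```

Identical subspaces give principal angles near zero and RMSIP near one; unrelated low-dimensional subspaces in a high-dimensional space give an RMSIP close to its random baseline.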
Zeno subspace in quantum-walk dynamics
NASA Astrophysics Data System (ADS)
Chandrashekar, C. M.
2010-11-01
We investigate discrete-time quantum-walk evolution under the influence of periodic measurements in position subspace. The undisturbed survival probability of the particle at the position subspace P(0,t) is compared with the survival probability after frequent (n) measurements at interval τ = t/n, P(0,τ)^n. We show that P(0,τ)^n > P(0,t) leads to the quantum Zeno effect in position subspace when a parameter θ in the quantum coin operations and the frequency of measurements exceed critical values, θ > θ_c and n > n_c. This Zeno effect in the subspace preserves the dynamics in the coin Hilbert space of the walk dynamics and has the potential to play a significant role in quantum tasks such as preserving the quantum state of the particle at any particular position, and to understand the Zeno dynamics in a multidimensional system that is highly transient in nature.
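A minimal simulation of the protocol can be sketched as follows, under stated assumptions: one common real parameterization of the coin, C(θ) = [[cos θ, sin θ], [sin θ, −cos θ]], a symmetric initial state, and a projective measurement of the position-0 subspace every τ steps (none of these choices is guaranteed to match the paper's exact setup). For this coin, each 2-step measurement cycle survives with probability exactly sin²θ, so for θ close to π/2 frequent measurement keeps the survival probability high, which is the Zeno regime.

```python
import numpy as np

def dtqw_step(psi, theta):
    """One step of a discrete-time quantum walk on a line:
    coin rotation followed by a coin-conditioned shift."""
    c, s = np.cos(theta), np.sin(theta)
    up, dn = psi
    new_up = c * up + s * dn
    new_dn = s * up - c * dn
    return np.stack([np.roll(new_up, -1), np.roll(new_dn, 1)])

def survival(theta, t, n, size=101):
    """P(0, t/n)^n: evolve t//n steps at a time, projectively measuring the
    position-0 subspace after each block; returns total survival probability."""
    x0 = size // 2
    psi = np.zeros((2, size), dtype=complex)
    psi[:, x0] = np.array([1.0, 1.0j]) / np.sqrt(2)
    p = 1.0
    for _ in range(n):
        for _ in range(t // n):
            psi = dtqw_step(psi, theta)
        mask = np.zeros(size)
        mask[x0] = 1.0
        psi = psi * mask                  # project onto position 0
        norm2 = float(np.sum(np.abs(psi) ** 2))
        p *= norm2
        psi /= np.sqrt(norm2)
    return p

p_frequent = survival(theta=0.45 * np.pi, t=24, n=12)  # measure every 2 steps
p_single = survival(theta=0.45 * np.pi, t=24, n=1)     # one final measurement
```

Here p_frequent equals (sin²θ)¹² analytically, since after each projection the walker restarts from position 0 and returns there after two steps with amplitude weighted by sin θ.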
EEG and MEG source localization using recursively applied (RAP) MUSIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
1996-12-31
The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization,' are easily extracted using the associated principal vectors.
Mining subspace clusters from DNA microarray data using large itemset techniques.
Chang, Ye-In; Chen, Jiun-Rung; Tsai, Yueh-Chi
2009-05-01
Mining subspace clusters from DNA microarrays could help researchers identify those genes which commonly contribute to a disease, where a subspace cluster indicates a subset of genes whose expression levels are similar under a subset of conditions. Since in a DNA microarray the number of genes is far larger than the number of conditions, previously proposed algorithms which compute the maximum dimension sets (MDSs) for every pair of genes take a long time to mine subspace clusters. In this article, we propose the Large Itemset-Based Clustering (LISC) algorithm for mining subspace clusters. Instead of constructing MDSs for every pair of genes, we construct MDSs only for pairs of conditions. Then, we transform the task of finding the maximal possible gene sets into the problem of mining large itemsets from the condition-pair MDSs. Since we are only interested in those subspace clusters with gene sets as large as possible, it is desirable to pay attention to those gene sets which have reasonably large support values in the condition-pair MDSs. Our simulation results show that the proposed algorithm needs shorter processing time than previously proposed algorithms which must construct gene-pair MDSs.
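The itemset-mining step described above can be sketched in a few lines. The sketch below is a generic Apriori-style search, not the authors' LISC implementation, and the condition-pair MDSs are hypothetical toy sets: each MDS is a set of genes whose expression is coherent for one pair of conditions, and a "large" itemset is a gene set contained in at least `min_support` of them.

```python
from itertools import combinations

def large_itemsets(mdss, min_support):
    """Apriori-style search for gene sets contained in at least
    `min_support` of the condition-pair MDSs (each MDS is a set of genes)."""
    genes = sorted({g for mds in mdss for g in mds})
    support = lambda items: sum(1 for mds in mdss if items <= mds)
    # frequent 1-itemsets
    current = [frozenset([g]) for g in genes
               if support(frozenset([g])) >= min_support]
    result = list(current)
    k = 2
    while current:
        # candidate k-itemsets from unions of frequent (k-1)-itemsets
        candidates = {a | b for a, b in combinations(current, 2)
                      if len(a | b) == k}
        current = [c for c in candidates if support(c) >= min_support]
        result.extend(current)
        k += 1
    return result

# Hypothetical condition-pair MDSs
mdss = [{'g1', 'g2', 'g3'}, {'g1', 'g2', 'g4'},
        {'g1', 'g2', 'g3'}, {'g3', 'g5'}]
frequent = large_itemsets(mdss, min_support=3)
```

The maximal frequent itemset here is {g1, g2}: both genes appear together in three of the four condition-pair MDSs, so they form the largest candidate gene set for a subspace cluster.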
Active Subspaces of Airfoil Shape Parameterizations
NASA Astrophysics Data System (ADS)
Grey, Zachary J.; Constantine, Paul G.
2018-05-01
Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
Adiabatic evolution of decoherence-free subspaces and its shortcuts
NASA Astrophysics Data System (ADS)
Wu, S. L.; Huang, X. L.; Li, H.; Yi, X. X.
2017-10-01
The adiabatic theorem and shortcuts to adiabaticity for time-dependent open quantum systems are explored in this paper. Starting from the definition of dynamical stable decoherence-free subspace, we show that, under a compact adiabatic condition, the quantum state remains in the time-dependent decoherence-free subspace with an extremely high purity, even though the dynamics of the open quantum system may not be adiabatic. The adiabatic condition mentioned here in the adiabatic theorem for open systems is very similar to that for closed quantum systems, except that the operators required to change slowly are the Lindblad operators. We also show that the adiabatic evolution of decoherence-free subspaces depends on the existence of instantaneous decoherence-free subspaces, which requires that the Hamiltonian of open quantum systems be engineered according to the incoherent control protocol. In addition, shortcuts to adiabaticity for adiabatic decoherence-free subspaces are also presented based on the transitionless quantum driving method. Finally, we provide an example that consists of a two-level system coupled to a broadband squeezed vacuum field to show our theory. Our approach employs Markovian master equations and the theory can apply to finite-dimensional quantum open systems.
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost, high-performance microprocessors. Recently, we have developed a new hybrid state-space self-tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark
2015-01-01
Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been successfully used in genome-wide association studies (GWAS) for identification of genetic variants that have relatively big effects in some common, complex diseases. Among them, the most successful one is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS with selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method for random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies p-value assessment to find a cut-off point that separates informative and irrelevant SNPs into two groups. The informative SNP group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account. The feature subspaces always contain highly informative SNPs when used to split a node of a tree. This approach enables one to generate more accurate trees with a lower prediction error, while possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the diseases, and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model.
Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduces prediction errors and outperforms most existing state-of-the-art random forest methods. The top 25 SNPs in the Parkinson data set identified by the proposed model include four interesting genes associated with neurological disorders. The presented approach has been shown to be effective in selecting informative sub-groups of SNPs potentially associated with diseases that traditional statistical approaches might fail to detect. The new RF works well for data where the number of case-control objects is much smaller than the number of SNPs, a typical situation in gene data and GWAS. Experimental results demonstrated the effectiveness of the proposed RF model, which outperformed state-of-the-art RFs, including Breiman's RF, GRRF and wsRF.
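The two-stage sampling idea can be sketched as follows. The p-values, thresholds, and mixing ratio below are hypothetical, and the real ts-RF method embeds this sampling inside random forest tree construction; the sketch only shows how a feature subspace is drawn so that it always contains highly informative SNPs and never contains irrelevant ones.

```python
import random

def sample_snp_subspace(pvalues, cutoff, strong_cut, mtry, rng):
    """Draw a feature subspace of size <= mtry, mixing highly and weakly
    informative SNPs; irrelevant SNPs (p >= cutoff) are never sampled."""
    informative = [s for s, p in pvalues.items() if p < cutoff]
    strong = [s for s in informative if pvalues[s] < strong_cut]
    weak = [s for s in informative if pvalues[s] >= strong_cut]
    n_strong = max(1, mtry // 2)          # hypothetical mixing ratio
    subspace = rng.sample(strong, min(n_strong, len(strong)))
    subspace += rng.sample(weak, min(mtry - len(subspace), len(weak)))
    return subspace

# Hypothetical per-SNP association p-values
pvals = {f"snp{i}": p for i, p in enumerate(
    [1e-6, 1e-5, 0.003, 0.01, 0.04, 0.3, 0.6, 0.9])}
rng = random.Random(0)
sub = sample_snp_subspace(pvals, cutoff=0.05, strong_cut=1e-4, mtry=4, rng=rng)
```

Every node-splitting subspace drawn this way sees at least one highly informative SNP, which is the property the abstract credits for lower prediction error.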
A new method to real-normalize measured complex modes
NASA Technical Reports Server (NTRS)
Wei, Max L.; Allemang, Randall J.; Zhang, Qiang; Brown, David L.
1987-01-01
A time domain subspace iteration technique is presented to compute a set of normal modes from the measured complex modes. By using the proposed method, a large number of physical coordinates are reduced to a smaller number of modal or principal coordinates. Subspace free-decay time responses are computed using properly scaled complex modal vectors. The companion matrix for the general case of nonproportional damping is then derived in the selected vector subspace. Subspace normal modes are obtained through the eigenvalue solution of the M_N^(-1) K_N matrix and transformed back to the physical coordinates to obtain a set of normal modes. A numerical example is presented to demonstrate the outlined theory.
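A minimal sketch of the final eigen-step, with hypothetical mass and stiffness matrices and a random orthonormal basis standing in for the subspace built from scaled complex modal vectors: the reduced matrices M_N and K_N are formed in the subspace, the M_N^(-1) K_N eigenproblem is solved there, and the subspace modes are mapped back to physical coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 3                                    # physical dofs, subspace size
T = np.linalg.qr(rng.normal(size=(n, r)))[0]   # hypothetical subspace basis

A = rng.normal(size=(n, n))
M = A @ A.T + n * np.eye(n)                    # SPD mass matrix (stand-in)
B = rng.normal(size=(n, n))
K = B @ B.T + n * np.eye(n)                    # SPD stiffness matrix (stand-in)

M_N, K_N = T.T @ M @ T, T.T @ K @ T            # reduced matrices in subspace
w2, phi = np.linalg.eig(np.linalg.solve(M_N, K_N))   # M_N^(-1) K_N eigenproblem
normal_modes = T @ np.real(phi)                # back to physical coordinates
```

Because M_N^(-1) K_N is similar to a symmetric positive definite matrix, the squared natural frequencies w2 come out real and positive.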
Dialectical Behavior Therapy for Adolescents: Theory, Treatment Adaptations, and Empirical Outcomes
ERIC Educational Resources Information Center
MacPherson, Heather A.; Cheavens, Jennifer S.; Fristad, Mary A.
2013-01-01
Dialectical behavior therapy (DBT) was originally developed for chronically suicidal adults with borderline personality disorder (BPD) and emotion dysregulation. Randomized controlled trials (RCTs) indicate DBT is associated with improvements in problem behaviors, including suicide ideation and behavior, non-suicidal self-injury (NSSI), attrition,…
Euclidean commute time distance embedding and its application to spectral anomaly detection
NASA Astrophysics Data System (ADS)
Albano, James A.; Messinger, David W.
2012-06-01
Spectral image analysis problems often begin by performing a preprocessing step composed of applying a transformation that generates an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed form solution exists for computing the average commute time distance that avoids running an iterative process and is found by simply performing an eigendecomposition on the graph Laplacian matrix. Contained in this paper is a discussion of the particular graph constructed on the spectral data, from which the commute time distance is then calculated; an introduction of some important properties of the graph Laplacian matrix; and a subspace projection that approximately preserves the maximal variance of the square root commute time distance. Finally, the RX anomaly detection and Topological Anomaly Detection (TAD) algorithms are applied to the CTD subspace, followed by a discussion of their results.
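The closed-form computation described above can be sketched on a small hypothetical graph: the pseudo-inverse of the graph Laplacian yields both the average commute time and a Euclidean embedding whose pairwise distances are the square roots of the commute times.

```python
import numpy as np

# Small hypothetical weighted graph (stand-in for the spectral-data graph)
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # symmetric adjacency
D = np.diag(W.sum(axis=1))
L = D - W                                   # graph Laplacian
vol = W.sum()                               # graph volume
Lp = np.linalg.pinv(L)                      # Laplacian pseudo-inverse

def ctd(i, j):
    """Average commute time distance between nodes i and j."""
    return vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])

# Embedding: rows of sqrt(vol) * Lp^(1/2); since X X^T = vol * Lp,
# Euclidean distance between rows equals sqrt(CTD)
evals, evecs = np.linalg.eigh(Lp)
X = np.sqrt(vol) * evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None)))
```

No iteration is needed: a single eigendecomposition (or pseudo-inverse) of the Laplacian gives all pairwise commute times at once, which is the closed-form property the abstract highlights.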
NASA Astrophysics Data System (ADS)
Pires, Carlos A. L.; Ribeiro, Andreia F. S.
2017-02-01
We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low dimension and exhibiting enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (projection pursuit rationale). The method relies upon a generalization of the principal component analysis that is optimal for Gaussian mixed signals and of the independent component analysis (ICA), optimized to split non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is the independent subspace analysis (ISA) that looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D), not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes `unfolding' the subspaces into nearly Gaussian scalars of easier post-processing. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first one comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second one is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian non-separable triad. Finally, the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model, applied to the Northern Hemispheric winter.
We find that markedly non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as regards the separation of the four model centroid regimes (the positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model, but with weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.
Manifold learning-based subspace distance for machinery damage assessment
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang
2016-03-01
Damage assessment is essential for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on vibration signals. To calculate the index, a vibration signal is first collected, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is used to decompose the feature matrix into a subspace, the manifold subspace. The manifold learning algorithm seeks to preserve the local relationships of the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as a damage index. The Grassmann distance, reflecting the manifold structure, is a suitable metric for measuring the distance between subspaces in the manifold. The defined damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
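The subspace-distance step can be sketched via principal angles. This is a generic Grassmann distance (the 2-norm of the principal-angle vector); the particular metric used by the authors may differ, and the bases below are hypothetical stand-ins for real feature subspaces.

```python
import numpy as np

def grassmann_distance(A, B):
    """Grassmann distance between span(A) and span(B): the 2-norm of the
    vector of principal angles between the two subspaces."""
    Qa, _ = np.linalg.qr(A)                 # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)                 # orthonormal basis of span(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(s, -1.0, 1.0))
    return float(np.linalg.norm(angles))

# Hypothetical feature-subspace basis (stand-in for vibration features)
rng = np.random.default_rng(5)
base = rng.normal(size=(6, 3))
d_same = grassmann_distance(base, base)     # identical subspaces -> ~0
```

Identical subspaces give distance zero, and the distance grows as the subspaces tilt apart, which is what lets it track progressive damage.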
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; ...
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N^2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
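A minimal sketch of the Krylov idea for Brownian noise: approximate D^(1/2) z for a symmetric positive definite diffusion-like matrix D with a short Lanczos recurrence, using no eigenvalue estimates. The matrix here is a random SPD stand-in (not a real diffusion tensor), full reorthogonalization is used for simplicity, and breakdown handling is omitted.

```python
import numpy as np

def krylov_sqrt_mv(D, z, m):
    """Approximate sqrt(D) @ z with m Lanczos steps.  Full
    reorthogonalization; assumes no Lanczos breakdown (generic SPD D)."""
    n = len(z)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = z / np.linalg.norm(z)
    for j in range(m):
        w = D @ V[:, j]
        alpha[j] = V[:, j] @ w
        # orthogonalize against all previous Lanczos vectors
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # f(T) e1 via eigendecomposition of the small tridiagonal matrix
    evals, U = np.linalg.eigh(T)
    f_T_e1 = U @ (np.sqrt(np.clip(evals, 0.0, None)) * U[0, :])
    return np.linalg.norm(z) * (V @ f_T_e1)

# Hypothetical SPD "diffusion" matrix and random vector
rng = np.random.default_rng(4)
n = 30
G = rng.normal(size=(n, n))
Dmat = G @ G.T + n * np.eye(n)
z = rng.normal(size=n)
noise = krylov_sqrt_mv(Dmat, z, m=15)
```

Only matrix-vector products with D are needed, which is why the cost per noise vector stays near O(N^2) for dense D; the abstract's "block" variant handles several vectors z at once.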
NASA Astrophysics Data System (ADS)
Suzuki, Akito
2008-04-01
We study a model of the quantized electromagnetic field interacting with an external static source ρ in the Feynman (Lorentz) gauge and construct the quantized radiation field A_μ (μ = 0,1,2,3) as an operator-valued distribution acting on the Fock space F with an indefinite metric. By using the Gupta subsidiary condition ∂^μ A_μ(x)^(+) Ψ = 0, one can select the physical subspace V_phys. According to the Gupta-Bleuler formalism, V_phys is a non-negative subspace, so that elements of V_phys, called physical states, are probabilistically interpretable. Indeed, assuming that the external source ρ is infrared regular, i.e., ρ̂/|k|^(3/2) ∈ L²(R³), we can characterize the physical subspace V_phys and show that V_phys is non-negative. In addition, we find that the Hamiltonian of the model reduces to the Hamiltonian of the transverse photons with the Coulomb interaction. We prove, however, that the physical subspace is trivial, i.e., V_phys = {0}, if and only if the external source ρ is infrared singular, i.e., ρ̂/|k|^(3/2) ∉ L²(R³). We also discuss a representation different from the above in which the physical subspace is not trivial under the infrared singular condition.
Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ²-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ²-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
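For a single parameter instance, the LSPG construction reduces to a weighted least-squares problem over the subspace, which can be sketched as follows; the system, weights, and basis are hypothetical stand-ins for one realization of the parameterized problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 40, 5
A = rng.normal(size=(n, n)) + n * np.eye(n)      # one system realization
b = rng.normal(size=n)
Phi = np.linalg.qr(rng.normal(size=(n, r)))[0]   # subspace basis
W = np.diag(rng.uniform(0.5, 2.0, size=n))       # weighting (norm choice)

# LSPG: minimize ||W (b - A Phi y)||_2 over the subspace
y_lspg, *_ = np.linalg.lstsq(W @ A @ Phi, W @ b, rcond=None)
x_lspg = Phi @ y_lspg

# Galerkin projection for comparison: Phi^T A Phi y = Phi^T b
y_gal = np.linalg.solve(Phi.T @ A @ Phi, Phi.T @ b)
x_gal = Phi @ y_gal

res = lambda x: np.linalg.norm(W @ (b - A @ x))  # weighted residual norm
```

By construction the LSPG solution minimizes the weighted residual over the subspace, so its residual can never exceed the Galerkin one in that norm; choosing W differently targets different error measures, mirroring the weighting-function flexibility the abstract describes.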
Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?
Keck, Christian; Savin, Cristina; Lücke, Jörg
2012-01-01
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar geometry arrays. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
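As a point of reference for the W2D-MUSIC step, the classic narrowband, far-field MUSIC estimator on a uniform linear array can be sketched as follows; the paper's broadband near-field model and its perturbation estimation are not reproduced here, and all scenario parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
M, d = 8, 0.5                          # sensors, spacing in wavelengths
true_deg = np.array([-20.0, 30.0])     # hypothetical source directions
steer = lambda th: np.exp(-2j * np.pi * d * np.arange(M)
                          * np.sin(np.deg2rad(th)))

# Simulated snapshots: two uncorrelated sources plus sensor noise
T = 400
S = rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))
X = np.stack([steer(t) for t in true_deg], axis=1) @ S
X += 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))

R = X @ X.conj().T / T                 # sample covariance
_, vecs = np.linalg.eigh(R)            # eigenvalues ascending
En = vecs[:, : M - 2]                  # noise subspace (M minus #sources)

# MUSIC pseudospectrum: peaks where steering vector is orthogonal to En
grid = np.arange(-90.0, 90.0, 0.1)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(g)) ** 2
                 for g in grid])

# Pick the two highest well-separated peaks
order = np.argsort(spec)[::-1]
est = []
for idx in order:
    if all(abs(grid[idx] - e) > 5.0 for e in est):
        est.append(grid[idx])
    if len(est) == 2:
        break
est = np.sort(est)
```

The paper's contribution is essentially to make the steering vectors in this scan robust: the model-error estimation corrects the gain, phase, and position parameters before the subspace scan is run.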
Bourzgui, F.; Serhier, Z.; Sebbar, M.; Diouny, S.; Bennani Othmani, M.; Ngom, P.I.
2015-01-01
Objective: The aims of this study were to translate and culturally adapt the PIDAQ native English version into Moroccan Arabic, and to assess the psychometric characteristics of the version thereby obtained. Materials and methods: The PIDAQ original English version was sequentially subjected to translation into Moroccan Arabic, back-translation into English, committee review, and pre-testing in 30 subjects seeking orthodontic treatment. Results: The final Moroccan Arabic version further underwent an analysis of psychometric properties on a random sample of 99 adult subjects (84 females and 15 males, aged 20.97 ± 1.10 years). The intraclass correlation coefficient of the scores of the responses obtained after administering the questionnaire twice at a 1-month interval to a random sample of 30 subjects ranged from 0.63 for “Self-confidence” to 0.85 for “Social Impact”. Cronbach α coefficients ranging from 0.78 for “Aesthetic Concerns” to 0.87 for “Self-confidence” were obtained; the different subscales of the Moroccan Arabic version of the PIDAQ showed good correlation with the perception of aesthetics and orthodontic treatment need. Conclusion: The results of the present study indicate that the Moroccan Arabic version of the PIDAQ, obtained following thorough adaptation of the native form, is both reliable and valid. It is able to capture self-perception of orthodontic aesthetics and treatment need and is consistent with normative need for orthodontic treatment. PMID:26644752
NASA Astrophysics Data System (ADS)
Kota, V. K. B.
2003-07-01
Smoothed forms for expectation values ⟨K⟩^E of positive definite operators K follow from the K-density moments, either directly or in many other ways, each giving a series expansion (involving polynomials in E). In large spectroscopic spaces one has to partition the many-particle spaces into subspaces. Partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and the operation of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩^E, spin-cutoff factors ⟨J_z²⟩^E and strength sums ⟨O†O⟩^E, where O is a one-body transition operator.
Online Process Scaffolding and Students' Self-Regulated Learning with Hypermedia.
ERIC Educational Resources Information Center
Azevedo, Roger; Cromley, Jennifer G.; Thomas, Leslie; Seibert, Diane; Tron, Myriam
This study examined the role of different scaffolding instructional interventions in facilitating students' shift to more sophisticated mental models as indicated by both performance and process data. Undergraduate students (n=53) were randomly assigned to 1 of 3 scaffolding conditions (adaptive content and process scaffolding (ACPS), adaptive…
A principle of organization which facilitates broad Lamarckian-like adaptations by improvisation.
Soen, Yoav; Knafo, Maor; Elgart, Michael
2015-12-02
During the lifetime of an organism, every individual encounters many combinations of diverse changes in the somatic genome, epigenome and microbiome. This gives rise to many novel combinations of internal failures which are unique to each individual. How any individual can tolerate this high load of new, individual-specific scenarios of failure is not clear. While stress-induced plasticity and hidden variation have been proposed as potential mechanisms of tolerance, the main conceptual problem remains unaddressed, namely: how largely non-beneficial random variation can be rapidly and safely organized into net benefits to every individual. We propose an organizational principle which explains how every individual can alleviate a high load of novel stressful scenarios using many random variations in flexible and inherently less harmful traits. Random changes which happen to reduce stress benefit the organism and decrease the drive for additional changes. This adaptation (termed 'Adaptive Improvisation') can be further enhanced, propagated, stabilized and memorized when beneficial changes reinforce themselves by auto-regulatory mechanisms. This principle implicates stress not only in driving diverse variations in cells, tissues, and organs, but also in organizing these variations into adaptive outcomes. Specific (but not exclusive) examples include stress reduction by rapid exchange of mobile genetic elements (or exosomes) in unicellular organisms, and rapid changes in the symbiotic microorganisms of animals. In all cases, adaptive changes can be transmitted across generations, allowing rapid improvement and assimilation in a few generations. We provide testable predictions derived from the hypothesis. The hypothesis raises a critical, but thus far overlooked adaptation problem and explains how random variation can self-organize to confer a wide range of individual-specific adaptations beyond the existing outcomes of natural selection.
It portrays gene regulation as an inseparable synergy between natural selection and adaptation by improvisation. The latter provides a basis for Lamarckian adaptation that is not limited to a specific mechanism and readily accounts for the remarkable resistance of tumors to treatment.
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Welch, Greg
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm can also be extended to other Kalman filters for measurement subspace selection.
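As a rough sketch of the idea, the most informative measurement directions can be obtained from a generalized eigenproblem between the measurement-space signal covariance H P Hᵀ and the noise covariance R. The criterion, variable names, and whitening-based solver below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def informative_measurement_subspace(H, P, R, k):
    """Keep the k most informative measurement directions.

    Solves the generalized eigenproblem (H P H^T) v = lam R v by
    whitening with a Cholesky factor of R, then maps the top-k
    eigenvectors back to the original measurement space. This is a
    generic sketch, not the paper's exact selection criterion.
    """
    L = np.linalg.cholesky(R)              # R = L L^T
    Linv = np.linalg.inv(L)
    S = Linv @ (H @ P @ H.T) @ Linv.T      # whitened signal covariance
    S = 0.5 * (S + S.T)                    # symmetrize for numerical stability
    lam, W = np.linalg.eigh(S)             # ascending eigenvalues
    order = np.argsort(lam)[::-1][:k]      # indices of the k largest
    V = Linv.T @ W[:, order]               # v solves (H P H^T) v = lam R v
    return lam[order], V

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 8))           # 20 measurements, 8 states (toy sizes)
A = rng.standard_normal((8, 8))
P = A @ A.T + np.eye(8)                    # ensemble (state) covariance
R = np.diag(rng.uniform(0.5, 2.0, 20))     # measurement noise covariance
lam, V = informative_measurement_subspace(H, P, R, k=4)
print(lam.shape, V.shape)                  # (4,) (20, 4)
```

Projecting the raw measurements onto the columns of V keeps the directions with the highest signal-to-noise ratio, which is the tradeoff the abstract describes.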
2017-09-27
ARL-TR-8161, September 2017, US Army Research Laboratory, Aberdeen Proving Ground, MD: Excluding Noise from Short Krylov Subspace Approximations to the Truncated Singular Value Decomposition (reporting period October 2015–January 2016).
Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery
2016-09-27
The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world applications. (U.S. Army Research Office, Research Triangle Park, NC. Keywords: signal recovery, sparse learning, subspace modeling.)
Azarmi, Somayeh; Farsi, Zahra
2015-10-01
Any defect in extremities of the body can affect different life aspects. The purpose of this study was to investigate the effect of Roy's adaptation model-guided education on promoting the adaptation of veterans with lower extremities amputation. In a randomized clinical trial, 60 veterans with lower extremities amputation referring to Kowsar Orthotics and Prosthetics Center of veterans clinic in Tehran, Iran, were recruited with convenience method and were randomly assigned to intervention and control groups during 2013 - 2014. For data collection, Roy's adaptation model questionnaire was used. After completing the questionnaires in both groups, maladaptive behaviors were determined in the intervention group and an education program based on Roy's adaptation model was implemented. After two months, both groups completed the questionnaires again. Data was analyzed with SPSS software. Independent t-test showed statistically significant differences between the two groups in the post-test stage in terms of the total score of adaptation (P = 0.001) as well as physiologic (P = 0.0001) and role function modes (P = 0.004). The total score of adaptation (139.43 ± 5.45 to 127.54 ± 14.55, P = 0.006) as well as the scores of physiologic (60.26 ± 5.45 to 53.73 ± 7.79, P = 0.001) and role function (20.30 ± 2.42 to 18.13 ± 3.18, P = 0.01) modes in the intervention group significantly increased, whereas the scores of self-concept (42.10 ± 4.71 to 39.40 ± 5.67, P = 0.21) and interdependence (16.76 ± 2.22 to 16.30 ± 2.57, P = 0.44) modes in the two stages did not have a significant difference. Findings of this research indicated that the Roy's adaptation model-guided education promoted the adaptation level of physiologic and role function modes in veterans with lower extremities amputation. However, this intervention could not promote adaptation in self-concept and interdependence modes. 
Further interventions based on Roy's adaptation model are advised for improving the adaptation of veterans with lower extremity amputations.
Localization Transition Induced by Learning in Random Searches
NASA Astrophysics Data System (ADS)
Falcón-Cortés, Andrea; Boyer, Denis; Giuggioli, Luca; Majumdar, Satya N.
2017-10-01
We solve an adaptive search model where a random walker or Lévy flight stochastically resets to previously visited sites on a d -dimensional lattice containing one trapping site. Because of reinforcement, a phase transition occurs when the resetting rate crosses a threshold above which nondiffusive stationary states emerge, localized around the inhomogeneity. The threshold depends on the trapping strength and on the walker's return probability in the memoryless case. The transition belongs to the same class as the self-consistent theory of Anderson localization. These results show that similarly to many living organisms and unlike the well-studied Markovian walks, non-Markov movement processes can allow agents to learn about their environment and promise to bring adaptive solutions in search tasks.
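The reinforcement mechanism can be illustrated with a minimal simulation: drawing the reset target uniformly from the walker's own visit history makes frequently visited sites proportionally more likely to be chosen (linear reinforcement). This sketch omits the paper's trapping site and Lévy flights:

```python
import numpy as np

def memory_walk(steps, r, rng):
    """1-D lattice walk with memory-based resetting.

    With probability r the walker relocates to a site drawn uniformly
    from its own visit history (so frequently visited sites are chosen
    preferentially); otherwise it hops +-1. A sketch of the model
    class, not the paper's exact dynamics.
    """
    pos = 0
    history = [0]
    for _ in range(steps):
        if rng.random() < r:
            pos = history[rng.integers(len(history))]   # preferential reset
        else:
            pos += rng.choice((-1, 1))                  # ordinary diffusion
        history.append(pos)
    return history

rng = np.random.default_rng(1)
hist = memory_walk(20000, r=0.3, rng=rng)
sites, counts = np.unique(hist, return_counts=True)
print("fraction of time at most-visited site:", counts.max() / len(hist))
```

Sweeping the resetting rate r and watching the occupation fraction of the most-visited site is a simple way to observe the delocalized-to-localized crossover the abstract analyzes.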
Oviedo-Trespalacios, Oscar; Haque, Md Mazharul; King, Mark; Washington, Simon
2018-05-29
This study investigated how situational characteristics typically encountered in the transport system influence drivers' perceived likelihood of engaging in mobile phone multitasking. The impacts of mobile phone tasks, perceived environmental complexity/risk, and drivers' individual differences were evaluated as relevant individual predictors within the behavioral adaptation framework. An innovative questionnaire, which includes randomized textual and visual scenarios, was administered to collect data from a sample of 447 drivers in South East Queensland-Australia (66% females; n = 296). The likelihood of engaging in a mobile phone task across various scenarios was modeled by a random parameters ordered probit model. Results indicated that drivers who are female, are frequent users of phones for texting/answering calls, have less favorable attitudes towards safety, and are highly disinhibited were more likely to report stronger intentions of engaging in mobile phone multitasking. However, more years with a valid driving license, self-efficacy toward self-regulation in demanding traffic conditions and police enforcement, texting tasks, and demanding traffic conditions were negatively related to self-reported likelihood of mobile phone multitasking. The unobserved heterogeneity warned of riskier groups among female drivers and participants who need a lot of convincing to believe that multitasking while driving is dangerous. This research concludes that behavioral adaptation theory is a robust framework explaining self-regulation of distracted drivers. © 2018 Society for Risk Analysis.
Adaptive h -refinement for reduced-order models: ADAPTIVE h -refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
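The splitting step itself is simple once the offline clustering of state variables is given; the sketch below assumes the cluster labels are already computed (e.g., by k-means on snapshot rows) and shows only how one basis vector is divided into children with disjoint support:

```python
import numpy as np

def split_basis_vector(v, labels):
    """Split basis vector v into children with disjoint support.

    labels[i] gives the cluster of state variable i (computed offline);
    each child keeps v's entries on one cluster and is zero elsewhere,
    so the children sum back to v. Minimal sketch of the h-refinement
    splitting step only -- the online dual-weighted-residual selection
    is not shown.
    """
    children = []
    for c in np.unique(labels):
        child = np.zeros_like(v)
        mask = labels == c
        child[mask] = v[mask]
        children.append(child)
    return children

rng = np.random.default_rng(2)
v = rng.standard_normal(10)                       # one reduced-basis vector
labels = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])  # offline clustering (assumed)
kids = split_basis_vector(v, labels)
print(len(kids))                                  # 3
print(np.allclose(sum(kids), v))                  # True
```

Because the children span everything the parent spanned (and more), replacing a basis vector by its children can only enlarge the reduced space, which is why full refinement recovers the full-order model.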
Ash, A; Schwartz, M; Payne, S M; Restuccia, J D
1990-11-01
Medical record review is increasing in importance as the need to identify and monitor utilization and quality-of-care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can the bias be avoided that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.
Subspace Methods for Massive and Messy Data
2017-07-12
The views, opinions and/or findings contained in this report are those of the author(s). Agreement Number: W911NF-14-1-0634. Organization: University of Michigan - Ann Arbor. Sponsor: U.S. Army Research Office, Research Triangle Park, NC.
Geometry aware Stationary Subspace Analysis
2016-11-22
A common approach to handling non-stationarity is to remove or minimize it before attempting to analyze the data. In the context of brain-computer interface (BCI) data analysis, two noteworthy such methods are stationary subspace analysis (SSA) (von Bünau et al., 2009a) and sCSP, whose goal is to project the data onto a subspace in which the various data classes are more separable.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become an important research area of computer vision. To improve the accuracy of action recognition based on improved dense trajectories, an advanced encoding method is introduced that combines Fisher Vectors with Random Projection. The method reduces the dimensionality of the trajectory descriptors by projecting them into a low-dimensional subspace via Random Projection, after defining and analyzing a Gaussian mixture model, and a GMM-FV hybrid model is introduced to encode the trajectory feature vectors. Random Projection also reduces the computational complexity of the Fisher coding vector. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with some existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
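The random-projection step reduces descriptor dimensionality with a data-independent matrix, as in the CKRL framework described above; a generic sketch (dimensions are illustrative, not the paper's exact configuration):

```python
import numpy as np

def random_project(X, d_out, rng):
    """Project rows of X onto a random Gaussian subspace.

    Scaling by 1/sqrt(d_out) makes the map approximately
    distance-preserving (Johnson-Lindenstrauss), so descriptors can be
    compressed before Fisher-vector encoding without learning anything
    from the data.
    """
    d_in = X.shape[1]
    R = rng.standard_normal((d_in, d_out)) / np.sqrt(d_out)
    return X @ R

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 426))   # e.g. 426-dim dense-trajectory descriptors
Y = random_project(X, 64, rng)
print(Y.shape)                        # (100, 64)
```

Because R is data-independent, the same matrix can be fixed once and reused for every video, which is what makes the reduction essentially free at runtime.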
Locally indistinguishable subspaces spanned by three-qubit unextendible product bases
NASA Astrophysics Data System (ADS)
Duan, Runyao; Xin, Yu; Ying, Mingsheng
2010-03-01
We study the local distinguishability of general multiqubit states and show that local projective measurements and classical communication are as powerful as the most general local measurements and classical communication. Remarkably, this indicates that the local distinguishability of multiqubit states can be decided efficiently. Another useful consequence is that a set of orthogonal n-qubit states is locally distinguishable only if the summation of their orthogonal Schmidt numbers is less than the total dimension 2^n. Employing these results, we show that any orthonormal basis of a subspace spanned by arbitrary three-qubit orthogonal unextendible product bases (UPB) cannot be exactly distinguished by local operations and classical communication. This not only reveals another intrinsic property of three-qubit orthogonal UPB but also provides a class of locally indistinguishable subspaces with dimension 4. We also explicitly construct locally indistinguishable subspaces with dimensions 3 and 5, respectively. Similar to the bipartite case, these results on multipartite locally indistinguishable subspaces can be used to estimate the one-shot environment-assisted classical capacity of a class of quantum broadcast channels.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: C → C^N, which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series in conjunction with power iterations to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory also suggests a new mode of usage for these Krylov subspace methods that were observed to possess computational advantages over their common mode of usage.
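For reference, the classical power method that these procedures generalize can be sketched in a few lines:

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Classical power iteration: dominant eigenpair of A.

    Repeatedly applies A to a random start vector and normalizes; the
    iterate converges to the dominant eigenvector when the largest
    eigenvalue is strictly separated in modulus. The Rayleigh quotient
    gives the eigenvalue. Baseline only -- the abstract's rational
    approximations extend this to several eigenvalues at once.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v                    # Rayleigh quotient
    return lam, v

A = np.diag([5.0, 2.0, 1.0])           # dominant eigenvalue 5
lam, v = power_method(A)
print(round(lam, 6))                   # 5.0
```

The convergence rate is governed by the ratio of the second-largest to the largest eigenvalue modulus, which is exactly the limitation the vector-valued rational approximations are designed to overcome.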
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
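The self-expressive core of sparse subspace clustering can be sketched for the complete-data case; the proximal-gradient (ISTA) solver, parameter values, and toy data below are illustrative assumptions, and the paper's method additionally alternates this step with recovery of missing entries:

```python
import numpy as np

def sparse_self_representation(X, lam=0.1, iters=300):
    """Sparse self-expressive coefficients C with zero diagonal.

    Approximately minimizes 0.5*||X - X C||_F^2 + lam*||C||_1 subject
    to diag(C)=0 by proximal gradient descent; |C| + |C|^T then serves
    as the affinity matrix for spectral clustering. Complete-data
    sketch only.
    """
    n = X.shape[1]
    C = np.zeros((n, n))
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(iters):
        G = X.T @ (X @ C - X)              # gradient of the smooth term
        C = C - G / L
        C = np.sign(C) * np.maximum(np.abs(C) - lam / L, 0.0)  # soft-threshold
        np.fill_diagonal(C, 0.0)           # forbid trivial self-representation
    return C

rng = np.random.default_rng(4)
# toy data: two 1-D subspaces in R^5, four points each
U1, U2 = rng.standard_normal((5, 1)), rng.standard_normal((5, 1))
X = np.hstack([U1 @ rng.standard_normal((1, 4)), U2 @ rng.standard_normal((1, 4))])
C = sparse_self_representation(X)
A = np.abs(C) + np.abs(C).T                # symmetric affinity matrix
print(A.shape)                             # (8, 8)
```

In the ideal case the affinity is block-diagonal (points are represented only by others from the same subspace), and spectral clustering on A recovers the subspace memberships.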
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining
2017-12-01
We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to check the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
Observation of entanglement witnesses for orbital angular momentum states
NASA Astrophysics Data System (ADS)
Agnew, M.; Leach, J.; Boyd, R. W.
2012-06-01
Entanglement witnesses provide an efficient means of determining the level of entanglement of a system using the minimum number of measurements. Here we demonstrate the observation of two-dimensional entanglement witnesses in the high-dimensional basis of orbital angular momentum (OAM). In this case, the number of potentially entangled subspaces scales as d(d - 1)/2, where d is the dimension of the space. The choice of OAM as a basis is relevant as each subspace is not necessarily maximally entangled, thus providing the necessary state for certain tests of nonlocality. The expectation value of the witness gives an estimate of the state of each two-dimensional subspace belonging to the d-dimensional Hilbert space. These measurements demonstrate the degree of entanglement and therefore the suitability of the resulting subspaces for quantum information applications.
Hyperspectral image analysis for standoff trace detection using IR laser spectroscopy
NASA Astrophysics Data System (ADS)
Jarvis, J.; Fuchs, F.; Hugger, S.; Ostendorf, R.; Butschek, L.; Yang, Q.; Dreyhaupt, A.; Grahmann, J.; Wagner, J.
2016-05-01
In the recent past, infrared laser backscattering spectroscopy using Quantum Cascade Lasers (QCL) emitting in the molecular fingerprint region between 7.5 μm and 10 μm has proved to be a highly promising approach for stand-off detection of dangerous substances. In this work we present an active-illumination hyperspectral image sensor, utilizing QCLs as spectrally selective illumination sources. A high-performance Mercury Cadmium Telluride (MCT) imager is used for collection of the diffusely backscattered light. Well-known target detection algorithms such as the Adaptive Matched Subspace Detector and the Adaptive Coherence Estimator are used to detect pixel vectors in the recorded hyperspectral image that contain traces of explosive substances such as PETN, RDX or TNT. In addition, we present an extension of the backscattering spectroscopy technique towards real-time detection using a MOEMS EC-QCL.
Entanglement dynamics of coupled qubits and a semi-decoherence free subspace
NASA Astrophysics Data System (ADS)
Campagnano, Gabriele; Hamma, Alioscia; Weiss, Ulrich
2010-01-01
We study the entanglement dynamics and relaxation properties of a system of two interacting qubits in the cases of (I) two independent bosonic baths and (II) one common bath. We find that in the case (II) the existence of a decoherence-free subspace (DFS) makes entanglement dynamics very rich. We show that when the system is initially in a state with a component in the DFS the relaxation time is surprisingly long, showing the existence of semi-decoherence free subspaces.
Hero/Heroine Modeling for Puerto Rican Adolescents: A Preventive Mental Health Intervention.
ERIC Educational Resources Information Center
Malgady, Robert G.; And Others
1990-01-01
Developed hero/heroine intervention based on adult Puerto Rican role models to foster ethnic identity, self-concept, and adaptive coping behavior. Screened 90 Puerto Rican eighth and ninth graders for presenting behavior problems in school and randomly assigned them to intervention or control groups. After 19 sessions, intervention significantly…
Hepark, Sevket; Janssen, Lotte; de Vries, Alicia; Schoenberg, Poppy L A; Donders, Rogier; Kan, Cornelis C; Speckens, Anne E M
2015-11-20
The aim of this study was to examine the effectiveness of mindfulness as a treatment for adults diagnosed with ADHD. A 12-week-adapted mindfulness-based cognitive therapy (MBCT) program is compared with a waiting list (WL) group. Adults with ADHD were randomly allocated to MBCT (n = 55) or waitlist (n = 48). Outcome measures included investigator-rated ADHD symptoms (primary), self-reported ADHD symptoms, executive functioning, depressive and anxiety symptoms, patient functioning, and mindfulness skills. MBCT resulted in a significant reduction of ADHD symptoms, both investigator-rated and self-reported, based on per-protocol and intention-to-treat analyses. Significant improvements in executive functioning and mindfulness skills were found. Additional analyses suggested that the efficacy of MBCT in reducing ADHD symptoms and improving executive functioning is partially mediated by an increase in the mindfulness skill "Act With Awareness." No improvements were observed for depressive and anxiety symptoms, and patient functioning. This study provides preliminary support for the effectiveness of MBCT for adults with ADHD. © The Author(s) 2015.
Heuristic approach to image registration
NASA Astrophysics Data System (ADS)
Gertner, Izidor; Maslov, Igor V.
2000-08-01
Image registration, i.e., the correct mapping of images obtained from different sensor readings onto a common reference frame, is a critical part of multi-sensor ATR/AOR systems based on readings from different types of sensors. In order to fuse two different sensor readings of the same object, the readings have to be put into a common coordinate system. This task can be formulated as an optimization problem in the space of all possible affine transformations of an image. In this paper, a combination of heuristic methods is explored to register gray-scale images. A modification of the Genetic Algorithm is used as the first step in the global search for the optimal transformation. It covers the entire search space with (randomly or heuristically) scattered probe points and helps significantly reduce the search space to a subspace of potentially most successful transformations. Due to its discrete character, however, the Genetic Algorithm in general cannot converge while coming close to the optimum. Its termination point can be specified either as some predefined number of generations or as achievement of a certain acceptable convergence level. To refine the search, potential optimal subspaces are searched using Tabu Search and Simulated Annealing, which are more delicate and efficient methods for local search.
A real negative selection algorithm with evolutionary preference for anomaly detection
NASA Astrophysics Data System (ADS)
Yang, Tao; Chen, Wen; Li, Tao
2017-04-01
Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm termination threshold and generate detectors randomly. With increasing dimensions, the data samples may reside in a low-dimensional subspace, so that the traditional detectors cannot effectively distinguish these samples. Furthermore, in a high-dimensional feature space, c0 cannot exactly reflect the detector set's coverage rate of the nonself space, which can lead the algorithm to terminate unexpectedly when the number of detectors is insufficient. These shortcomings make traditional RNSAs perform poorly in high-dimensional feature spaces. Based upon "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", "low-dimensional target subspace" and "known nonself features" as the evolutionary preference to guide the generation of detectors, thus ensuring that the detectors cover the nonself space more effectively. In addition, RNSAP uses redundancy to replace c0 as the termination threshold; in this way RNSAP can generate adequate detectors under a proper convergence rate. Theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP can achieve a higher detection rate with fewer detectors and less computing cost.
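A baseline real-valued negative selection scheme (without RNSAP's evolutionary preference) can be sketched as follows; the hypersphere detector shape, radius rule, and parameters are illustrative assumptions:

```python
import numpy as np

def generate_detectors(self_set, n_detectors, self_radius, rng):
    """Random real-valued negative-selection detectors in [0,1]^d.

    A candidate centre is kept only if it lies farther than self_radius
    from every self sample; its radius is that distance minus
    self_radius, so the detector covers nonself space only. This is
    the classical random-generation baseline the abstract criticizes,
    not RNSAP itself.
    """
    d = self_set.shape[1]
    detectors = []
    while len(detectors) < n_detectors:
        c = rng.random(d)                                   # random candidate centre
        dist = np.min(np.linalg.norm(self_set - c, axis=1))
        if dist > self_radius:                              # survives negative selection
            detectors.append((c, dist - self_radius))
    return detectors

def is_anomaly(x, detectors):
    """A point is anomalous (nonself) if any detector covers it."""
    return any(np.linalg.norm(x - c) < r for c, r in detectors)

rng = np.random.default_rng(5)
self_set = rng.random((50, 2)) * 0.3          # toy self region near one corner
dets = generate_detectors(self_set, 30, 0.05, rng)
print(is_anomaly(np.array([0.9, 0.9]), dets))
```

RNSAP replaces the blind `rng.random(d)` candidate generation with preference-guided evolution and replaces the coverage estimate c0 with a redundancy-based stopping rule.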
NASA Astrophysics Data System (ADS)
Sim, Sung-Han; Spencer, Billie F., Jr.; Park, Jongwoong; Jung, Hyungjo
2012-04-01
Wireless Smart Sensor Networks (WSSNs) facilitate a new paradigm for structural identification and monitoring of civil infrastructure. Conventional monitoring systems based on wired sensors and centralized data acquisition and processing have been considered challenging and costly due to cabling and expensive equipment and maintenance costs. WSSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks, in which centralized data acquisition and processing is common practice, WSSNs require decentralized computing algorithms to reduce data transmission due to the limitations associated with wireless communication. Thus, several system identification methods have been implemented to process sensor data and extract essential information, including the Natural Excitation Technique with the Eigensystem Realization Algorithm, Frequency Domain Decomposition (FDD), and the Random Decrement Technique (RDT); however, Stochastic Subspace Identification (SSI) has not been fully utilized in WSSNs, although SSI has strong potential to enhance system identification. This study presents decentralized system identification using SSI in WSSNs. The approach is implemented on MEMSIC's Imote2 sensor platform and experimentally verified using a 5-story shear building model.
Low rank approach to computing first and higher order derivatives using automatic differentiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.
2012-07-01
This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
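The rank-revealing idea behind ESM can be sketched without an AD package by probing the model with directional derivatives along random directions and inspecting the singular values of the responses; the finite-difference probe, toy model, and tolerance below are stand-ins for the AD derivatives used in the paper:

```python
import numpy as np

def effective_rank(f, x0, n_probe, rng, eps=1e-6, tol=1e-8):
    """Estimate the numerical rank of the Jacobian of f at x0.

    Collects directional derivatives along random unit probe directions
    (central finite differences stand in for AD here) and counts the
    singular values above tol relative to the largest one. Sketch of
    the ESM rank-revealing step only.
    """
    n = x0.size
    Y = np.empty((f(x0).size, n_probe))
    for j in range(n_probe):
        d = rng.standard_normal(n)
        d /= np.linalg.norm(d)
        Y[:, j] = (f(x0 + eps * d) - f(x0 - eps * d)) / (2 * eps)  # J @ d + O(eps^2)
    s = np.linalg.svd(Y, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# toy model with 50 inputs whose output depends only on a rank-3 mixture
rng = np.random.default_rng(6)
A = rng.standard_normal((3, 50))
f = lambda x: np.sin(A @ x)        # outputs in R^3, so Jacobian rank <= 3
rank = effective_rank(f, rng.standard_normal(50), n_probe=10, rng=rng)
print(rank)                        # 3
```

Once the effective rank r is known, only r derivatives with respect to pseudo variables (random combinations of the inputs) need to be computed, instead of one per input, which is the source of the savings the abstract describes.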
DeJoy, David M.; Vandenberg, Robert J.; Corso, Phaedra; Padilla, Heather; Zuercher, Heather
2016-01-01
Objective To evaluate the effectiveness of the Fuel Your Life program, an adaptation of the Diabetes Prevention Program, utilizing implementation strategies commonly used in worksite programs – telephone coaching, small group coaching and self-study. Methods The primary outcomes of BMI and weight were examined in a randomized control trial conducted with city/county employees. Results Although the majority of participants in all three groups lost some weight, the phone group lost significantly more weight (4.9 lbs.), followed by the small groups (3.4 lbs.) and the self-study (2.7 lbs.). Of the total participants, 28.3% of the phone group, 20.6% of the small group and 15.7 of the self-study group lost 5% or more of their body weight. Conclusions Fuel Your Life (DPP) can be effectively disseminated using different implementation strategies that are tailored to the workplace. PMID:27820761
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search through the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-Hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to solving other combinatorial optimization problems.
Carvajal, Gonzalo; Figueroa, Miguel
2014-07-01
Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods.
Cowley, Benjamin; Holmström, Édua; Juurmaa, Kristiina; Kovarskis, Levas; Krause, Christina M.
2016-01-01
Background: We report a randomized controlled clinical trial of a neurofeedback therapy intervention for ADHD/ADD in adults. We focus on the internal mechanics of neurofeedback learning, to elucidate the primary role of cortical self-regulation in neurofeedback. We report initial results; more extensive analysis will follow. Methods: The trial has two phases: intervention and follow-up. The intervention consisted of neurofeedback treatment, including intake and outtake measurements, using a waiting-list control group. Treatment involved ~40 h-long sessions 2–5 times per week. Training involved either theta/beta or sensorimotor-rhythm regimes, adapted by adding a novel “inverse-training” condition to promote self-regulation. Follow-up (ongoing) will consist of self-report and executive function tests. Setting: Intake and outtake measurements were conducted at the University of Helsinki. Treatment was administered at the partner clinic Mental Capital Care, Helsinki. Randomization: We randomly allocated half the sample, then adaptively allocated the remainder to minimize baseline differences in prognostic variables. Blinding: The waiting-list control design meant the trial was not blinded. Participants: Fifty-four adult Finnish participants (mean age 36 years; 29 females) were recruited after screening by psychiatric review. Forty-four had ADHD diagnoses, 10 had ADD. Measurements: Symptoms were assessed by a computerized attention test (T.O.V.A.) and self-report scales, at intake and outtake. Performance during neurofeedback trials was recorded. Results: Participants were recruited and completed intake measurements during summer 2012, before assignment to treatment and control groups in September 2012. Outtake measurements ran April–August 2013. After dropouts, 23 treatment and 21 waiting-list participants remained for analysis.
Initial analysis showed that, compared to waiting-list control, neurofeedback promoted improvement of self-reported ADHD symptoms, but did not show transfer of learning to T.O.V.A. Comprehensive analysis will be reported elsewhere. Trial Registration: “Computer Enabled Neuroplasticity Treatment (CENT),” ISRCTN13915109. PMID:27242472
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. An input space partitioning approach based on fitness measures that constitute an a priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces, and their use in the multiclassifier fusion architecture are presented. In particular, a two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine (SVM) based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
Elongation cutoff technique armed with quantum fast multipole method for linear scaling.
Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko
2009-11-30
A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, namely diagonalization, because it operates within a low-dimensional subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that ELG/C is a very efficient sparse matrix algebra scheme.
DeFife, Jared A; Goldberg, Melissa; Westen, Drew
2015-04-01
Central to the proposed DSM-5 general definition of personality disorder (PD) are features of self- and interpersonal functioning. The Social Cognition and Object Relations Scale-Global Rating Method (SCORS-G) is a coding system that assesses eight dimensions of self- and relational experience that can be applied to narrative data or used by clinically experienced observers to quantify observations of patients in ongoing psychotherapy. This study aims to evaluate the relationship of SCORS-G dimensions to personality pathology in adolescents and their incremental validity for predicting multiple domains of adaptive functioning. A total of 294 randomly sampled doctoral-level clinical psychologists and psychiatrists described an adolescent patient in their care based on all available data. Individual SCORS-G variables demonstrated medium-to-large effect size differences for PD versus non-PD identified adolescents (d = .49-1.05). A summary SCORS-Composite rating was significantly related to composite measurements of global adaptive functioning (r = .66), school functioning (r = .47), externalizing behavior (r = -.49), and prior psychiatric history (r = -.31). The SCORS-Composite significantly predicted variance in domains of adaptive functioning above and beyond age and DSM-IV PD diagnosis (ΔR²s = .07-.32). As applied to adolescents, the SCORS-G offers a framework for a clinically meaningful and empirically sound dimensional assessment of self- and other representations and interpersonal functioning capacities. Our findings support the inclusion of self- and interpersonal capacities in the DSM-5 general definition of personality disorder as an improvement to existing PD diagnosis for capturing varied domains of adaptive functioning and psychopathology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druskin, V.; Lee, Ping; Knizhnerman, L.
There is now growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When computing the action of the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
Wang, Guiming; Hobbs, N Thompson; Galbraith, Hector; Giesen, Kenneth M
2002-09-01
Global climate change may impact wildlife populations by affecting local weather patterns, which, in turn, can impact a variety of ecological processes. However, it is not clear that local variations in ecological processes can be explained by large-scale patterns of climate. The North Atlantic oscillation (NAO) is a large-scale climate phenomenon that has been shown to influence the population dynamics of some animals. Although effects of the NAO on vertebrate population dynamics have been studied, it remains uncertain whether it broadly predicts the impact of weather on species. We examined the ability of local weather data and the NAO to explain the annual variation in population dynamics of white-tailed ptarmigan (Lagopus leucurus) in Rocky Mountain National Park, USA. We performed canonical correlation analysis on the demographic subspace of ptarmigan and the local-climate subspace defined by empirical orthogonal functions (EOFs), using data from 1975 to 1999. We found that the two subspaces were significantly correlated on the first canonical variable. The Pearson correlation coefficient of the first EOF values of the demographic and local-climate subspaces was significant. The population density and the first EOF of the local-climate subspace influenced the ptarmigan population with 1-year lags in the Gompertz model. However, the NAO index was neither related to the first two EOFs of the local-climate subspace nor to the first EOF of the demographic subspace of ptarmigan. Moreover, the NAO index was not a significant term in the Gompertz model for the ptarmigan population. Therefore, local climate had a stronger signature on the demography of ptarmigan than did a large-scale index, i.e., the NAO index. We conclude that local responses of wildlife populations to changing climate may not be adequately explained by models that project large-scale climatic patterns.
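The EOF decomposition used above to define the local-climate subspace is, in essence, a principal component analysis of the anomaly matrix. A minimal SVD-based sketch (the helper name and the years-by-variables data layout are assumptions, not the authors' code):

```python
import numpy as np

def eof_analysis(X, k):
    """Leading k EOFs of a (years x variables) data matrix via SVD.
    Rows are observations (years); columns are weather variables."""
    A = X - X.mean(axis=0)               # anomalies about the mean climate
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    eofs = Vt[:k]                        # variable-space patterns
    pcs = U[:, :k] * s[:k]               # yearly amplitudes of each EOF
    var_frac = s[:k] ** 2 / np.sum(s ** 2)
    return eofs, pcs, var_frac
```

The leading EOFs are the right singular vectors of the anomaly matrix; the squared singular values give the fraction of total variance each EOF explains.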
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts to online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.
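The incremental SVD step at the heart of such dynamic updating can be illustrated with a Brand-type update: when a new data column arrives, only a small (k+1) x (k+1) core matrix has to be re-diagonalized rather than the whole data matrix. A simplified sketch (the function name and the degenerate-residual handling are ours, not the paper's algorithm):

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update a thin SVD X = U diag(s) Vt when a new column c is
    appended to X (simplified Brand-style update)."""
    p = U.T @ c                  # component of c inside the current subspace
    r = c - U @ p                # residual orthogonal to the subspace
    rho = np.linalg.norm(r)
    k = s.size
    # small (k+1) x (k+1) core matrix, re-diagonalized cheaply
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(s)
    K[:k, k] = p
    K[k, k] = rho
    Uk, sk, Vtk = np.linalg.svd(K)
    j = r / rho if rho > 1e-12 else np.zeros_like(c)   # assumes non-degenerate c
    U_new = np.hstack([U, j[:, None]]) @ Uk
    V_ext = np.zeros((Vt.shape[1] + 1, k + 1))         # extend V by one row
    V_ext[:-1, :k] = Vt.T
    V_ext[-1, k] = 1.0
    Vt_new = (V_ext @ Vtk.T).T
    return U_new, sk, Vt_new
```

The update costs O(k) dense SVD work per new sample instead of a full re-factorization, which is the source of the claimed savings for dynamic data.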
Detecting coupled collective motions in protein by independent subspace analysis
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Joti, Yasumasa; Kitao, Akio
2010-11-01
Protein dynamics evolves in a high-dimensional space, comprising anharmonic, strongly correlated motional modes. Such correlation often plays an important role in analyzing protein function. In order to identify significantly correlated collective motions, here we employ independent subspace analysis based on the subspace joint approximate diagonalization of eigenmatrices algorithm for the analysis of molecular dynamics (MD) simulation trajectories. From the 100 ns MD simulation of T4 lysozyme, we extract several independent subspaces, in each of which collective modes are significantly correlated, and identify the other modes as independent. This method successfully detects the modes along which long-tailed non-Gaussian probability distributions are obtained. Based on the time cross-correlation analysis, we identified a series of events among domain motions and more localized motions in the protein, indicating the connection between functionally relevant phenomena which have been independently revealed by experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Leahy, R.M.
A new method for source localization is described that is based on a modification of the well known multiple signal classification (MUSIC) algorithm. In classical MUSIC, the array manifold vector is projected onto an estimate of the signal subspace, but errors in the estimate can make location of multiple sources difficult. Recursively applied and projected (RAP) MUSIC uses each successively located source to form an intermediate array gain matrix, and projects both the array manifold and the signal subspace estimate into its orthogonal complement. The MUSIC projection is then performed in this reduced subspace. Using the metric of principal angles, the authors describe a general form of the RAP-MUSIC algorithm for the case of diversely polarized sources. Through a uniform linear array simulation, the authors demonstrate the improved Monte Carlo performance of RAP-MUSIC relative to MUSIC and two other sequential subspace methods, S-MUSIC and IES-MUSIC.
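The classical MUSIC scan that RAP-MUSIC refines can be sketched for a uniform linear array; peaks of the pseudospectrum indicate source directions. This is a generic illustration, not the authors' code: the function name, array geometry, and parameters are assumptions, and the recursive projection step of RAP-MUSIC is not shown.

```python
import numpy as np

def music_spectrum(R, n_sources, angles, d=0.5):
    """MUSIC pseudospectrum over candidate angles (radians) for a uniform
    linear array with element spacing d in wavelengths, given the sample
    covariance R of the array snapshots."""
    m = R.shape[0]
    _, E = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = E[:, :m - n_sources]                # noise-subspace eigenvectors
    k = np.arange(m)
    p = np.empty(angles.size)
    for i, th in enumerate(angles):
        a = np.exp(2j * np.pi * d * k * np.sin(th))   # steering vector
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p
```

At a true source direction the steering vector is (nearly) orthogonal to the noise subspace, so the denominator collapses and the pseudospectrum peaks sharply.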
Independence and totalness of subspaces in phase space methods
NASA Astrophysics Data System (ADS)
Vourdas, A.
2018-04-01
The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
Dytrych, T.; Maris, P.; Launey, K. D.; ...
2016-06-22
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell’s strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1998-01-01
Subspace and Lanczos iterations have been developed, well documented, and widely accepted as efficient methods for obtaining the p lowest eigenpairs of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancement. Numerical performance, in terms of the accuracy and efficiency of the proposed sparse strategies for the Subspace and Lanczos algorithms, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on the IBM-R6000/590 and SunSparc 20 workstations.
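A bare-bones version of the Lanczos iteration used to obtain the p lowest eigenpairs can be written as follows. This is a teaching sketch, not the paper's vectorized sparse implementation: it uses dense NumPy arrays and full reorthogonalization for clarity rather than speed, and assumes no breakdown occurs.

```python
import numpy as np

def lanczos_lowest(A, p, m=70, seed=0):
    """Approximate the p lowest eigenpairs of a symmetric matrix A with
    m Lanczos steps (full reorthogonalization; assumes no breakdown)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    theta, S = np.linalg.eigh(T)                   # Ritz values of the tridiagonal
    return theta[:p], Q @ S[:, :p]                 # lowest Ritz pairs
```

The extreme (lowest and highest) Ritz values converge first, which is exactly the behavior exploited when solving for the lowest structural frequencies and mode shapes.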
On the Convergence of an Implicitly Restarted Arnoldi Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, Richard B.
We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
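The simultaneous (orthogonal) iteration that the paper identifies inside IRAM can be sketched directly: repeated multiplication by A followed by re-orthogonalization converges to the dominant invariant subspace, at a rate governed by the eigenvalue magnitude ratio across the subspace boundary. An illustrative sketch (names and iteration counts are ours):

```python
import numpy as np

def subspace_iteration(A, k, iters=100, seed=0):
    """Orthogonal (simultaneous) iteration: converges to the dominant
    k-dimensional invariant subspace of A."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)   # multiply, then re-orthogonalize
    return Q   # orthonormal basis for the (approximate) invariant subspace
```

IRAM achieves the same limit far more efficiently by building a Krylov subspace and restarting implicitly, which is what the cited super-linear convergence result makes precise.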
Portfolios and the market geometry
NASA Astrophysics Data System (ADS)
Eleutério, Samuel; Araújo, Tanya; Vilela Mendes, R.
2014-09-01
A geometric analysis of return time series, performed in the past, implied that most of the systematic information in the market is contained in a space of small dimension. Here we have explored subspaces of this space to find out the relative performance of portfolios formed from companies that have the largest projections in each one of the subspaces. As expected, it was found that the best-performing portfolios are associated with some of the small-eigenvalue subspaces and not with the dominant dimensions. This is found to occur in a systematic fashion over an extended period (1990-2008).
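The portfolio construction described above can be sketched as: compute the eigenvectors of the return correlation matrix, measure each company's projection onto a chosen eigen-subspace, and weight the companies with the largest projections. This is our reading of the procedure, not the authors' code; the names and the equal-weighting choice are assumptions.

```python
import numpy as np

def subspace_portfolio(returns, eig_indices, n_stocks):
    """Pick the n_stocks companies with the largest projection onto the
    subspace spanned by the chosen eigenvectors of the return correlation
    matrix, and form an equally weighted portfolio of them."""
    C = np.corrcoef(returns, rowvar=False)   # returns: (days x companies)
    _, V = np.linalg.eigh(C)                 # eigenvalues ascending
    basis = V[:, eig_indices]                # e.g. small-eigenvalue directions
    proj = np.linalg.norm(basis, axis=1)     # per-company projection length
    picks = np.argsort(proj)[-n_stocks:]
    weights = np.zeros(returns.shape[1])
    weights[picks] = 1.0 / n_stocks
    return picks, weights
```

Passing small indices of `eig_indices` (the low-eigenvalue directions) mirrors the paper's finding that the interesting portfolios live away from the dominant market mode.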
Integrated head package cable carrier for a nuclear power plant
Meuschke, Robert E.; Trombola, Daniel M.
1995-01-01
A cabling arrangement is provided for a nuclear reactor located within a containment. Structure inside the containment is characterized by a wall having a near side surrounding the reactor vessel defining a cavity, an operating deck outside the cavity, a sub-space below the deck and on a far side of the wall spaced from the near side, and an operating area above the deck. The arrangement includes a movable frame supporting a plurality of cables extending through the frame, each connectable at a first end to a head package on the reactor vessel and each having a second end located in the sub-space. The frame is movable, with the cables, between a first position during normal operation of the reactor when the cables are connected to the head package, located outside the sub-space proximate the head package, and a second position during refueling when the cables are disconnected from the head package, located in the sub-space. In a preferred embodiment, the frame straddles the top of the wall in a substantially horizontal orientation in the first position, pivots about an end distal from the head package to a substantially vertically oriented intermediate position, and is guided, while remaining about vertically oriented, along a track in the sub-space to the second position.
Low-Rank Tensor Subspace Learning for RGB-D Action Recognition.
Jia, Chengcheng; Fu, Yun
2016-07-09
Since RGB-D action data inherently come with extra depth information compared with RGB data, many recent works employ RGB-D data in a third-order tensor representation containing spatio-temporal structure to find a subspace for action recognition. However, these methods face two main challenges. First, the dimension of the subspace is usually fixed manually. Second, preserving local information by finding intra-class and inter-class neighbors from a manifold is highly time-consuming. In this paper, we learn a tensor subspace, whose dimension is determined automatically by low-rank learning, for RGB-D action recognition. In particular, the tensor samples are factorized to obtain three Projection Matrices (PMs) by Tucker Decomposition, where nuclear-norm minimization on the PMs is solved in closed form to obtain the tensor ranks, which are used as the tensor subspace dimensions. Additionally, we extract discriminant and local information from a manifold using a graph constraint. This graph preserves local knowledge inherently, which is faster than the previous approach of calculating both the intra-class and inter-class neighbors of each sample. We evaluate the proposed method on four widely used RGB-D action datasets, including the MSRDailyActivity3D, MSRActionPairs, MSRActionPairs skeleton and UTKinect-Action3D datasets, and the experimental results show the higher accuracy and efficiency of the proposed method.
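The Tucker factorization that produces one projection matrix per tensor mode can be illustrated with a truncated higher-order SVD (HOSVD), a standard closed-form construction. This is only a stand-in sketch: the paper's nuclear-norm rank selection and graph constraint are not reproduced, and the mode ranks are passed in explicitly here.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a third-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: one projection matrix (PM) per mode, obtained from
    the SVD of each unfolding, plus the projected core tensor."""
    pms = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
           for n, r in enumerate(ranks)]
    core = T
    for n, U in enumerate(pms):   # project each mode onto its PM
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, pms
```

Reapplying the projection matrices to the core reconstructs the tensor; for a tensor whose multilinear ranks match the requested ranks the reconstruction is exact.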
Unlü, Burçin; Riper, Heleen; van Straten, Annemieke; Cuijpers, Pim
2010-11-04
The Turkish population living in The Netherlands has a high prevalence of psychological complaints and a high threshold for seeking professional help for these problems. Seeking help through the Internet can overcome these barriers. This project aims to evaluate the effectiveness of a culturally adapted, web-based guided self-help problem-solving intervention for depressed Turkish migrants. This study is a randomized controlled trial with two arms: an experimental group and a waiting-list control group. The experimental group obtains direct access to the guided web-based self-help intervention, which is based on Problem Solving Treatment (PST) and takes 6 weeks to complete. Turkish adults with mild to moderate depressive symptoms will be recruited from the general population, and participants can choose between a Turkish and a Dutch version. The primary outcome measure is the reduction of depressive symptoms; the secondary outcome measures are somatic symptoms, anxiety, acculturation, quality of life, and satisfaction. Participants are assessed at baseline, post-test (6 weeks), and 4 months after baseline. Analysis will be conducted on the intention-to-treat sample. This study evaluates the effectiveness of a culturally adapted, web-based guided problem-solving intervention for Turkish adults living in The Netherlands. Nederlands Trial Register: NTR2303.
A Perron-Frobenius type of theorem for quantum operations
NASA Astrophysics Data System (ADS)
Lagro, Matthew
Quantum random walks are a generalization of classical Markovian random walks to a quantum mechanical or quantum computing setting. Quantum walks have promising applications but are complicated by quantum decoherence. We prove that the long-time limiting behavior of the class of quantum operations which are convex combinations of norm-one operators is governed by the eigenvectors with norm-one eigenvalues which are shared by the operators. This class includes all operations formed by a coherent operation with positive probability of orthogonal measurement at each step. We also prove that any operation whose range is contained in a sufficiently low-dimensional subspace of the space of density operators has limiting behavior isomorphic to an associated Markov chain. A particular class of such operations is coherent operations followed by an orthogonal measurement. Applications of the convergence theorems to quantum walks are given.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. 
We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
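The basic SCgPC ingredient, computing polynomial-chaos coefficients from model evaluations at Gaussian quadrature (collocation) points, can be sketched in one dimension with probabilists' Hermite polynomials. The function name and the single-variable setting are simplifying assumptions; RAVEN's implementation handles multivariate, truncated, and anisotropic index sets.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(f, order):
    """Coefficients c_k of f(xi) ~ sum_k c_k He_k(xi), xi ~ N(0,1), by
    Gauss-Hermite collocation; orthogonality gives E[He_j He_k] = k! delta_jk."""
    x, w = hermegauss(order + 2)             # probabilists' Gauss-Hermite rule
    w = w / np.sqrt(2.0 * np.pi)             # normalize to the N(0,1) measure
    fx = f(x)                                # model evaluations at collocation points
    c = np.empty(order + 1)
    for k in range(order + 1):
        ek = np.zeros(k + 1); ek[k] = 1.0
        c[k] = np.sum(w * fx * hermeval(x, ek)) / math.factorial(k)
    return c
```

For the quadratic f(xi) = xi^2 + 2 xi + 3, the exact expansion is 4 He_0 + 2 He_1 + 1 He_2, and the zeroth coefficient is the output mean, illustrating how moments fall out of the expansion for free.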
Low-order black-box models for control system design in large power systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamwa, I.; Trudel, G.; Gerin-Lajoie, L.
1996-02-01
The paper studies two multi-input multi-output (MIMO) procedures for the identification of low-order state-space models of power systems, by probing the network in open loop with low-energy pulses or random signals. Although such data may result from actual measurements, the development assumes simulated responses from a transient stability program, hence benefiting from the existing large base of stability models. While pulse data is processed using the eigensystem realization algorithm, the analysis of random responses is done by means of subspace identification methods. On a prototype Hydro-Quebec power system, including SVCs, DC lines, series compensation, and more than 1,100 buses, it is verified that the two approaches are equivalent only when strict requirements are imposed on the pulse length and magnitude. The 10th-order equivalent models derived by random-signal probing allow for effective tuning of decentralized power system stabilizers (PSSs) able to damp both local and very slow inter-area modes.
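The eigensystem realization algorithm (ERA) used for the pulse data admits a compact sketch in the single-input single-output case: stack the pulse-response (Markov) parameters into Hankel matrices, truncate the SVD at the desired model order, and read off a state-space realization (A, B, C). The names and the SISO restriction are ours; the paper's MIMO setting is analogous.

```python
import numpy as np

def era(markov, order):
    """SISO ERA: identify a discrete state-space model (A, B, C) from the
    pulse-response Markov parameters h_1, h_2, ... (markov[0] = h_1)."""
    m = len(markov) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, sr, Vr = U[:, :order], np.sqrt(s[:order]), Vt[:order]
    A = np.diag(1 / sr) @ Ur.T @ H1 @ Vr.T @ np.diag(1 / sr)
    B = (np.diag(sr) @ Vr)[:, 0]     # first column of the controllability factor
    C = (Ur @ np.diag(sr))[0, :]     # first row of the observability factor
    return A, B, C
```

When the Hankel matrix has exact rank equal to the chosen order, the identified model reproduces the original Markov parameters, which is the consistency property the pulse-probing approach relies on.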
Multiple Source DF (Direction Finding) Signal Processing: An Experimental System,
The MUltiple SIgnal Classification (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of...the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been
Assessment of Physical Activity by Applying IPAQ Questionnaire
ERIC Educational Resources Information Center
Biernat, Elzbieta; Stupnicki, Romuald; Lebiedzinski, Bartlomiej; Janczewska, Lidia
2008-01-01
Study aim: To assess the suitability of the short 7-day IPAQ (self-completed) adapted to the Polish population. Material and methods: Two surveys were conducted in 2005 on 296 random subjects (aged 20-60 years) from Warsaw and the Mazowiecki region. From these, 54 men and 79 women were requested to fill in questionnaires, and 70 men and 93 women were…
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting: both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
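In the linear-Gaussian special case, the data-informed parameter subspace has a closed form that makes the idea concrete: with identity prior covariance and isotropic observation noise, the informed directions are the leading right singular vectors of the noise-whitened forward operator. A sketch under those simplifying assumptions (not the paper's general nonlinear construction; the function name is ours):

```python
import numpy as np

def likelihood_informed_basis(G, sigma, k):
    """For a linear forward model y = G x + e, e ~ N(0, sigma^2 I), and
    prior x ~ N(0, I): return an orthonormal basis for the k parameter
    directions most informed by the data, with their whitened singular
    values (linear-Gaussian sketch of the parameter-subspace idea)."""
    _, s, Vt = np.linalg.svd(G / sigma, full_matrices=False)
    return Vt[:k].T, s[:k]
```

Directions with whitened singular values well below 1 are dominated by the prior, so restricting sampling to the complementary informed subspace is what yields the claimed acceleration.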
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and to perform clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
NASA Astrophysics Data System (ADS)
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and to perform clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
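The iterative subspace-selection loop described in this abstract can be caricatured in a few lines. The sketch below substitutes a simple one-dimensional two-means step for the paper's Gaussian mixture model, fixes the number of clusters at two, and uses synthetic "spike waveforms"; it only illustrates the alternation between LDA projection and reclustering:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spike waveforms": two units with different templates embedded
# in 40-sample windows of noise (stand-ins for real extracellular spikes).
t = np.linspace(0, 1, 40)
templates = np.vstack([np.sin(2 * np.pi * t), np.sin(4 * np.pi * t)])
labels_true = rng.integers(0, 2, size=300)
X = templates[labels_true] + 0.3 * rng.standard_normal((300, 40))

def fisher_direction(X, labels):
    """Two-class LDA: direction maximizing between- over within-class scatter."""
    m0, m1 = X[labels == 0].mean(0), X[labels == 1].mean(0)
    Sw = np.cov(X[labels == 0].T) + np.cov(X[labels == 1].T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)

# Initial split along the first principal component, then alternate
# LDA projection and (here) a two-means reclustering step.
Xc = X - X.mean(0)
z0 = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]
labels = (z0 > np.median(z0)).astype(int)
for _ in range(10):
    z = X @ fisher_direction(X, labels)
    c0, c1 = z[labels == 0].mean(), z[labels == 1].mean()
    labels = (np.abs(z - c1) < np.abs(z - c0)).astype(int)

# Accuracy up to label permutation.
accuracy = max(np.mean(labels == labels_true), np.mean(labels != labels_true))
```

The paper's full method additionally detects the number of clusters with a statistical test and rejects outliers inside the mixture model; neither is reproduced here.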
NASA Astrophysics Data System (ADS)
Jarvis, Jan; Haertelt, Marko; Hugger, Stefan; Butschek, Lorenz; Fuchs, Frank; Ostendorf, Ralf; Wagner, Joachim; Beyerer, Juergen
2017-04-01
In this work, we present data analysis algorithms for the detection of hazardous substances in hyperspectral observations acquired using active mid-infrared (MIR) backscattering spectroscopy. We present a novel background extraction algorithm based on the adaptive target generation process proposed by Ren and Chang, called the adaptive background generation process (ABGP), which generates a robust and physically meaningful set of background spectra for operation of the well-known adaptive matched subspace detection (AMSD) algorithm. It is shown that the resulting AMSD-ABGP detection algorithm competes well with other widely used detection algorithms. The method is demonstrated on measurement data obtained by two fundamentally different active MIR hyperspectral data acquisition devices. A hyperspectral image sensor applicable to static scenes takes a wavelength-sequential approach to hyperspectral data acquisition, whereas a rapid wavelength-scanning single-element detector variant of the same principle uses spatial scanning to generate the hyperspectral observation. It is shown that the measurement timescale of the latter is sufficient for the application of the data analysis algorithms even in dynamic scenarios.
Experimental state control by fast non-Abelian holonomic gates with a superconducting qutrit
NASA Astrophysics Data System (ADS)
Danilin, S.; Vepsäläinen, A.; Paraoanu, G. S.
2018-05-01
Quantum state manipulation with gates based on geometric phases acquired during cyclic operations promises inherent fault-tolerance and resilience to local fluctuations in the control parameters. Here we create a general non-Abelian and non-adiabatic holonomic gate acting in the (∣0〉, ∣2〉) subspace of a three-level (qutrit) transmon device fabricated in a fully coplanar design. Experimentally, this is realized by simultaneously coupling the first two transitions by microwave pulses with amplitudes and phases defined such that the condition of parallel transport is fulfilled. We demonstrate the creation of arbitrary superpositions in this subspace by changing the amplitudes of the pulses and the relative phase between them. We use two-photon pulses acting in the holonomic subspace to reveal the coherence of the state created by the geometric gate pulses and to prepare different superposition states. We also test the action of holonomic NOT and Hadamard gates on superpositions in the (∣0〉, ∣2〉) subspace.
NASA Astrophysics Data System (ADS)
Zhang, Peng; Peng, Jing; Sims, S. Richard F.
2005-05-01
In ATR applications, each feature is a convolution of an image with a filter. It is important to use the most discriminant features to produce compact representations. We propose two novel subspace methods for dimension reduction to address limitations associated with the Fukunaga-Koontz Transform (FKT). The first method, Scatter-FKT, assumes that the target is relatively homogeneous, while clutter can be anything other than the target and can appear anywhere. Thus, instead of estimating a clutter covariance matrix, Scatter-FKT computes a clutter scatter matrix that measures the spread of clutter from the target mean. We choose dimensions along which the difference in variation between target and clutter is most pronounced. When the target follows a Gaussian distribution, Scatter-FKT can be viewed as a generalization of FKT. The second method, Optimal Bayesian Subspace (OBS), is derived from the optimal Bayesian classifier. It selects dimensions such that the minimum Bayes error rate can be achieved. When both target and clutter follow Gaussian distributions, OBS computes optimal subspace representations. We compare our methods against FKT using character images as well as IR data.
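The Scatter-FKT recipe above can be sketched numerically. Everything below (the synthetic target and clutter, the 10-dimensional feature space, the number of retained dimensions) is an illustrative assumption, not the authors' exact procedure; the sketch only shows the mechanics of whitening the summed scatter and ranking axes by how far their eigenvalues sit from 1/2:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: "target" is compact, "clutter" is spread out and offset.
target = rng.standard_normal((500, 10)) * 0.5 + 1.0
clutter = rng.standard_normal((500, 10)) * np.linspace(0.5, 4.0, 10)

mu_t = target.mean(0)
S_t = np.cov(target.T)
# Clutter *scatter* about the target mean (not a clutter covariance):
D = clutter - mu_t
S_c = D.T @ D / len(D)

# FKT-style step: whiten the summed scatter, then eigenanalyze the whitened
# target scatter. Since the two whitened scatters sum to the identity, axes
# whose eigenvalues are far from 1/2 are the discriminative ones.
evals, evecs = np.linalg.eigh(S_t + S_c)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T       # whitening transform
lam, V = np.linalg.eigh(W @ S_t @ W)
order = np.argsort(np.abs(lam - 0.5))[::-1]        # most discriminative first
basis = (W @ V)[:, order[:3]]                      # keep a 3-D subspace
```

Eigenvalues near 1 mark target-dominated axes and eigenvalues near 0 mark clutter-dominated ones; both extremes carry discriminative information, which is the FKT selection principle.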
Improvement in the Accuracy of Matching by Different Feature Subspaces in Traffic Sign Recognition
NASA Astrophysics Data System (ADS)
Ihara, Arihito; Fujiyoshi, Hironobu; Takaki, Masanari; Kumon, Hiroaki; Tamatsu, Yukimasa
A technique for recognizing traffic signs in images taken with an in-vehicle camera has already been proposed as a driver-assistance aid. The SIFT feature is used for traffic sign recognition because it is robust to changes in the scale and rotation of the traffic sign. However, real-time processing is difficult because the computational cost of SIFT feature extraction and matching is high. This paper presents a method of traffic sign recognition based on a keypoint classifier trained by AdaBoost using PCA-SIFT features in different feature subspaces. Each subspace is constructed from the gradients of traffic sign images and of general images, respectively. A detected keypoint is projected onto both subspaces, and AdaBoost is then employed to classify whether or not the keypoint lies on a traffic sign. Experimental results show that the computational cost of keypoint matching can be reduced to about half that of the conventional method.
Application of higher order SVD to vibration-based system identification and damage detection
NASA Astrophysics Data System (ADS)
Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang
2012-04-01
Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification methods (SI and SSI). In each case, the data is arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study, three different signal processing and system identification algorithms are considered: SSA, SSI-COV, and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithm is used to process the shaking table test data of a 6-story steel frame. Features contained in the vibration data are extracted by the proposed method. Damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
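As a concrete instance of "arrange the data in matrix form and apply SVD", the sketch below runs a bare-bones SSA pass on a synthetic noisy vibration signal; the window length, rank, and signal are illustrative choices, not the shaking-table data of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy single-mode "vibration" signal; SSA should recover the oscillation.
n, L = 400, 60                      # signal length and window (embedding) size
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 25.0)
x = clean + 0.5 * rng.standard_normal(n)

# Arrange the data in matrix form: the L x (n-L+1) trajectory (Hankel) matrix.
K = n - L + 1
X = np.column_stack([x[i:i + L] for i in range(K)])

# SVD extracts the dominant subspace; a single sinusoid occupies two components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X2 = (U[:, :2] * s[:2]) @ Vt[:2]

# Diagonal (Hankel) averaging maps the rank-2 matrix back to a time series.
recon = np.zeros(n)
counts = np.zeros(n)
for j in range(K):
    recon[j:j + L] += X2[:, j]
    counts[j:j + L] += 1
recon /= counts

err_raw = np.linalg.norm(x - clean)
err_ssa = np.linalg.norm(recon - clean)
```

SSI-COV and SSI-DATA apply the same SVD step to block-Hankel covariance and data matrices built from multi-channel measurements, rather than to a single-channel trajectory matrix.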
NASA Astrophysics Data System (ADS)
Macieszczak, Katarzyna; Zhou, YanLi; Hofferberth, Sebastian; Garrahan, Juan P.; Li, Weibin; Lesanovsky, Igor
2017-10-01
We investigate the dynamics of a generic interacting many-body system under conditions of electromagnetically induced transparency (EIT). This problem is of current relevance due to its connection to nonlinear optical media realized by Rydberg atoms. In an interacting system the structure of the dynamics and the approach to the stationary state becomes far more complex than in the case of conventional EIT. In particular, we discuss the emergence of a metastable decoherence-free subspace, whose dimension for a single Rydberg excitation grows linearly in the number of atoms. On approach to stationarity this leads to a slow dynamics, which renders the typical assumption of fast relaxation invalid. We derive analytically the effective nonequilibrium dynamics in the decoherence-free subspace, which features coherent and dissipative two-body interactions. We discuss the use of this scenario for the preparation of collective entangled dark states and the realization of general unitary dynamics within the spin-wave subspace.
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrödinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
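For the complex symmetric family (A = Aᵀ but A ≠ Aᴴ), the structure can be exploited by running CG with the unconjugated bilinear form xᵀy, the classic COCG idea; the sketch below shows that variant, not the paper's QMR method, on a small dense, diagonally dominant complex symmetric test system:

```python
import numpy as np

rng = np.random.default_rng(4)

# A complex symmetric (A == A.T, not Hermitian) test matrix, the structure
# arising e.g. from discretized complex Helmholtz equations. A strong real
# diagonal keeps the toy system well conditioned.
n = 80
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.T) / 2
A = A + np.diag(np.abs(A).sum(axis=1) + 1.0)   # diagonally dominant, still A == A.T
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def cocg(A, b, tol=1e-12, maxit=500):
    """Conjugate orthogonal CG: CG recurrences with the unconjugated inner
    product x.T @ y, a Krylov method exploiting complex symmetry."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                                # note: no complex conjugation
    for _ in range(maxit):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

x = cocg(A, b)
```

Compared with treating A as a general non-Hermitian matrix, exploiting the symmetry halves the work per iteration, which is the motivation behind the specialized methods the paper studies.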
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-06-01
The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
Future self-continuity is associated with improved health and increases exercise behavior.
Rutchick, Abraham M; Slepian, Michael L; Reyes, Monica O; Pleskus, Lindsay N; Hershfield, Hal E
2018-03-01
To the extent that people feel more continuity between their present and future selves, they are more likely to make decisions with the future self in mind. The current studies examined future self-continuity in the context of health. In Study 1, people reported the extent to which they felt similar and connected to their future self; people with more present-future continuity reported having better subjective health across a variety of measures. In Study 2, people were randomly assigned to write a letter to themselves either three months or 20 years into the future; people for whom continuity with the distant future self was enhanced exercised more in the days following the writing task. These findings suggest that future self-continuity promotes adaptive long-term health behavior and point to the promise of interventions enhancing future self-continuity.
A Kalman Filter for SINS Self-Alignment Based on Vector Observation.
Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu
2017-01-29
In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form apparent velocity is designed, which can reduce the random noise in the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transformation between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for the apparent gravitation is devised, which can reduce the influence of the random noise in the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method can acquire sound alignment results with lower standard variances, and can obtain higher alignment accuracy and a faster convergence rate.
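The q-method step at the heart of this approach, turning a set of vector observations into an attitude estimate, can be sketched as follows. The observation vectors here are synthetic and noiseless, and the paper's Kalman filtering and apparent-motion integration are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(5)

def quat_to_dcm(q):
    """Attitude matrix from quaternion q = [qv, q4] (scalar last): b = A @ r."""
    qv, q4 = q[:3], q[3]
    qx = np.array([[0, -qv[2], qv[1]],
                   [qv[2], 0, -qv[0]],
                   [-qv[1], qv[0], 0]])
    return (q4**2 - qv @ qv) * np.eye(3) + 2 * np.outer(qv, qv) - 2 * q4 * qx

def q_method(body, ref, weights):
    """Davenport's q-method: the optimal attitude quaternion is the dominant
    eigenvector of the 4x4 K matrix built from the vector observations."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body, ref))
    S, sigma = B + B.T, np.trace(B)
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body, ref))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    evals, evecs = np.linalg.eigh(K)
    return evecs[:, -1]                 # eigenvector of the largest eigenvalue

# Noiseless check: two reference vectors rotated by a known attitude
# should be recovered exactly.
angle = 0.4
A_true = np.array([[np.cos(angle), np.sin(angle), 0],
                   [-np.sin(angle), np.cos(angle), 0],
                   [0, 0, 1]])
ref = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
body = [A_true @ r for r in ref]
q = q_method(body, ref, [0.5, 0.5])
A_est = quat_to_dcm(q)
```

In the self-alignment setting the reference vectors would come from known local gravity and earth-rate directions, and the body-frame vectors from the integrated apparent motion of gravity sensed by the inertial instruments.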
A Kalman Filter for SINS Self-Alignment Based on Vector Observation
Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu
2017-01-01
In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form apparent velocity is designed, which can reduce the random noise in the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transformation between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for the apparent gravitation is devised, which can reduce the influence of the random noise in the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method can acquire sound alignment results with lower standard variances, and can obtain higher alignment accuracy and a faster convergence rate. PMID:28146059
Horror Image Recognition Based on Context-Aware Multi-Instance Learning.
Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng
2015-12-01
Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.
Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á
2018-03-01
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
On spectral synthesis on zero-dimensional Abelian groups
NASA Astrophysics Data System (ADS)
Platonov, S. S.
2013-09-01
Let G be a zero-dimensional locally compact Abelian group all of whose elements are compact, and let C(G) be the space of all complex-valued continuous functions on G. A closed linear subspace ℋ ⊆ C(G) is said to be an invariant subspace if it is invariant with respect to the translations τ_y: f(x) ↦ f(x+y), y ∈ G. In the paper, it is proved that any invariant subspace ℋ admits spectral synthesis, that is, ℋ coincides with the closed linear span of the characters of G belonging to ℋ. Bibliography: 25 titles.
Conditioned invariant subspaces, disturbance decoupling and solutions of rational matrix equations
NASA Technical Reports Server (NTRS)
Li, Z.; Sastry, S. S.
1986-01-01
Conditioned invariant subspaces are introduced both in terms of output injection and in terms of state estimation. Various properties of these subspaces are explored and the problem of disturbance decoupling by output injection (OIP) is defined. It is then shown that OIP is equivalent to the problem of disturbance decoupled estimation as introduced in Willems (1982) and Willems and Commault (1980). Both solvability conditions and a description of the solutions of a class of rational matrix equations of the form X(s)M(s) = Q(s) are given in several ways in state-space form. Finally, the problem of output stabilization with respect to a disturbance is briefly addressed.
Essential uncontrollability of discrete linear, time-invariant, dynamical systems
NASA Technical Reports Server (NTRS)
Cliff, E. M.
1975-01-01
The concept of a 'best approximating m-dimensional subspace' for a given set of vectors in an n-dimensional space is introduced. Such a subspace is easily described in terms of the eigenvectors of an associated Gram matrix. This technique is used to approximate the achievable set of a discrete linear time-invariant dynamical system. This approximation characterizes the part of the state space that may be reached using modest levels of control. If the achievable set can be closely approximated by a proper subspace of the whole space, then the system is 'essentially uncontrollable'. The notion finds application in studies of failure-tolerant systems and in decoupling.
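The 'best approximating m-dimensional subspace' construction can be sketched directly: collect the vectors as columns, form the associated second-moment (Gram) matrix, and keep its leading eigenvectors. The data and dimensions below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# A set of 100 vectors in R^6 that nearly lie in a 2-dimensional plane.
basis = rng.standard_normal((6, 2))
coeffs = rng.standard_normal((2, 100))
V = basis @ coeffs + 0.01 * rng.standard_normal((6, 100))   # columns = vectors

# Gram matrix of the set; its top-m eigenvectors span the best
# approximating m-dimensional subspace in the least-squares sense.
G = V @ V.T
evals, evecs = np.linalg.eigh(G)         # eigenvalues in ascending order
m = 2
Q = evecs[:, -m:]                        # leading m eigenvectors
P = Q @ Q.T                              # orthogonal projector onto the subspace

residual = np.linalg.norm(V - P @ V) ** 2
tail = evals[:-m].sum()                  # sum of the discarded eigenvalues
```

The identity checked here, that the squared approximation error equals the sum of the discarded eigenvalues, is what makes the eigenvector description optimal; applied to reachable-state vectors it quantifies how 'essentially uncontrollable' a system is.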
Holton, M Kim; Barry, Adam E; Chaney, J Don
2015-01-01
Employees commonly report feeling stressed at work. This study examines how employees cope with work and personal stress, whether their coping strategies are adaptive (protective to health) or maladaptive (detrimental to health), and whether the manner in which employees cope with stress influences perceived stress management. In this cross-sectional study, a random sample of 2,500 full-time university non-student employees (i.e., faculty, salaried professionals, and hourly non-professionals) was surveyed on health-related behaviors, including stress and coping. A total of 1,277 completed the survey (51%). Hierarchical logistic regression was used to assess the ability of adaptive and maladaptive coping strategies to predict self-reported stress management, while controlling for multiple demographic variables. Over half of the employees surveyed reported effective stress management. The most frequently used adaptive coping strategies were communication with a friend or family member and exercise, while the most frequently used maladaptive coping strategies were drinking alcohol and eating more than usual. Both adaptive and maladaptive coping strategies made significant (p < 0.05) contributions to predicting employees' perceived stress management. Only adaptive coping strategies (B = 0.265) predicted whether someone would self-identify as effectively managing stress. Use of maladaptive coping strategies decreased the likelihood of self-reporting effective stress management. The actual coping strategies employed may influence employees' perceived stress management, and adaptive coping strategies may be more influential than maladaptive ones. The results illustrate themes for effective workplace stress management programs: programs focused on increasing the use of adaptive coping may have a greater impact on employee stress management than those focused on decreasing the use of maladaptive coping.
Coping is not only a reaction to stressful experiences but also a consequence of coping resources. Therefore, increasing the availability of resources in the workplace to facilitate the use of adaptive coping strategies is necessary for successful stress management and, ultimately, healthier employees.
Cultural adaptation of the short Self-Regulation Questionnaire: suggestions for the speech area.
Almeida, Anna Alice; Behlau, Mara
2017-08-14
To present the translated, linguistically and culturally adapted version of the Short Self-Regulation Questionnaire (SSRQ) in Brazilian Portuguese and to check its applicability to patients with dysphonia. The SSRQ is a tool used to evaluate the ability to self-regulate behavior; it has 31 items and generates three scores: a total index of individual self-regulation capacity and partial scores for goal setting and impulse control. Each item is scored on a 5-point Likert-type scale; the total score ranges from 29 to 145 points. The original instrument was translated and culturally adapted to Brazilian Portuguese by two English-speaking speech therapists, who combined their translations and made linguistic adjustments to compose a single final version. This version was back-translated by a third speech therapist with experience in validation studies and no knowledge of the original instrument. The translation and the back-translation were compared with each other and with the original English version by five speech therapists, who reached a consensus on additional changes. In this way, the final version was produced, called the "Questionário Reduzido de Autorregulação" (QRAR). The QRAR was applied to 45 randomly chosen subjects, with and without dysphonia, in a teaching clinic. No item had to be eliminated, since the respondents did not find it difficult to indicate their answers. The "Questionário Reduzido de Autorregulação" (QRAR) has been successfully translated and culturally and linguistically adapted to Brazilian Portuguese and can be applied to individuals with voice problems.
Adaptive Sniping for Volatile and Stable Continuous Double Auction Markets
NASA Astrophysics Data System (ADS)
Toft, I. E.; Bagnall, A. J.
This paper introduces a new adaptive sniping agent for the Continuous Double Auction (CDA). We begin by analysing the performance of the well-known Kaplan sniper in two extremes of market conditions. We generate volatile and stable market conditions using the well-known Zero Intelligence-Constrained (ZI-C) agent and a new zero-intelligence agent, Small Increment (SI). ZI-C agents submit random but profitable bids/offers and cause high volatility in prices and individual trader performance. Our new zero-intelligence agent, SI, makes small random adjustments to the outstanding bid/offer and hence is more cautious than ZI-C. We present results for SI in self-play and then analyse Kaplan in volatile and stable markets. We demonstrate that the non-adaptive Kaplan sniper can be configured to suit either set of market conditions, but no single configuration performs well across both market types. We believe that in a dynamic auction environment, where current or future market conditions cannot be predicted, a viable sniping strategy should adapt its behaviour to suit prevailing market conditions. To this end, we propose the Adaptive Sniper (AS) agent for the CDA. AS traders classify sniping opportunities using a statistical model of market activity and adjust their classification thresholds using a Widrow-Hoff adapted search. Our AS agent requires little configuration and outperforms the original Kaplan sniper in volatile and stable markets, and in a mixed trader type scenario that includes adaptive strategies from the literature.
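The Widrow-Hoff adaptation used by the AS agent can be caricatured as a delta-rule update of a linear decision boundary. The two "market features" and the hidden profitability rule below are invented for illustration; the paper's actual statistical model of market activity is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for sniping-opportunity classification: each opportunity has
# two features (imagine spread and time remaining); "profitable snipe" is a
# hidden linear rule the agent must learn from observed outcomes.
w_hidden = np.array([1.5, -1.0])
X = rng.uniform(-1, 1, size=(2000, 2))
y = (X @ w_hidden > 0).astype(float)        # 1 = profitable snipe, 0 = pass

# Widrow-Hoff (LMS / delta rule) adaptation of the decision weights,
# one online update per observed opportunity. A bias feature is appended
# so the learned regression surface can cross the 0.5 decision level.
X_aug = np.hstack([X, np.ones((len(X), 1))])
w = np.zeros(3)
lr = 0.05
for x_i, y_i in zip(X_aug, y):
    w += lr * (y_i - x_i @ w) * x_i         # move weights down the error gradient

decisions = (X_aug @ w > 0.5).astype(float)
accuracy = float(np.mean(decisions == y))
```

Because each observed outcome nudges the weights a small step along the error gradient, the thresholds can track drifting market conditions without any retraining, which is the property the AS agent relies on in volatile markets.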
NASA Astrophysics Data System (ADS)
Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew
2010-06-01
A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools, are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Program summary: Program title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver Catalogue identifier: AEGB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL 2.0 No. of lines in distributed program, including test data, etc.: 453 649 No. of bytes in distributed program, including test data, etc.: 8 764 754 Distribution format: tar.gz Programming language: Fortran Computer: Any Operating system: Any RAM: Depends on the size of the discretized biomolecular system Classification: 3 External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS ( http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD ( http://www.ks.uiuc.edu/Research/vmd/) for visualization. 
Sub-programs included: An iterative Krylov subspace solver package from SPARSKIT by Yousef Saad ( http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole method subroutines from FMMSuite ( http://www.fastmultipole.org/). Nature of problem: Numerical solution of the linearized Poisson-Boltzmann equation that describes electrostatic interactions of molecular systems in ionic solutions. Solution method: A novel node-patch scheme is used to discretize the well-conditioned boundary integral equation formulation of the linearized Poisson-Boltzmann equation. Various Krylov subspace solvers can be subsequently applied to solve the resulting linear system, with a bounded number of iterations independent of the number of discretized unknowns. The matrix-vector multiplication at each iteration is accelerated by the adaptive new versions of fast multipole methods. The AFMPB solver requires other stand-alone pre-processing tools for boundary mesh generation, post-processing tools for data analysis and visualization, and can be conveniently coupled with different time stepping methods for dynamics simulation. Restrictions: Options for only three or six significant digits are provided in this version. Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines. Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/~lubz/afmpb.html and http://mccammon.ucsd.edu/ for updates and changes. Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.
Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitude of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are adjusted to improve agreement with measured observables while keeping the core simulator models themselves unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observable data would spawn not only several prohibitive challenges but also numerous concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns, however, are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the data adjustment literature. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
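As a rough illustration of the data-adjustment step described above (not the paper's actual ESM algorithm), a regularized least-squares adjustment of simulator inputs against measured observables might look like this; the Jacobian, prior precision, and data below are all hypothetical:

```python
import numpy as np

def adjust_inputs(J, d_obs, d_calc, C_inv, lam=1.0):
    """Regularized least-squares data adjustment: find the input perturbation
    delta minimizing ||J @ delta - (d_obs - d_calc)||^2 + lam * delta' C_inv delta,
    where J is the (possibly subspace-approximated) Jacobian of observables
    with respect to the simulator input data."""
    r = d_obs - d_calc
    return np.linalg.solve(J.T @ J + lam * C_inv, J.T @ r)

# Tiny hypothetical example: two observables, three input parameters.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
C_inv = np.eye(3)                    # prior precision of the inputs
d_obs = np.array([1.2, 0.9])         # measured observables
d_calc = np.array([1.0, 1.0])        # unadapted simulator predictions
delta = adjust_inputs(J, d_obs, d_calc, C_inv, lam=0.1)
print(delta.shape)  # (3,)
```

The point of ESM-type methods is precisely that, at realistic scale, J is never formed column by column; here it is written out only to make the adjustment equation concrete.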
A Problem-Centered Approach to Canonical Matrix Forms
ERIC Educational Resources Information Center
Sylvestre, Jeremy
2014-01-01
This article outlines a problem-centered approach to the topic of canonical matrix forms in a second linear algebra course. In this approach, abstract theory, including such topics as eigenvalues, generalized eigenspaces, invariant subspaces, independent subspaces, nilpotency, and cyclic spaces, is developed in response to the patterns discovered…
Quantification and characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wood, Christopher J.; Gambetta, Jay M.
2018-03-01
We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.
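A minimal sketch of the leakage idea discussed above, assuming a qutrit whose first two levels form the computational subspace; the example state is invented for illustration and the full metrics of the paper are not reproduced:

```python
import numpy as np

# Projector onto the two-level computational subspace of a qutrit
# (levels 0 and 1; level 2 is the leakage level).
P = np.diag([1.0, 1.0, 0.0])

def leakage_population(rho):
    """Population outside the computational subspace: 1 - Tr(P rho)."""
    return 1.0 - np.real(np.trace(P @ rho))

# Hypothetical state with 5% population in the leakage level.
rho = np.diag([0.7, 0.25, 0.05])
print(round(leakage_population(rho), 3))  # 0.05
```

The leakage and seepage *rates* of the paper are averages of such populations over input states before and after a gate; this snippet shows only the per-state quantity.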
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
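Solvers for joint low-rank-plus-sparse problems of this kind typically iterate two proximal operators, element-wise soft-thresholding for the sparse term and singular value thresholding for the nuclear-norm (low-rank) term. The sketch below shows only these standard building blocks, not the paper's full inexact ALM scheme:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm (element-wise shrinkage)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

A = np.outer(np.ones(4), np.arange(4.0))   # rank-1 test matrix
print(np.linalg.matrix_rank(svt(A, 0.1)))  # 1: small shrinkage keeps rank 1
```

An inexact ALM loop alternates these two operators with a multiplier update until the constraint residual is small.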
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
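The DMD step mentioned in the abstract can be sketched in a few lines of exact DMD; the linear test system below is an assumed example, chosen so the recovered eigenvalues are known in advance:

```python
import numpy as np

def dmd_eigs(X, Y, r=2):
    """Exact DMD: best-fit linear operator with Y ≈ A X, via a rank-r SVD
    of the snapshot matrix X; returns eigenvalues and modes."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    A_tilde = U.conj().T @ Y @ Vt.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Snapshots from a known linear system x_{k+1} = A x_k (for illustration).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(20):
    x = A @ x
    snaps.append(x)
S = np.array(snaps).T
eigvals, _ = dmd_eigs(S[:, :-1], S[:, 1:])
print(np.sort(eigvals.real))  # recovers the eigenvalues 0.8 and 0.9
```

The paper's point is that applying this machinery to *nonlinear* observables of the state, rather than the raw snapshots used here, is what makes a finite-dimensional Koopman-invariant subspace possible.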
Self-transcendence and spiritual well-being in the Amish.
Sharpnack, Patricia A; Quinn Griffin, Mary T; Benders, Alison M; Fitzpatrick, Joyce J
2011-06-01
Self-transcendence, the ability to expand one's relationship to others and the environment, has been found to provide hope, which helps a person adapt to and cope with illness. Spiritual well-being, the perception of health and wholeness, can boost self-confidence and self-esteem. The purpose of this descriptive correlational study was to describe the relationship between self-transcendence and spiritual well-being in Amish adults. A random sample of Old Order Amish was surveyed by postal mail; there were 134 respondents. Two valid and reliable questionnaires were used to measure the key variables. The participants had high levels of self-transcendence and spiritual well-being, and there was a statistically significant positive relationship between the two variables. The findings from this study will increase nurses' awareness of the holistic nature of Amish beliefs and assist nurses in serving this population. Additional research is needed to develop further understanding of the study variables among the Amish.
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2017-09-01
The problem of analyzing noise-induced extinction in multidimensional population systems is considered. For the investigation of the conditions of extinction caused by random disturbances, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator and top predator. This approach allows us to analyze constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions for reducing the dimension of confidence domains is suggested. In the dispersion of random states, the principal subspace is defined by the ratio of the eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction in dependence on the parameters of the considered tritrophic system is carried out.
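The method of principal directions described above amounts to an eigen-decomposition of the stochastic sensitivity (covariance-type) matrix, keeping the directions that dominate the dispersion of random states. A minimal sketch, with an invented 3-D sensitivity matrix:

```python
import numpy as np

def principal_directions(W, keep=0.99):
    """Eigen-decompose a symmetric stochastic sensitivity matrix W and keep
    the leading directions covering a fraction `keep` of the dispersion."""
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(vals)[::-1]          # sort descending
    vals, vecs = vals[order], vecs[:, order]
    cum = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(cum, keep) + 1)
    return vals[:k], vecs[:, :k]

# Hypothetical 3-D sensitivity matrix with one strongly dominant direction.
W = np.diag([100.0, 1.0, 0.5])
vals, vecs = principal_directions(W)
print(len(vals))  # 2: the first direction alone covers ~98.5%, short of 99%
```

The confidence ellipsoid can then be drawn in this reduced subspace instead of the full state space.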
Rowe, Sarah L; Patel, Krisna; French, Rebecca S; Henderson, Claire; Ougrin, Dennis; Slade, Mike; Moran, Paul
2018-01-30
Adolescents who self-harm are often unsure how or where to get help. We developed a Web-based personalized decision aid (DA) designed to support young people in decision making about seeking help for their self-harm. The aim of this study was to evaluate the feasibility and acceptability of the DA intervention and the randomized controlled trial (RCT) in a school setting. We conducted a two-group, single blind, randomized controlled feasibility trial in a school setting. Participants aged 12 to 18 years who reported self-harm in the past 12 months were randomized to either a Web-based DA or to general information about mood and feelings. Feasibility of recruitment, randomization, and follow-up rates were assessed, as was acceptability of the intervention and study procedures. Descriptive data were collected on outcome measures examining decision making and help-seeking behavior. Qualitative interviews were conducted with young people, parents or carers, and staff and subjected to thematic analysis to explore their views of the DA and study processes. Parental consent was a significant barrier to young people participating in the trial, with only 17.87% (208/1164) of parents or guardians who were contacted for consent responding to study invitations. Where parental consent was obtained, we were able to recruit 81.7% (170/208) of young people into the study. Of those young people screened, 13.5% (23/170) had self-harmed in the past year. Ten participants were randomized to receiving the DA, and 13 were randomized to the control group. Four-week follow-up assessments were completed with all participants. The DA had good acceptability, but qualitative interviews suggested that a DA that addressed broader mental health problems such as depression, anxiety, and self-harm may be more beneficial. A broad-based mental health DA addressing a wide range of psychosocial problems may be useful for young people. 
The requirement for parental consent is a key barrier to intervention research on self-harm in the school setting. Adaptations to the research design and the intervention are needed before generalizable research about DAs can be successfully conducted in a school setting. International Standard Randomized Controlled Trial registry: ISRCTN11230559; http://www.isrctn.com/ISRCTN11230559 (Archived by WebCite at http://www.webcitation.org/6wqErsYWG). ©Sarah L Rowe, Krisna Patel, Rebecca S French, Claire Henderson, Dennis Ougrin, Mike Slade, Paul Moran. Originally published in JMIR Mental Health (http://mental.jmir.org), 30.01.2018.
Patel, Krisna; French, Rebecca S; Henderson, Claire; Ougrin, Dennis; Slade, Mike; Moran, Paul
2018-01-01
Background Adolescents who self-harm are often unsure how or where to get help. We developed a Web-based personalized decision aid (DA) designed to support young people in decision making about seeking help for their self-harm. Objective The aim of this study was to evaluate the feasibility and acceptability of the DA intervention and the randomized controlled trial (RCT) in a school setting. Methods We conducted a two-group, single blind, randomized controlled feasibility trial in a school setting. Participants aged 12 to 18 years who reported self-harm in the past 12 months were randomized to either a Web-based DA or to general information about mood and feelings. Feasibility of recruitment, randomization, and follow-up rates were assessed, as was acceptability of the intervention and study procedures. Descriptive data were collected on outcome measures examining decision making and help-seeking behavior. Qualitative interviews were conducted with young people, parents or carers, and staff and subjected to thematic analysis to explore their views of the DA and study processes. Results Parental consent was a significant barrier to young people participating in the trial, with only 17.87% (208/1164) of parents or guardians who were contacted for consent responding to study invitations. Where parental consent was obtained, we were able to recruit 81.7% (170/208) of young people into the study. Of those young people screened, 13.5% (23/170) had self-harmed in the past year. Ten participants were randomized to receiving the DA, and 13 were randomized to the control group. Four-week follow-up assessments were completed with all participants. The DA had good acceptability, but qualitative interviews suggested that a DA that addressed broader mental health problems such as depression, anxiety, and self-harm may be more beneficial. Conclusions A broad-based mental health DA addressing a wide range of psychosocial problems may be useful for young people. 
The requirement for parental consent is a key barrier to intervention research on self-harm in the school setting. Adaptations to the research design and the intervention are needed before generalizable research about DAs can be successfully conducted in a school setting. Trial Registration International Standard Randomized Controlled Trial registry: ISRCTN11230559; http://www.isrctn.com/ISRCTN11230559 (Archived by WebCite at http://www.webcitation.org/6wqErsYWG) PMID:29382626
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Ostrove, Corey I.
2017-01-01
This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
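For context on the baseline being compared against, the success probability of standard Grover search after k iterations has a simple closed form; this sketch is textbook Grover with a single marked item, not the signal-based emulation itself:

```python
import math

def grover_success(N, k):
    """Success probability after k Grover iterations on an unstructured
    search over N items with one marked item: sin^2((2k+1) * theta),
    where sin(theta) = 1/sqrt(N)."""
    theta = math.asin(1.0 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

N = 64
# Near-optimal iteration count: about (pi/4) * sqrt(N).
k_opt = round(math.pi / (4 * math.asin(1 / math.sqrt(N))) - 0.5)
print(k_opt, round(grover_success(N, k_opt), 3))  # 6 0.997
```

The abstract's claim is that, under oracle noise, projecting onto the solution subspace can beat this single-run probability for the same number of oracle calls.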
Bi-sparsity pursuit for robust subspace recovery
Bian, Xiao; Krim, Hamid
2015-09-01
The success of sparse models in computer vision and machine learning in many real-world applications may be attributed, in large part, to the fact that many high-dimensional data are distributed in a union of low-dimensional subspaces. The underlying structure may, however, be adversely affected by sparse errors, thus inducing additional complexity in recovering it. In this paper, we propose a bi-sparse model as a framework to investigate and analyze this problem, and provide, as a result, a novel algorithm to recover the union of subspaces in the presence of sparse corruptions. We additionally demonstrate the effectiveness of our method by experiments on real-world vision data.
A continuum theory of edge dislocations
NASA Astrophysics Data System (ADS)
Berdichevsky, V. L.
2017-09-01
Continuum theory of dislocations aims to describe the behavior of large ensembles of dislocations. This task is far from completion and, most likely, does not have a "universal solution" applicable to any dislocation ensemble. In this regard it is important to have guiding lines set by benchmark cases, where the transition from a discrete set of dislocations to a continuum description is made rigorously. Two such cases have been considered recently: equilibrium of dislocation walls and screw dislocations in beams. In this paper one more case is studied, the equilibrium of a large set of 2D edge dislocations placed randomly in a 2D bounded region. The major characteristic of interest is the energy of the dislocation ensemble, because it determines the structure of the continuum equations. The homogenized energy functional is obtained for periodic dislocation ensembles with random content of the periodic cell. Parameters of the periodic structure can change slowly over distances on the order of the periodic cell size. The energy functional is obtained by the variational-asymptotic method. Equilibrium positions are local minima of energy. The analysis confirms the earlier assertion that the energy density of the system is the sum of the elastic energy of averaged elastic strains and the microstructure energy, which is the elastic energy of the neutralized dislocation system, i.e. the dislocation system placed in a constant dislocation density field making the averaged dislocation density zero. The computation of energy reduces to the solution of a variational cell problem. This problem is solved analytically. The solution is used to investigate the stability of simple dislocation arrays, i.e. arrays with one dislocation in the periodic cell. The relations obtained yield two outcomes: First, there is a state parameter of the system, dislocation polarization; averaged stresses affect only dislocation polarization and cannot change other characteristics of the system.
Second, the structure of dislocation phase space is strikingly simple. Dislocation phase space is split in a family of subspaces corresponding to constant values of dislocation polarizations; in each equipolarization subspace there are many local minima of energy; for zero external stresses the system is stuck in a local minimum of energy; for non-zero slowly changing external stress, dislocation polarization evolves, while the system moves over local energy minima of equipolarization subspaces. Such a simple picture of dislocation dynamics is due to the presence of two time scales, slow evolution of dislocation polarization and fast motion of the system over local minima of energy. The existence of two time scales is justified for a neutral system of edge dislocations.
Active Subspace Methods for Data-Intensive Inverse Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi
2017-04-27
The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
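A common way to estimate an active subspace (a sketch under assumed data, not the project's code) is to eigen-decompose the average outer product of gradient samples; the leading eigenvectors span the directions along which the model output actually varies:

```python
import numpy as np

def active_subspace(grads, k=1):
    """Estimate the active subspace from gradient samples: the eigenvectors
    of C = E[grad f grad f^T] with the largest eigenvalues span the
    directions of strongest variation."""
    C = grads.T @ grads / len(grads)
    vals, vecs = np.linalg.eigh(C)          # ascending order
    return vecs[:, ::-1][:, :k]             # leading eigenvectors

# Assumed test function f(x) = (w.x)^2, which varies only along w.
rng = np.random.default_rng(1)
w = np.array([3.0, 4.0]) / 5.0
X = rng.standard_normal((500, 2))
grads = (2 * (X @ w))[:, None] * w          # grad f = 2 (w.x) w
W1 = active_subspace(grads)
print(abs(W1[:, 0] @ w) > 0.999)            # True: aligned with w up to sign
```

MCMC calibration can then explore the low-dimensional coordinate y = W1.T @ x instead of the full parameter vector.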
Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y
2014-05-01
This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for a boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of the boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operating region and to develop the structure of the fuzzy model. Then, by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are used directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes of correlations in the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfillment of their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
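The underlying Subspace Learning Algorithm that the paper modifies can be sketched as Oja's subspace rule; the data stream below is invented, and the TOHM modification itself is not reproduced:

```python
import numpy as np

def sla_step(W, x, eta=0.01):
    """One step of Oja's Subspace Learning Algorithm:
    y = W^T x, then W <- W + eta (x y^T - W y y^T).
    W converges to an orthonormal basis of the principal subspace."""
    y = W.T @ x
    return W + eta * (np.outer(x, y) - W @ np.outer(y, y))

# Hypothetical data stream with dominant variance along the first axis.
rng = np.random.default_rng(2)
W = rng.standard_normal((3, 1)) * 0.1
for _ in range(3000):
    x = np.array([rng.normal(0, 3.0), rng.normal(0, 0.3), rng.normal(0, 0.3)])
    W = sla_step(W, x)
print(abs(W[0, 0]) / np.linalg.norm(W) > 0.95)  # True: aligned with axis 1
```

With more than one output neuron, plain SLA recovers only the subspace, not the individual eigenvectors; the paper's slower time scale is what rotates the basis toward the eigenvectors themselves.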
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
Improved Detection of Local Earthquakes in the Vienna Basin (Austria), using Subspace Detectors
NASA Astrophysics Data System (ADS)
Apoloner, Maria-Theresia; Caffagni, Enrico; Bokelmann, Götz
2016-04-01
The Vienna Basin in Eastern Austria is densely populated and highly developed; it is also a region of low to moderate seismicity, yet the seismological network coverage is relatively sparse. This demands that we improve our earthquake-detection capability by testing new methods and enlarging the existing local earthquake catalogue. This contributes to imaging tectonic fault zones and to a better understanding of seismic hazard, also through improved earthquake statistics (b-value, magnitude of completeness). Detection of low-magnitude earthquakes, or of events whose highest amplitudes only slightly exceed the background noise, may be possible using standard methods like the short-term over long-term average (STA/LTA). However, due to sparse network coverage and high background noise, such a technique may not detect all potentially recoverable events. Yet earthquakes originating from the same source region, relatively close to each other, should be characterized by similar seismic waveforms at a given station. This waveform similarity can be exploited by specific techniques such as correlation-template-based methods (also known as matched filtering) or subspace detection methods (based on subspace theory). Matching techniques require a reference or template event, usually characterized by high waveform coherence across the array receivers and high signal-to-noise ratio (SNR), which is cross-correlated with the continuous data. Subspace detection methods, in contrast, avoid in principle the need to define single template events, and instead use a subspace extracted from multiple events. This approach should theoretically be more robust in detecting signals that exhibit strong variability (e.g. because of source or magnitude). In this study we scan the continuous data recorded in the Vienna Basin with a subspace detector to identify additional events.
This will allow us to estimate the increase of the seismicity rate in the local earthquake catalogue, therefore providing an evaluation of network performance and efficiency of the method.
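A bare-bones subspace detector in the spirit described above: build an orthonormal basis from template events via the SVD, then slide a window over the continuous data and measure the fraction of window energy captured by the subspace. The synthetic waveforms and parameters below are illustrative only:

```python
import numpy as np

def subspace_detector(templates, data, dim=2):
    """Subspace detection statistic: project each sliding window of `data`
    onto the span of the leading `dim` left singular vectors of the template
    set; the statistic is the fraction of window energy captured."""
    U, _, _ = np.linalg.svd(np.array(templates).T, full_matrices=False)
    U = U[:, :dim]
    n = U.shape[0]
    stats = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        e = w @ w
        proj = U.T @ w
        stats.append((proj @ proj) / e if e > 0 else 0.0)
    return np.array(stats)

# Two synthetic "events" with similar waveforms, one embedded in noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 100)
tpl = [np.sin(9 * t) * np.exp(-3 * t), np.sin(10 * t) * np.exp(-3 * t)]
data = rng.normal(0, 0.05, 500)
data[200:300] += tpl[0]
stats = subspace_detector(tpl, data)
print(abs(int(np.argmax(stats)) - 200) <= 2)  # True: peak at the event onset
```

A detection is declared when the statistic exceeds a threshold calibrated against the noise; because the subspace spans several templates, waveforms that vary somewhat between events are still captured.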
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; Guo, Ting; Luo, Xue; Qu, Jinxiu; Zhang, Chenxuan; Cheng, Wei; Li, Bing
2014-06-01
Viscoelastic sandwich structures (VSS) are widely used in mechanical equipment; their state assessment is necessary to detect structural states and to keep equipment running with high reliability. This paper proposes a novel manifold-manifold distance-based assessment (M2DBA) method for assessing the looseness state in VSSs. In the M2DBA method, a manifold-manifold distance is viewed as a health index. To design the index, response signals from the structure are firstly acquired by condition monitoring technology and a Hankel matrix is constructed by using the response signals to describe state patterns of the VSS. Thereafter, a subspace analysis method, that is, principal component analysis (PCA), is performed to extract the condition subspace hidden in the Hankel matrix. From the subspace, pattern changes in dynamic structural properties are characterized. Further, a Grassmann manifold (GM) is formed by organizing a set of subspaces. The manifold is mapped to a reproducing kernel Hilbert space (RKHS), where support vector data description (SVDD) is used to model the manifold as a hypersphere. Finally, a health index is defined as the cosine of the angle between the hypersphere centers corresponding to the structural baseline state and the looseness state. The defined health index contains similarity information existing in the two structural states, so structural looseness states can be effectively identified. Moreover, the health index is derived by analysis of the global properties of subspace sets, which is different from traditional subspace analysis methods. The effectiveness of the health index for state assessment is validated by test data collected from a VSS subjected to different degrees of looseness. The results show that the health index is a very effective metric for detecting the occurrence and extension of structural looseness. Comparison results indicate that the defined index outperforms some existing state-of-the-art ones.
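Distances between subspaces on a Grassmann manifold are typically built from principal angles; the sketch below shows only that standard ingredient, not the paper's SVDD hypersphere modeling or the M2DBA health index itself:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spans of A and B, computed from
    the SVD of Qa^T Qb (a standard Grassmann-distance ingredient)."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Two 2-D subspaces of R^3 sharing one direction (hypothetical example).
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
print(np.round(principal_angles(A, B), 4))  # shared e1 gives angle 0; e2 vs e3 gives pi/2
```

Summing squared principal angles gives the common projection (chordal) distance between the two condition subspaces.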
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform feature-level or decision-level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high-dimensional reflectance signature onto a lower-dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA, and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small-sample-size problem, they are not necessarily optimal projections. In this paper, we present a divide-and-conquer approach to address the small-sample-size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source, multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches, which use correlation between variables for band grouping, we study the efficacy of higher-order statistical information (average mutual information) for bottom-up band grouping. We also propose a confidence-measure-based decision fusion technique, in which the weights associated with the various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process for test pixels. The proposed methods are tested using hyperspectral data with known ground truth, so that efficacy can be quantitatively measured in terms of target recognition accuracies.
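A minimal sketch of the bottom-up, average-mutual-information band-grouping idea (not the authors' exact procedure): mutual information between band images is estimated from a joint histogram, and a contiguous group is extended while the average MI of the candidate band with the current group stays above a threshold. The synthetic cube, bin count, and threshold below are assumptions.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of mutual information (in nats) between two bands."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def group_bands(cube, threshold):
    """Bottom-up grouping: extend the current contiguous group while the
    average MI between the new band and the group stays above threshold."""
    groups, current = [], [0]
    for b in range(1, cube.shape[0]):
        avg_mi = np.mean([mutual_info(cube[b], cube[g]) for g in current])
        if avg_mi >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

rng = np.random.default_rng(1)
base1 = rng.standard_normal(500)
base2 = rng.standard_normal(500)
# synthetic "cube": two blocks of 4 highly correlated bands each
cube = np.stack([base1 + 0.1 * rng.standard_normal(500) for _ in range(4)] +
                [base2 + 0.1 * rng.standard_normal(500) for _ in range(4)])
groups = group_bands(cube, threshold=0.5)
```

On this toy cube the procedure recovers the two correlated blocks as the two contiguous band groups, which would then feed the per-group classifiers of the multi-classifier setup.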
Littlewood, Kerry; Cummings, Doyle M; Lutes, Lesley; Solar, Chelsey
2015-01-01
The purpose of our study was two-fold: 1) adapt and test a social support measure specific to the experiences of African American women with type 2 diabetes mellitus (T2DM); 2) examine its relationship to psychosocial measures. Two hundred rural African American women with uncontrolled T2DM participating in a randomized controlled trial completed surveys at baseline on their social support, empowerment, self-care, self-efficacy, depression and diabetes distress. Exploratory factor analysis and correlation analysis were conducted to test the psychometric properties of the Dunst Family Support Scale adapted for AA women with T2DM (FSS-AA T2DM) and its relationship with other psychosocial measures. The 16 items of the FSS-AA T2DM loaded onto three distinct factors: parent and spouse/partner support, community and medical support, and extended family and friends support. Reliability for the entire scale was good (Cronbach's α = .90) and was acceptable to high across the three factors (Cronbach's α of .86, .83, and .83, respectively). All three factors were significantly correlated with self-reported empowerment, self-care, self-efficacy, depression and diabetes distress, although the pattern was different for each factor. The FSS-AA T2DM showed good concurrent validity when compared with similar items on the Diabetes Distress Scale. The FSS-AA T2DM, a 16-item scale measuring social support among rural African American women with T2DM, is internally consistent and reliable. Findings support the utility of this screening tool in this population, although additional testing is needed with other groups in additional settings.
The effects of subliminal symbiotic stimulation on free-response and self-report mood.
Weinberger, J; Kelner, S; McClelland, D
1997-10-01
Research has shown that subliminal presentation of MOMMY AND I ARE ONE (MIO) can help improve adaptive functioning. Two experiments tried to determine whether changes in mood, especially free-response mood, could help explain these findings. In one experiment, 20 men were randomly assigned to receive either a subliminal MIO or control stimulus. Results showed predicted effects on a free-response and no effects on a self-report mood measure. In the other experiment, 54 male subjects randomly received one of three subliminal stimuli. They evidenced the same pattern of mood results. Sentential semantics were shown to be relevant to the obtained results. Ascending threshold and 150 forced-choice discrimination trials demonstrated that subjects could not report stimulus content. It was concluded that MIO effects were attributable to unconscious processing of the entire message and that free-response mood may partly mediate these effects. Suggestions for future research were offered.
2016-06-01
... study and subsequent randomized controlled trial (RCT) with post-deployed personnel; and (5) adapting the developed system for several popular smartphone or tablet computer platforms, including both Google Android™ and Apple iOS based devices. Recruiting for the pilot study was very ... framework design. SUBJECT TERMS: PTSD, post-traumatic stress disorder, mobile health, self-help, iOS, Android, mindfulness, relaxation ...
Trambert, Renee; Kowalski, Mildred Ortu; Wu, Betty; Mehta, Nimisha; Friedman, Paul
2017-10-01
Aromatherapy has been used to reduce anxiety in a variety of settings, but its usefulness during breast biopsies has not been documented. This study was conducted in women undergoing image-guided breast biopsy. We explored the use of two different aromatherapy scents, compared with placebo, aimed at reducing anxiety, with the intent of generating new knowledge. This was a randomized, placebo-controlled study of two different types of external aromatherapy tabs (lavender-sandalwood and orange-peppermint) compared with a matched placebo-control delivery system. Anxiety was self-reported before and after undergoing a breast biopsy using the Spielberger State Anxiety Inventory Scale. Eighty-seven women participated in this study. There was a statistically significant reduction in self-reported anxiety with the use of the lavender-sandalwood aromatherapy tab compared with the placebo group (p = .032). Aromatherapy tabs reduced anxiety during image-guided breast biopsy. The completion of the biopsy provided some relief from anxiety in all groups. The use of aromatherapy tabs offers an evidence-based nursing intervention to improve adaptation and reduce anxiety for women undergoing breast biopsy. Lavender-sandalwood aromatherapy reduced anxiety and promoted adaptation more than orange-peppermint aromatherapy or placebo. © 2017 Sigma Theta Tau International.
NASA Astrophysics Data System (ADS)
Zheng, Lianqing; Yang, Wei
2008-07-01
Recently, the accelerated molecular dynamics (AMD) technique was generalized to realize essential energy space random walks, so that further sampling enhancement and effective localized enhanced sampling could be achieved. This method is especially meaningful when the essential coordinates of the target events are not known a priori; moreover, the energy space metadynamics method was also introduced so that biasing free energy functions can be robustly generated. Despite the promising features of this method, due to the nonequilibrium nature of the metadynamics recursion, it is challenging to rigorously use the data obtained at the recursion stage for equilibrium analysis, such as free energy surface mapping; therefore, a large amount of data would be wasted. To resolve this problem and further improve simulation convergence, as promised in our original paper, we report an alternative approach: the adaptive-length self-healing (ALSH) strategy for AMD simulations; this development is based on a recent self-healing umbrella sampling method. Here, the unit simulation length for each self-healing recursion is increasingly updated based on the Wang-Landau flattening judgment. When the unit simulation length for each update is long enough, all the following unit simulations naturally run into the equilibrium regime. Thereafter, these unit simulations can serve the dual purposes of recursion and equilibrium analysis. As demonstrated in our model studies, applying ALSH balances fast recursion against minimal waste of nonequilibrium data. As a result, combining all the data obtained from all the unit simulations that are in the equilibrium regime via the weighted histogram analysis method, efficient convergence can be robustly ensured, especially for the purpose of free energy surface mapping.
Random SU(2) invariant tensors
NASA Astrophysics Data System (ADS)
Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei
2018-04-01
SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situation, the number could be even smaller than 1/2, which is the subleading correction of random state over the entire Hilbert space of tensors.
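The concentration phenomenon for states drawn from the full Hilbert space (without the SU(2) constraint studied in the paper) is easy to check numerically: the average entanglement entropy of a Haar-random bipartite pure state approaches the maximum ln d_A, with Page's correction d_A/(2 d_B). The dimensions and sample count in this sketch are arbitrary choices.

```python
import numpy as np

def random_state(dim, rng):
    """Haar-random pure state: a normalized complex Gaussian vector."""
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def entanglement_entropy(psi, d_a, d_b):
    """Von Neumann entropy of the reduced density matrix of subsystem A,
    from the Schmidt (singular) values of the reshaped state vector."""
    s = np.linalg.svd(psi.reshape(d_a, d_b), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(7)
d_a, d_b = 4, 256
S = np.mean([entanglement_entropy(random_state(d_a * d_b, rng), d_a, d_b)
             for _ in range(20)])
page = np.log(d_a) - d_a / (2 * d_b)   # Page's average entropy for d_b >> d_a
```

The sample mean sits very close to the Page value and hence close to the maximal entropy ln d_A, illustrating concentration of measure; the paper's contribution is that the same statement survives restriction to the SU(2)-invariant subspace.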
Parallel hyperspectral image reconstruction using random projections
NASA Astrophysics Data System (ADS)
Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.
2016-10-01
Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated to be an effective and very lightweight way to reduce the number of measurements in hyperspectral data, thus reducing the data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPUs) using the compute unified device architecture (CUDA). Experimental results conducted using synthetic and real hyperspectral datasets on an NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with the processing time of SpeCA running on one core of an Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
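The subspace observation behind such reconstruction can be illustrated in a few lines: if pixels live in a low-dimensional subspace, far fewer random measurements than spectral bands suffice for exact recovery. This sketch assumes the subspace basis is known, whereas SpeCA estimates it blindly from the projected data; all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, sub_dim, n_pixels, n_meas = 100, 5, 200, 20

E = rng.standard_normal((n_bands, sub_dim))      # subspace basis (e.g. endmembers)
X = E @ rng.random((sub_dim, n_pixels))          # pixels lie in a 5-dim subspace
Phi = rng.standard_normal((n_meas, n_bands)) / np.sqrt(n_meas)  # random projection
Y = Phi @ X                                      # 20 measurements instead of 100 bands

# Recovery step (subspace assumed known here): solve (Phi E) a = y per pixel,
# then reconstruct x_hat = E a.
A, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)
X_hat = E @ A

err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

Because the number of measurements (20) exceeds the subspace dimension (5), the least-squares system is overdetermined and recovery is exact up to floating-point error; the expensive part in practice, and the part SpeCA accelerates on the GPU, is doing this at scale without knowing the subspace in advance.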
Density scaling for multiplets
NASA Astrophysics Data System (ADS)
Nagy, Á.
2011-02-01
Generalized Kohn-Sham equations are presented for the lowest-lying multiplets. The treatment of non-integer particle numbers is combined with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.
Primary decomposition of zero-dimensional ideals over finite fields
NASA Astrophysics Data System (ADS)
Gao, Shuhong; Wan, Daqing; Wang, Mingsheng
2009-03-01
A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.
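The same invariant-subspace idea in the classical univariate case (Berlekamp) fits in a short script: build the matrix Q of the Frobenius map g ↦ g^p on GF(p)[x]/(f); for squarefree f, the nullity of Q − I equals the number of irreducible factors. This is a sketch for small primes and a monic f, not the paper's multivariate algorithm.

```python
import numpy as np

def polymod(a, f, p):
    """Reduce a polynomial (coefficients low-to-high) modulo monic f over GF(p)."""
    a = [c % p for c in a]
    deg_f = len(f) - 1
    while len(a) > deg_f:
        lead = a[-1]
        if lead:
            shift = len(a) - len(f)
            for i, c in enumerate(f):
                a[shift + i] = (a[shift + i] - lead * c) % p
        a.pop()
    return a + [0] * (deg_f - len(a))

def polymulmod(a, b, f, p):
    """Multiply two residues in GF(p)[x]/(f)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % p
    return polymod(prod, f, p)

def frobenius_matrix(f, p):
    """Matrix of g -> g^p on GF(p)[x]/(f); column k is the image of x^k."""
    n = len(f) - 1
    xp, base, e = [1], [0, 1], p          # x^p mod f by square-and-multiply
    while e:
        if e & 1:
            xp = polymulmod(xp, base, f, p)
        base = polymulmod(base, base, f, p)
        e >>= 1
    cols, cur = [], [1] + [0] * (n - 1)
    for _ in range(n):                    # image of x^k is (x^p)^k mod f
        cols.append(cur)
        cur = polymulmod(cur, xp, f, p)
    return np.array(cols, dtype=np.int64).T

def nullity_mod_p(M, p):
    """Kernel dimension of M over GF(p) via Gaussian elimination."""
    M = M.copy() % p
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        M[rank] = M[rank] * pow(int(M[rank, col]), p - 2, p) % p
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] - M[r, col] * M[rank]) % p
        rank += 1
    return M.shape[1] - rank

# f = (x^2 + 2)(x + 1) = x^3 + x^2 + 2x + 2 over GF(5): two irreducible factors
p, f = 5, [2, 2, 1, 1]
Q = frobenius_matrix(f, p)
n_factors = nullity_mod_p(Q - np.eye(len(f) - 1, dtype=np.int64), p)
```

Here `n_factors` comes out as 2, matching the two irreducible factors of f; the paper generalizes exactly this counting-and-splitting role of the Frobenius invariant subspace to multivariate quotient algebras.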
Switching LPV Control for High Performance Tactical Aircraft
NASA Technical Reports Server (NTRS)
Lu, Bei; Wu, Fen; Kim, SungWan
2004-01-01
This paper examines a switching Linear Parameter-Varying (LPV) control approach to determine if it is practical to use for flight control designs over a wide angle-of-attack region. The approach is based on multiple parameter-dependent Lyapunov functions. The full parameter space is partitioned into overlapping subspaces, and a family of LPV controllers is designed, each suitable for a specific parameter subspace. Hysteresis switching logic is used to accomplish the transition between different parameter subspaces. The proposed switching LPV control scheme is applied to an F-16 aircraft model with different actuator dynamics in the low and high angle-of-attack regions. The nonlinear simulation results show that the aircraft performs well when switching among different angle-of-attack regions.
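The hysteresis switching logic itself is simple to sketch: with overlapping parameter regions, the active controller is kept as long as the scheduling parameter stays inside its region, and a switch occurs only on exit, so no chattering occurs inside the overlap. The regions and trajectory below are illustrative values, not the F-16 design's.

```python
def hysteresis_switch(param_traj, regions, start=0):
    """Hysteresis switching among overlapping parameter regions: keep the
    active controller while the parameter stays inside its region; switch
    only on exit, so the overlap prevents chattering at a boundary."""
    active, schedule = start, []
    for a in param_traj:
        lo, hi = regions[active]
        if not (lo <= a <= hi):
            active = next(i for i, (l, h) in enumerate(regions) if l <= a <= h)
        schedule.append(active)
    return schedule

# overlapping low/high angle-of-attack regions (deg), illustrative only
regions = [(0, 35), (25, 90)]
traj = [10, 20, 30, 34, 30, 40, 30, 28, 20, 10]
sched = hysteresis_switch(traj, regions)
```

Note that after the switch at 40 degrees, the values 30 and 28 keep controller 1 even though they also lie in region 0; without the overlap-plus-hysteresis rule, a scheduler that always picked "the" region for the current value would chatter around the 25-35 degree boundary.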
Predictors of depressive symptoms among Hispanic women in South Florida.
Vermeesch, Amber L; Gonzalez-Guarda, Rosa M; Hall, Rosemary; McCabe, Brian E; Cianelli, Rosina; Peragallo, Nilda P
2013-11-01
U.S. Hispanics, especially women, experience a disproportionate amount of disease burden for depression. This disparity among Hispanic women necessitates examination of factors associated with depression. The objective of this study was to use an adaptation of the Stress Process Model to test whether self-esteem mediated the relationship between Hispanic stress and depressive symptoms. Data for this secondary analysis were from a previous randomized controlled HIV prevention trial. Participants were 548 Hispanic women (19-52 years). Data collection measures included the Center for Epidemiological Studies-Depression Scale, Rosenberg Self-Esteem Scale, and Hispanic Stress Scale. The bootstrap method in Mplus 6 was used to test mediation. Results indicated that self-esteem was inversely related to depression, and Hispanic stress was positively related to depression. Self-esteem partially mediated the relationship between stress and depression. Strategies to improve or maintain self-esteem should be considered in future interventions for Hispanic women with depression.
Strauman, Timothy J; Eddington, Kari M
2017-02-01
Self-regulation models of psychopathology provide a theory-based, empirically supported framework for developing psychotherapeutic interventions that complement and extend current cognitive-behavioral models. However, many clinicians are only minimally familiar with the psychology of self-regulation. The aim of the present manuscript is twofold. First, we provide an overview of self-regulation as a motivational process essential to well-being and introduce two related theories of self-regulation which have been applied to depression. Second, we describe how self-regulatory concepts and processes from those two theories have been translated into psychosocial interventions, focusing specifically on self-system therapy (SST), a brief structured treatment for depression that targets personal goal pursuit. Two randomized controlled trials have shown that SST is superior to cognitive therapy for depressed clients with specific self-regulatory deficits, and both studies found evidence that SST works in part by restoring adaptive self-regulation. Self-regulation-based psychotherapeutic approaches to depression hold significant promise for enhancing treatment efficacy and ultimately may provide an individualizable framework for treatment planning.
Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian; Haller, George
2018-06-01
We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.
Analyzing Lie symmetry and constructing conservation laws for time-fractional Benny-Lin equation
NASA Astrophysics Data System (ADS)
Rashidi, Saeede; Hejazi, S. Reza
This paper investigates the invariance properties of the time-fractional Benny-Lin equation with Riemann-Liouville and Caputo derivatives. This equation can be reduced to the Kawahara equation, the fifth-order KdV equation, the Kuramoto-Sivashinsky equation, and the Navier-Stokes equation. Using the Lie group analysis method for fractional differential equations (FDEs), we derive Lie symmetries for the Benny-Lin equation. Conservation laws for this equation are obtained with the aid of the concept of nonlinear self-adjointness and the fractional generalization of Noether's operators. Furthermore, by means of the invariant subspace method, exact solutions of the equation are also constructed.
Recovering full coherence in a qubit by measuring half of its environment
NASA Astrophysics Data System (ADS)
Miatto, Filippo M.; Piché, Kevin; Brougham, Thomas; Boyd, Robert W.
2015-12-01
When a quantum system interacts with its environment it may undergo decoherence. Quantum erasure makes it possible to restore coherence in a system by gaining information about its environment, but measuring the whole of it may be prohibitive: realistically, one might be forced to address only an accessible subspace and neglect the rest. In such a case, under what conditions will quantum erasure still be effective? In this work we compute analytically the largest recoverable coherence of a random qubit-plus-environment state, and we show that it approaches 100% with overwhelmingly high probability as long as the dimension of the accessible subspace of the environment is larger than √D, where D is the dimension of the whole environment. Additionally, we find a sharp transition between a linear behavior and a power-law behavior as soon as the dimension of the inaccessible environment exceeds the dimension of the accessible one. Our results imply that the typical states of a qubit-plus-environment system admit a measurement spanning only about √D degrees of freedom, any outcome of which projects the qubit onto a maximally coherent state. This suggests, for instance, that in the dynamics of open quantum systems, if the interactions are known, it would in principle be possible to gain sufficient information and restore coherence in a qubit by dealing with only a fraction of the physical resources.
Sexual self-esteem in mothers of normal and mentally-retarded children.
Tavakolizadeh, Jahanshir; Amiri, Mostafa; Nejad, Fahimeh Rastgoo
2017-06-01
Sexual self-esteem is negatively influenced by stressful experiences over the lifetime. This study compared sexual self-esteem and its components in mothers of normal and mentally-retarded children in Qaen city in 2014. A total of 120 mothers were selected and assigned into two groups of 60 based on convenience sampling and randomized multistage sampling. Both groups completed a sexual self-esteem questionnaire. The data were analyzed with t-tests in SPSS version 15. The results showed that sexual self-esteem in mothers of mentally-retarded children was significantly lower than that of mothers of normal children (p<0.05). Moreover, the mean scores of all components of sexual self-esteem, including skill and experience, attractiveness, control, moral judgment, and adaptiveness, in mothers of mentally-retarded children were significantly lower than those of mothers of normal children (p<0.05). Therefore, it is recommended that self-esteem, especially sexual self-esteem, be taught to mothers of mentally-retarded children by specialists.
2011-01-01
Background Suicide is a major public health problem worldwide. In the UK suicide is the second most common cause of death in people aged 15-24 years. Self harm is one of the commonest reasons for medical admission in the UK. In the year following a suicide attempt the risk of a repeat attempt or death by suicide may be up to 100 times greater than in people who have never attempted suicide. Research evidence shows increased risk of suicide and attempted suicide among British South Asian women. There are concerns about the current service provision and its appropriateness for this community due to the low numbers that get involved with the services. Both problem solving and interpersonal forms of psychotherapy are beneficial in the treatment of patients who self harm and could potentially be helpful in this ethnic group. The paper describes the trial protocol of adapting and evaluating a culturally appropriate psychological treatment for adult British South Asian women who self harm. Methods We plan to test a culturally adapted Problem Solving Therapy (C-MAP) in British South Asian women who self harm. Eight sessions of problem solving, each lasting approximately 50 minutes, will be delivered over 3 months. The intervention will be assessed using a prospective rater-blind randomized controlled design compared with treatment as usual (TAU). Outcome assessments will be carried out at 3 and 6 months. A subgroup of the participants will be invited for qualitative interviews. Discussion This study will test the feasibility and acceptability of the C-MAP in British South Asian women. We will be informed on whether a culturally adapted brief psychological intervention compared with treatment as usual for self-harm results in decreased hopelessness and suicidal ideation. 
This will also enable us to collect necessary information on recruitment, effect size, the optimal delivery method and acceptability of the intervention in preparation for a definitive RCT using repetition of self harm and cost effectiveness as primary outcome measures. Trial Registration Current Controlled Trials 08/H1013/6 PMID:21693027
Husain, Nusrat; Chaudhry, Nasim; Durairaj, Steevart V; Chaudhry, Imran; Khan, Sarah; Husain, Meher; Nagaraj, Diwaker; Naeem, Farooq; Waheed, Waquas
2011-06-21
Suicide is a major public health problem worldwide. In the UK suicide is the second most common cause of death in people aged 15-24 years. Self harm is one of the commonest reasons for medical admission in the UK. In the year following a suicide attempt the risk of a repeat attempt or death by suicide may be up to 100 times greater than in people who have never attempted suicide. Research evidence shows increased risk of suicide and attempted suicide among British South Asian women. There are concerns about the current service provision and its appropriateness for this community due to the low numbers that get involved with the services. Both problem solving and interpersonal forms of psychotherapy are beneficial in the treatment of patients who self harm and could potentially be helpful in this ethnic group. The paper describes the trial protocol of adapting and evaluating a culturally appropriate psychological treatment for adult British South Asian women who self harm. We plan to test a culturally adapted Problem Solving Therapy (C-MAP) in British South Asian women who self harm. Eight sessions of problem solving, each lasting approximately 50 minutes, will be delivered over 3 months. The intervention will be assessed using a prospective rater-blind randomized controlled design compared with treatment as usual (TAU). Outcome assessments will be carried out at 3 and 6 months. A subgroup of the participants will be invited for qualitative interviews. This study will test the feasibility and acceptability of the C-MAP in British South Asian women. We will be informed on whether a culturally adapted brief psychological intervention compared with treatment as usual for self-harm results in decreased hopelessness and suicidal ideation. 
This will also enable us to collect necessary information on recruitment, effect size, the optimal delivery method and acceptability of the intervention in preparation for a definitive RCT using repetition of self harm and cost effectiveness as primary outcome measures. Current Controlled Trials 08/H1013/6.
Land Cover Changes between 1974 and 2008 in Ulaanbaatar, Mongolia
NASA Astrophysics Data System (ADS)
Bagan, H.; Kinoshita, T.; Yamagata, Y.
2009-12-01
In the past 35 years, a combination of human actions and natural causes has led to a significant decline in land quality in Ulaanbaatar, the capital city of Mongolia. Human causes include changes in conventional livestock husbandry, overgrazing, and exploitation for traditional uses. Natural causes include a harsh, dry climate, short growing seasons, and thin soils. Since 1995, many herders have left the countryside for the city in search of new opportunities, and the Ger areas (wooden houses and Ger) have expanded, resulting in urban sprawl. Since urbanization usually advances in an uncontrolled or unorganized way in Mongolia, it has destructive effects on the environment, particularly on basic ecosystems, wildlife habitat, and pollution of natural resources (e.g., air and water). Land use and land cover changes in the region are investigated using satellite images acquired in 1974 (Landsat MSS), 1990 (Landsat TM), 2000 (ASTER), 2006 (IKONOS), and 2008 (ALOS). Pre-processing of all data included orthorectification and registration to precisely geolocated imagery. For change detection, classification approaches were employed using a self-organizing map (SOM) neural network classifier (Fig. 1a) and a newly developed subspace classification method (Fig. 1b). From the time-series of classified remote sensing images, we extract land cover and land cover changes from 1974 to 2008. The results show some important findings regarding the size and nature of the change that occurred in the study area. A significant amount of steppe and forest land has been destroyed or replaced by residential areas; as a result, the total area of the urban region doubled over the 35-year period, with a higher urbanization rate between 2000 and 2008. Key words: Environment; Land Cover; Urban; Change detection; Classification. References: Chinbat, B., Bayantur, M., & Amarsaikhan, D. (2006). Investigation of the internal structure changes of Ulaanbaatar city using RS and GIS. ISPRS Commission VII Mid-term Symposium “Remote Sensing: From Pixels to Processes”, Enschede, the Netherlands, 8-11 May 2006, 511-516. Bagan, H., Wang, Q., Watanabe, M., Kameyama, S., & Bao, Y. (2008). Land-cover classification using ASTER multi-band combinations based on wavelet fusion and SOM neural network. Photogrammetric Engineering and Remote Sensing, 74, 333-342. Bagan, H., Yasuoka, Y., Endo, T., Wang, X., & Feng, Z. (2008). Classification of airborne hyperspectral data based on the average learning subspace method. IEEE Geoscience and Remote Sensing Letters, 5, 368-372. Figure 1. The self-organizing map (SOM) neural network classifier (a) and the subspace classification method (b).
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations, followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
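A compact, permutation-based sketch of the ARTP idea (illustrative, not the authors' implementation): for each candidate truncation point k, the rank truncated product statistic is the product of the k smallest p-values (computed here as the sum of their negative logs), each is converted to a permutation p-value, and the adaptive statistic takes the minimum over k, with its own significance assessed on the same permutations. The SNP counts, truncation points, and effect sizes are assumptions.

```python
import numpy as np

def artp_test(pvals, perm_pvals, ks=(1, 5, 10)):
    """ARTP: min over truncation points k of the permutation p-value of the
    rank truncated product, with the min-over-k selection corrected for by
    applying the same minimization to every permutation."""
    def stats(p):
        return np.array([-np.log(np.sort(p)[:k]).sum() for k in ks])

    all_s = np.vstack([stats(pvals)] + [stats(row) for row in perm_pvals])
    # per-k permutation p-value of every row (row 0 is the observed data)
    pk = np.array([[(all_s[:, j] >= all_s[i, j]).mean() for j in range(len(ks))]
                   for i in range(all_s.shape[0])])
    minp = pk.min(axis=1)                      # adaptive choice of k, per row
    return minp[0], float((minp <= minp[0]).mean())

rng = np.random.default_rng(2)
n_snps, n_perm = 20, 499
obs = rng.uniform(size=n_snps)
obs[:3] = [1e-4, 5e-4, 2e-3]                   # three truly associated SNPs
null_sets = rng.uniform(size=(n_perm, n_snps)) # p-values under permutation
artp_stat, artp_p = artp_test(obs, null_sets)
```

Because every permutation also gets its best truncation point, the final p-value is honest about the adaptive selection, which is the point of the ARTP construction.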
Detecting and characterizing coal mine related seismicity in the Western U.S. using subspace methods
NASA Astrophysics Data System (ADS)
Chambers, Derrick J. A.; Koper, Keith D.; Pankow, Kristine L.; McCarter, Michael K.
2015-11-01
We present an approach for subspace detection of small seismic events that includes methods for estimating magnitudes and associating detections from multiple stations into unique events. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalogue as ground truth, we assess detector performance in terms of verified detections, false positives and failed detections. We are able to correctly identify over 95 per cent of the surface coal mine blasts and about 33 per cent of the events from the underground mining district, while keeping the number of potential false positives relatively low by requiring all detections to occur on two stations. We find that most of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogues. We note a trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal-to-noise ratio, and stations at larger distances, which have greater waveform similarity. We also explore the increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, in identifying events that can be described as linear combinations of training events. We find, in our data set, that such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
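The core detection statistic can be sketched as follows: an orthonormal basis spanning the training events is taken from an SVD, and the detector slides a window along the trace, flagging windows whose fraction of energy captured by the subspace is high. The waveforms, noise level, and dimensions here are synthetic assumptions; embedding an unseen linear combination of the training wavelets also illustrates why a higher-dimensional subspace can match events that no single template would.

```python
import numpy as np

rng = np.random.default_rng(4)
win = 100

def template(mix):
    """Hypothetical event waveform: a mix of two decaying source wavelets."""
    t = np.arange(win)
    return (mix[0] * np.exp(-t / 20) * np.sin(0.3 * t)
            + mix[1] * np.exp(-t / 30) * np.sin(0.18 * t))

# training events are linear combinations of the two wavelets
training = np.array([template(m) for m in [(1.0, 0.2), (0.7, 0.9), (0.1, 1.0)]])
U, _, _ = np.linalg.svd(training.T, full_matrices=False)
basis = U[:, :2]                      # 2-D subspace capturing the training events

def detection_statistic(trace, basis):
    """Sliding fraction-of-energy statistic ||P_S w||^2 / ||w||^2 per window."""
    stats = []
    for i in range(len(trace) - win + 1):
        w = trace[i:i + win]
        proj = basis.T @ w
        stats.append(float(proj @ proj / (w @ w)))
    return np.array(stats)

trace = 0.05 * rng.standard_normal(600)
trace[250:350] += template((0.5, 0.6))   # embed an unseen linear combination
stat = detection_statistic(trace, basis)
```

The statistic spikes toward 1 at the embedded event, even though its mixing coefficients appear in no training event, while noise-only windows stay near the expected value of (subspace dimension)/(window length).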
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases markedly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semireal data.
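The "locally low rank" premise is easy to demonstrate on synthetic data (a sketch with arbitrary dimensions): each patch's spectra are drawn from their own 3-dimensional subspace, so the whole image spans a subspace far larger than any single patch does, which is exactly the situation that motivates solving the fusion problem patch by patch.

```python
import numpy as np

rng = np.random.default_rng(5)
bands, patch_px, n_patches = 50, 64, 8

# each patch's spectra lie in a private 3-dim subspace; globally the
# union of patches spans up to 8 * 3 = 24 dimensions
patches = [rng.standard_normal((bands, 3)) @ rng.random((3, patch_px))
           for _ in range(n_patches)]
cube = np.hstack(patches)   # bands x pixels matrix of the full image

def numerical_rank(M, tol=1e-8):
    """Count singular values above a relative tolerance."""
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

global_rank = numerical_rank(cube)                  # 24: ill-posed globally
patch_ranks = [numerical_rank(P) for P in patches]  # 3 each: well-posed locally
```

With, say, 4-10 multispectral bands available, the global rank of 24 makes the sparse regression ill-posed, while each patch's rank of 3 stays below the band count, which is the regime the paper's patch-wise formulation targets.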
NASA Astrophysics Data System (ADS)
Pavluchenko, Sergey A.; Toporensky, Alexey
2018-05-01
In this paper we address two important issues which could affect reaching the exponential and Kasner asymptotes in Einstein-Gauss-Bonnet cosmologies—spatial curvature and anisotropy in both three- and extra-dimensional subspaces. In the first part of the paper we consider the cosmological evolution of spaces that are the product of two isotropic and spatially curved subspaces. It is demonstrated that the dynamics for D=2 (where D is the number of extra dimensions) and for D ≥ 3 differ. It was already known that for the Λ-term case there is a regime with "stabilization" of extra dimensions, where the expansion rate of the three-dimensional subspace as well as the scale factor (the "size") associated with extra dimensions reaches a constant value. This regime is achieved if the curvature of the extra dimensions is negative. We demonstrate that it takes place only if the number of extra dimensions is D ≥ 3. In the second part of the paper we study the influence of the initial anisotropy. Our study reveals that the transition from the Gauss-Bonnet Kasner regime to anisotropic exponential expansion (with three dimensions expanding and the extra dimensions contracting) is stable with respect to breaking the symmetry within both three- and extra-dimensional subspaces. However, the details of the dynamics in D=2 and D ≥ 3 are different. Combining the two described effects allows us to construct a scenario in D ≥ 3, where isotropization of outer and inner subspaces is reached dynamically from rather general anisotropic initial conditions.
Wolf, Antje; Kirschner, Karl N
2013-02-01
With improvements in computer speed and algorithm efficiency, MD simulations are sampling larger amounts of molecular and biomolecular conformations. Being able to qualitatively and quantitatively sift these conformations into meaningful groups is a difficult and important task, especially when considering the structure-activity paradigm. Here we present a study that combines two popular techniques, principal component (PC) analysis and clustering, for revealing major conformational changes that occur in molecular dynamics (MD) simulations. Specifically, we explored how clustering different PC subspaces affects the resulting clusters versus clustering the complete trajectory data. As a case example, we used the trajectory data from an explicitly solvated simulation of a bacterial L11·23S ribosomal subdomain, which is a target of thiopeptide antibiotics. Clustering was performed, using K-means and average-linkage algorithms, on data involving the first two to the first five PC subspace dimensions. For the average-linkage algorithm we found that data-point membership, cluster shape, and cluster size depended on the selected PC subspace data. In contrast, K-means provided very consistent results regardless of the selected subspace. Since we present results on a single model system, generalization concerning the clustering of different PC subspaces of other molecular systems is currently premature. However, our hope is that this study illustrates a) the complexities in selecting the appropriate clustering algorithm, b) the complexities in interpreting and validating their results, and c) that combining PC analysis with subsequent clustering can yield valuable dynamic and conformational information.
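The PC-subspace clustering workflow above can be sketched as follows: center the frames, project them onto the first k principal components, then run K-means in that subspace. The data here are a synthetic two-state stand-in for a trajectory, not the ribosomal-subdomain simulation from the study; the toy Lloyd's-algorithm loop stands in for a production clustering library.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for trajectory frames: two conformational "states"
# in a 30-dimensional coordinate space (hypothetical data).
X = np.vstack([rng.normal(0.0, 0.3, (100, 30)),
               rng.normal(1.0, 0.3, (100, 30))])

# PCA: project centered frames onto the first k principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
pcs = Xc @ Vt[:k].T            # frames expressed in the k-dim PC subspace

# Plain K-means (Lloyd's algorithm) on the PC-subspace coordinates.
centers = pcs[rng.choice(len(pcs), 2, replace=False)]
for _ in range(20):
    labels = np.argmin(((pcs[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([pcs[labels == j].mean(axis=0) for j in range(2)])

print(np.bincount(labels))  # roughly two equal clusters
```

Varying `k` from 2 to 5 and re-clustering is exactly the experiment the abstract describes; with K-means the memberships should stay nearly unchanged, whereas hierarchical (average-linkage) clustering is more sensitive to the chosen subspace.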
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Brodbeck, Jeannette; Berger, Thomas; Znoj, Hans Joerg
2017-01-13
Marital bereavement and separation or divorce are among the most stressful critical life events in later life. These events require a dissolution of social and emotional ties, adjustments in daily routine and changes in identity and perspectives for the future. After a normative grief or distress reaction, most individuals cope well with the loss. However, some develop a prolonged grief reaction. Internet-based self-help interventions have proved beneficial for a broad range of disorders, including complicated grief. Based on the task model and the dual-process model of coping with bereavement, we developed a guided internet-based self-help intervention for individuals who experienced marital bereavement, separation or divorce at least 6 months prior to enrolment. The intervention consists of 10 text-based self-help sessions and one supportive email a week. The primary purpose of this study is the evaluation of the feasibility and efficacy of the intervention compared with a waiting control group. The secondary purpose is to compare the effects in bereaved and separated participants. Furthermore, we aim to analyze other predictors, moderators and mediators of the outcome, such as age, psychological distress and intensity of use of the intervention. The design is a randomized controlled trial with a waiting control condition of 12 weeks and a 24-week follow-up. At least 72 widowed or separated participants will be recruited via our study website and internet forums. Primary outcomes are reductions in grief symptoms, depression and psychological distress. Secondary outcome measures are related to loneliness, satisfaction with life, embitterment and the sessions. The trial will provide insights into the acceptance and efficacy of internet-based interventions among adults experiencing grief symptoms, psychological distress and adaptation problems in daily life after spousal bereavement, separation or divorce.
Findings will add to existing knowledge by (1) evaluating an internet-based intervention specifically designed for spousal bereavement and its consequences; (2) testing whether this intervention is equally effective for individuals after separation or divorce; and (3) suggesting adaptations to improve the efficacy of the intervention, selective indication and adaptations for different needs. ClinicalTrials.gov, NCT02900534 . Registered on 1 September 2016.
Sparse PCA with Oracle Property.
Gu, Quanquan; Wang, Zhaoran; Liu, Han
In this paper, we study the estimation of the k-dimensional sparse principal subspace of a covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank k, and attains an s/n statistical rate of convergence, with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.
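Support recovery for a sparse principal subspace can be illustrated with a much simpler estimator than the paper's semidefinite relaxations: truncated power iteration, which keeps only the s largest-magnitude entries of the iterate at each step. This is a hedged sketch of the general idea on a spiked covariance (a model the paper explicitly does not require), not the authors' estimator.

```python
import numpy as np

def sparse_pc(Sigma, s, iters=100):
    """Leading sparse principal component via truncated power iteration:
    a simple alternative to semidefinite-relaxation sparse PCA that
    hard-thresholds the iterate to support size s each step."""
    v = np.ones(Sigma.shape[0]) / np.sqrt(Sigma.shape[0])
    for _ in range(iters):
        v = Sigma @ v
        keep = np.argsort(np.abs(v))[-s:]   # indices of the s largest entries
        w = np.zeros_like(v)
        w[keep] = v[keep]
        v = w / np.linalg.norm(w)
    return v

# Spiked covariance with a 5-sparse leading eigenvector in 50 dimensions.
p, s = 50, 5
u = np.zeros(p)
u[:s] = 1 / np.sqrt(s)
Sigma = 4.0 * np.outer(u, u) + np.eye(p)

v = sparse_pc(Sigma, s)
print(np.sort(np.nonzero(v)[0]))  # recovered support: [0 1 2 3 4]
```

In practice Σ would be a sample covariance, and the interesting regime is exactly the one the abstract studies: how large the sample size n must be, relative to the sparsity s, for the recovered support to match the truth.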
On spectral synthesis on element-wise compact Abelian groups
NASA Astrophysics Data System (ADS)
Platonov, S. S.
2015-08-01
Let G be an arbitrary locally compact Abelian group and let C(G) be the space of all continuous complex-valued functions on G. A closed linear subspace H ⊆ C(G) is referred to as an invariant subspace if it is invariant with respect to the shifts τ_y: f(x) ↦ f(xy), y ∈ G. By definition, an invariant subspace H ⊆ C(G) admits strict spectral synthesis if H coincides with the closure in C(G) of the linear span of all characters of G belonging to H. We say that strict spectral synthesis holds in the space C(G) on G if every invariant subspace H ⊆ C(G) admits strict spectral synthesis. An element x of a topological group G is said to be compact if x is contained in some compact subgroup of G. A group G is said to be element-wise compact if all elements of G are compact. The main result of the paper is the proof of the fact that strict spectral synthesis holds in C(G) for a locally compact Abelian group G if and only if G is element-wise compact. Bibliography: 14 titles.
N-Screen Aware Multicriteria Hybrid Recommender System Using Weight Based Subspace Clustering
Ullah, Farman; Lee, Sungchang
2014-01-01
This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices in different locations and time and can change a device while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering the user N-screen device attributes such as screen resolution, media codec, remaining battery time, and access network and the user temporal usage pattern information that are not considered in existing recommender systems. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, manager, and N-screen control server to acquire and manage the user N-screen devices profile. Furthermore, a multicriteria hybrid framework is suggested that incorporates the N-screen devices information with user preferences and demographics. In addition, we propose an individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and each feature within a subspace in the hybrid framework. The proposed system improves the accuracy, precision, scalability, sparsity, and cold start issues. The simulation results demonstrate the effectiveness and prove the aforementioned statements. PMID:25152921
Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network
NASA Astrophysics Data System (ADS)
Funabashi, Masatoshi
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with unsupervised learning rules that strengthen and weaken the neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the Chaotic Neural Network model (CNN) according to the structure of invariant subspaces. Irregular transition between two attractor ruins with positive maximum Lyapunov exponent was triggered by the blowout bifurcation of the attractor spaces, and was associated with a riddled basins structure. Second, we modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effect on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins, and produced novel attractors in the minimal higher-dimensional subspace. It also augmented the neuronal synchrony and established uniform modularity in chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins, and brought a wide range of periodicity in the emerged attractors, possibly including strange attractors. Both learning rules selectively destroyed and preserved specific invariant subspaces, depending on the neuronal synchrony of the subspace where the orbits were situated. The computational rationale of the autonomous learning is discussed from a connectionist perspective.
Wavelet Analyses of F/A-18 Aeroelastic and Aeroservoelastic Flight Test Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
1997-01-01
Time-frequency signal representations combined with subspace identification methods were used to analyze aeroelastic flight data from the F/A-18 Systems Research Aircraft (SRA) and aeroservoelastic data from the F/A-18 High Alpha Research Vehicle (HARV). The F/A-18 SRA data were produced from a wingtip excitation system that generated linear frequency chirps and logarithmic sweeps. HARV data were acquired from digital Schroeder-phased and sinc pulse excitation signals to actuator commands. Nondilated continuous Morlet wavelets implemented as a filter bank were chosen for the time-frequency analysis to eliminate phase distortion as it occurs with sliding window discrete Fourier transform techniques. Wavelet coefficients were filtered to reduce effects of noise and nonlinear distortions identically in all inputs and outputs. Cleaned reconstructed time domain signals were used to compute improved transfer functions. Time and frequency domain subspace identification methods were applied to enhanced reconstructed time domain data and improved transfer functions, respectively. Time domain subspace performed poorly, even with the enhanced data, compared with frequency domain techniques. A frequency domain subspace method is shown to produce better results with the data processed using the Morlet time-frequency technique.
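A nondilated Morlet filter bank of the kind described above amounts to correlating the signal with a complex Gaussian-windowed exponential at each analysis frequency, conveniently done via the FFT. The sketch below is an illustrative implementation on a synthetic test tone, not the flight-data processing chain; the `width` parameter and test signal are assumptions.

```python
import numpy as np

def morlet_filter_bank(x, fs, freqs, width=6.0):
    """Complex Morlet wavelet coefficients via FFT-based correlation,
    one filter per analysis frequency (a sketch of a wavelet filter
    bank; constant `width` cycles per wavelet)."""
    n = x.size
    t = (np.arange(n) - n // 2) / fs           # time axis centered at zero
    X = np.fft.fft(x)
    coeffs = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        sigma = width / (2 * np.pi * f)        # temporal width of the wavelet
        w = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        w /= np.linalg.norm(w)
        # Correlate (conjugate in the frequency domain) after re-centering.
        W = np.fft.fft(np.fft.ifftshift(w))
        coeffs[i] = np.fft.ifft(X * np.conj(W))
    return coeffs

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 40 * t)                 # pure 40 Hz test tone
freqs = np.arange(10, 80, 5)
C = np.abs(morlet_filter_bank(x, fs, freqs))
print(freqs[np.argmax(C[:, 500])])  # strongest response at the 40 Hz filter
```

Because each filter applies the same zero-phase Gaussian envelope at every time sample, this avoids the phase distortion that a sliding-window discrete Fourier transform introduces, which is the motivation stated in the abstract.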
Rogers, L J; Douglas, R R
1984-02-01
In this paper (the second in a series), we consider a (generic) pair of datasets, which have been analyzed by the techniques of the previous paper. Thus, their "stable subspaces" have been established by comparative factor analysis. The pair of datasets must satisfy two confirmable conditions. The first is the "Inclusion Condition," which requires that the stable subspace of one of the datasets is nearly identical to a subspace of the other dataset's stable subspace. On the basis of that, we have assumed the pair to have similar generating signals, with stochastically independent generators. The second verifiable condition is that the (presumed same) generating signals have distinct ratios of variances for the two datasets. Under these conditions a small elaboration of some elementary linear algebra reduces the rotation problem to several eigenvalue-eigenvector problems. Finally, we emphasize that an analysis of each dataset by the method of Douglas and Rogers (1983) is an essential prerequisite for the useful application of the techniques in this paper. Nonempirical methods of estimating the number of factors simply will not suffice, as confirmed by simulations reported in the previous paper.
Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis
Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.; ...
2017-10-16
This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database that comprises videos with abandoned objects in a cluttered industrial scenario.
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
Slope angle estimation method based on sparse subspace clustering for probe safe landing
NASA Astrophysics Data System (ADS)
Li, Haibo; Cao, Yunfeng; Ding, Meng; Zhuang, Likui
2018-06-01
To avoid planetary probes landing on steep slopes where they may slip or tip over, a new method of slope angle estimation based on sparse subspace clustering is proposed to improve accuracy. First, a coordinate system is defined and established to describe the measured data of light detection and ranging (LIDAR). Second, these data are processed and expressed with a sparse representation. Third, on this basis, the data are clustered to determine which subspace they belong to. Fourth, after eliminating outliers within each subspace, the remaining data points are used to fit planes. Finally, the vectors normal to the planes are obtained from the plane model, and the angle between the normal vectors is obtained through calculation. Based on the geometric relationship, this angle is equal in value to the slope angle. The proposed method was tested in a series of experiments. The experimental results show that this method can effectively estimate the slope angle, is robust to noise, and yields an accurate slope angle. Compared with other methods, this method minimizes the measurement errors and further improves the estimation accuracy of the slope angle.
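The final two steps, fitting planes to the clustered points and taking the angle between their normals, can be sketched directly. This illustration assumes the clustering has already separated ground points from slope points; the synthetic point clouds and noise level are assumptions, not the paper's LIDAR data.

```python
import numpy as np

def fit_plane_normal(points):
    """Unit normal of the least-squares plane through 3-D points:
    the right singular vector with the smallest singular value of the
    centered coordinates."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    return Vt[-1]

def slope_angle_deg(ground_points, slope_points):
    n1 = fit_plane_normal(ground_points)
    n2 = fit_plane_normal(slope_points)
    cos = abs(n1 @ n2)                    # abs(): normals are sign-ambiguous
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, (200, 2))
ground = np.column_stack([xy, 0.01 * rng.standard_normal(200)])
# A 30-degree slope: z = tan(30 deg) * x, plus small measurement noise.
slope = np.column_stack([xy, np.tan(np.radians(30)) * xy[:, 0]
                         + 0.01 * rng.standard_normal(200)])
print(round(slope_angle_deg(ground, slope), 1))  # approximately 30.0
```

The `abs()` on the dot product reflects the geometric fact stated in the abstract: the angle between the plane normals equals the slope angle, regardless of which way each normal happens to point.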
The Role of Item Feedback in Self-Adapted Testing.
ERIC Educational Resources Information Center
Roos, Linda L.; And Others
1997-01-01
The importance of item feedback in self-adapted testing was studied by comparing feedback and no feedback conditions for computerized adaptive tests and self-adapted tests taken by 363 college students. Results indicate that item feedback is not necessary to realize score differences between self-adapted and computerized adaptive testing. (SLD)
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques whose reduction credibility can be quantified, with the reduction errors upper-bounded over the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models, such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bound by the error estimated from the fitting residual.
Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends this theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high fidelity model or with other models of lower fidelity, can be assessed; this provides insight to the analyst on the type of snapshots required to reach a reduction that can satisfy user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams, such as the cross-section data and the neutron flux. The developed methods will be applied to representative assembly level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.).
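The snapshot-based construction of an active subspace can be sketched as follows: collect randomized snapshots into a matrix, take its SVD, truncate to the rank that captures a user-defined fraction of the snapshot energy, and measure the reduction error of a new state as its component outside the subspace. The dimensions, tolerance, and synthetic snapshot model below are assumptions for illustration, not the thesis's reactor-physics data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic snapshots: a 200-dimensional state that actually varies in a
# 10-dimensional subspace, plus small fluctuations (hypothetical data).
basis = np.linalg.qr(rng.standard_normal((200, 10)))[0]
snapshots = (basis @ rng.standard_normal((10, 500))
             + 1e-3 * rng.standard_normal((200, 500)))

# Active subspace: leading left singular vectors capturing 99.99% of the
# snapshot energy; the truncated tail bounds the reduction error.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
Ur = U[:, :r]

# Reduction error of a new snapshot = relative component outside the subspace.
x = basis @ rng.standard_normal(10)
err = np.linalg.norm(x - Ur @ (Ur.T @ x)) / np.linalg.norm(x)
print(r, err < 1e-2)  # rank recovers the true dimension; error is small
```

The probabilistic error bound discussed in the text goes further than this deterministic check: it certifies, with high probability, that the discarded components stay below tolerance over the whole application range rather than at a single test point.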
Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao
2016-11-25
Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.
A sub-space greedy search method for efficient Bayesian Network inference.
Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing
2011-09-01
Bayesian network (BN) approaches have been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is its computational cost, because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian network inference. In particular, this method limits the greedy search space by only selecting gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieves results comparable to the standard greedy search method while saving ∼50% of the computational time. We believe that the sub-space search method can be widely used for efficient BN inference in systems biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
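The filtering step that defines the restricted search space can be sketched directly: compute pairwise partial correlations from the inverse covariance (precision) matrix and keep only the gene pairs above a threshold. The three-gene example and the 0.5 threshold are assumptions for illustration, not the paper's data or tuning.

```python
import numpy as np

def partial_correlations(X):
    """Pairwise partial correlation matrix from the precision matrix:
    rho_ij = -P_ij / sqrt(P_ii * P_jj)."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

rng = np.random.default_rng(4)
# Hypothetical expression data: g2 depends on g1; g3 is independent.
g1 = rng.standard_normal(500)
g2 = 0.8 * g1 + 0.3 * rng.standard_normal(500)
g3 = rng.standard_normal(500)
X = np.column_stack([g1, g2, g3])

R = partial_correlations(X)
# Restrict the greedy structure search to pairs above a threshold.
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(R[i, j]) > 0.5]
print(pairs)  # only the (g1, g2) edge survives the filter
```

A greedy structure search would then evaluate edge additions only among `pairs`, which is how the method trims the super-exponential search space the abstract mentions.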
NASA Astrophysics Data System (ADS)
Li, Xiao-Tian; Yang, Xiao-Bao; Zhao, Yu-Jun
2017-04-01
We have developed an extended distance matrix approach to study the molecular geometric configuration through spectral decomposition. It is shown that the positions of all atoms in the eigen-space can be specified precisely by their eigen-coordinates, while the refined atomic eigen-subspace projection array adopted in our approach is demonstrated to be a competent invariant in structure comparison. Furthermore, a visual eigen-subspace projection function (EPF) is derived to characterize the surrounding configuration of an atom naturally. A complete set of atomic EPFs constitute an intrinsic representation of molecular conformation, based on which the interatomic EPF distance and intermolecular EPF distance can be reasonably defined. Exemplified with a few cases, the intermolecular EPF distance shows exceptional rationality and efficiency in structure recognition and comparison.
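The idea of recovering atomic positions in an eigen-space from a distance matrix can be illustrated with classical multidimensional scaling: double-center the squared distance matrix and take coordinates from its top eigenpairs. This is a hedged illustration of eigen-coordinates from spectral decomposition, not the paper's extended distance matrix or EPF construction; the methane-like geometry is an assumption.

```python
import numpy as np

def eigen_coordinates(D, k=3):
    """Recover k-dimensional coordinates (up to rotation/translation)
    from an interatomic distance matrix by spectral decomposition
    (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]      # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# A methane-like geometry: central atom at the origin, four atoms
# at the vertices of a tetrahedron.
atoms = np.array([[0., 0., 0.], [1., 1., 1.], [1., -1., -1.],
                  [-1., 1., -1.], [-1., -1., 1.]])
D = np.linalg.norm(atoms[:, None] - atoms[None, :], axis=-1)

coords = eigen_coordinates(D)
D2 = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
print(np.allclose(D, D2))  # True: all interatomic distances reproduced
```

Because the recovered coordinates are defined only up to a rigid motion, any rotation-invariant function of them, like the projection arrays described above, is a natural candidate for structure comparison.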
NASA Astrophysics Data System (ADS)
Lewandowski, Jerzy; Lin, Chun-Yen
2017-03-01
We explicitly solved the anomaly-free quantum constraints proposed by Tomlin and Varadarajan for the weak Euclidean model of canonical loop quantum gravity, in a large subspace of the model's kinematic Hilbert space, which is the space of the charge network states. In doing so, we first identified the subspace on which each of the constraints acts convergently, and then, by explicitly evaluating such actions, we found the complete set of solutions in the identified subspace. We showed that the space of solutions consists of two classes of states, with the first class having a property that involves the condition known from the Minkowski theorem on polyhedra, and the second class satisfying a weaker form of spatial diffeomorphism invariance.
Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis
2015-01-01
We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
Experimental creation of quantum Zeno subspaces by repeated multi-spin projections in diamond
NASA Astrophysics Data System (ADS)
Kalb, N.; Cramer, J.; Twitchen, D. J.; Markham, M.; Hanson, R.; Taminiau, T. H.
2016-10-01
Repeated observations inhibit the coherent evolution of quantum states through the quantum Zeno effect. In multi-qubit systems this effect provides opportunities to control complex quantum states. Here, we experimentally demonstrate that repeatedly projecting joint observables of multiple spins creates quantum Zeno subspaces and simultaneously suppresses the dephasing caused by a quasi-static environment. We encode up to two logical qubits in these subspaces and show that the enhancement of the dephasing time with increasing number of projections follows a scaling law that is independent of the number of spins involved. These results provide experimental insight into the interplay between frequent multi-spin measurements and slowly varying noise and pave the way for tailoring the dynamics of multi-qubit systems through repeated projections.
Projection methods for the numerical solution of Markov chain models
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method seeks an approximation to the original problem from a subspace of small dimension: the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods, such as successive overrelaxation, symmetric successive overrelaxation, or incomplete factorization methods, to enhance convergence.
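The Krylov subspace construction named here is built by the Arnoldi process: orthonormalize v, Av, A²v, ... and keep the small projected matrix H = Vᵀ A V. A compact sketch, applied to the stationary distribution of a toy 3-state column-stochastic chain (illustrative code, not from the report):

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of span{v, Av, ..., A^(m-1) v} plus the projected
    Hessenberg matrix H; stops early if the Krylov subspace becomes invariant."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt step
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # breakdown: invariant subspace
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Stationary distribution: project P onto the Krylov subspace, take the
# dominant eigenvector of the small matrix H, and lift it back with V.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.3],
              [0.2, 0.3, 0.4]])     # column-stochastic transition matrix
V, H = arnoldi(P, np.ones(3), m=3)
w, Y = np.linalg.eig(H)
pi = V @ Y[:, np.argmax(w.real)].real
pi = pi / pi.sum()                  # normalize to a probability vector
assert np.allclose(P @ pi, pi, atol=1e-8)
```

For a chain this small the Krylov subspace spans the whole space, so the projected eigenproblem is exact; the payoff of the method is that for large sparse N, m can stay tiny.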
Feedback-tuned, noise resilient gates for encoded spin qubits
NASA Astrophysics Data System (ADS)
Bluhm, Hendrik
Spin 1/2 particles form native two-level systems and thus lend themselves as a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single-qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange-based two-qubit gates.
Louis, Preeti Tabitha; Kumar, Navin
2016-01-01
Background: Perfectionism is a multifaceted concept with both advantages and disadvantages. Perfectionistic traits have been associated with leadership and high intellectual ability. The present study attempts to understand whether engineering students possess a perfectionistic orientation and whether it influences self-efficacy, social connectedness, and achievement motivation. Materials and Methods: The present study adopts a random sampling design to evaluate the presence of perfectionism as a personality trait among undergraduate engineering students (N = 320). Standardized inventories such as the Almost Perfect Scale-Revised were administered, first to identify perfectionists and second to differentiate the adaptive from the maladaptive perfectionists. Scheduled interviews were conducted with students to obtain information regarding birth order and family functioning. Results: Findings from the study reveal that there were a significant number of maladaptive perfectionists and that they experienced higher levels of personal and societal demands, leading to a more negative emotional well-being in comparison to the adaptive perfectionists. We also observed that first-born children were more likely to display a perfectionistic self-presentation, and from scheduled interviews we understood that paternal influences were stronger when it came to decision-making and display of conscientiousness. Conclusion: The study draws important implications for helping students to understand perfectionism and to respond to demands of the family and societal subsystems in a positive and adaptive manner. PMID:27833225
A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters
Wang, Zhihao; Yi, Jing
2016-01-01
To address the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determined the possible maximum number of clusters instead of using the empirical rule √n, and obtained the optimal initial cluster centroids, improving on the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that when the number of clusters approached the number of objects in the dataset, the value of the clustering validity index did not monotonically decrease toward zero, so that the estimate of the optimal number of clusters did not lose robustness or discriminating power. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the iterations of FCM while giving a stable clustering result. PMID:28042291
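A bare-bones illustration of the ingredients this abstract describes — plain FCM plus a compactness/separation validity index scanned over candidate cluster counts — might look like the following (the function names and the simple Xie-Beni-style index are our own illustrative choices, not the paper's algorithm):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # fuzzy memberships (n, c)
    for _ in range(iters):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, C

def compactness_separation(X, U, C, m=2.0):
    """Xie-Beni-style validity score (compactness / separation): lower is better."""
    Um = U ** m
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1)
    compact = (Um * d ** 2).sum() / len(X)
    sep = np.min([np.linalg.norm(a - b) ** 2
                  for i, a in enumerate(C) for b in C[i + 1:]])
    return compact / sep

# Two well-separated blobs: c = 2 should score better than c = 3 or 4.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(8, 0.3, (50, 2))])
scores = {c: compactness_separation(X, *fcm(X, c)) for c in (2, 3, 4)}
assert min(scores, key=scores.get) == 2
```

The paper's contribution is precisely in replacing this naive scan with a density-based initial estimate and a penalized validity index that stays well-behaved as c grows.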
Wingood, Gina M; DiClemente, Ralph J; Villamizar, Kira; Er, Deja L; DeVarona, Martina; Taveras, Janelle; Painter, Thomas M; Lang, Delia L; Hardin, James W; Ullah, Evelyn; Stallworth, JoAna; Purcell, David W; Jean, Reynald
2011-12-01
We developed and assessed AMIGAS (Amigas, Mujeres Latinas, Informándonos, Guiándonos, y Apoyándonos contra el SIDA [friends, Latina women, informing each other, guiding each other, and supporting each other against AIDS]), a culturally congruent HIV prevention intervention for Latina women adapted from SiSTA (Sistas Informing Sistas about Topics on AIDS), an intervention for African American women. We recruited 252 Latina women aged 18 to 35 years in Miami, Florida, in 2008 to 2009 and randomized them to the 4-session AMIGAS intervention or a 1-session health intervention. Participants completed audio computer-assisted self-interviews at baseline and follow-up. Over the 6-month follow-up, AMIGAS participants reported more consistent condom use during the past 90 (adjusted odds ratio [AOR] = 4.81; P < .001) and 30 (AOR = 3.14; P < .001) days and at last sexual encounter (AOR = 2.76; P < .001), and a higher mean percentage condom use during the past 90 (relative change = 55.7%; P < .001) and 30 (relative change = 43.8%; P < .001) days than did comparison participants. AMIGAS participants reported fewer traditional views of gender roles (P = .008), greater self-efficacy for negotiating safer sex (P < .001), greater feelings of power in relationships (P = .02), greater self-efficacy for using condoms (P < .001), and greater HIV knowledge (P = .009) and perceived fewer barriers to using condoms (P < .001). Our results support the efficacy of this linguistically and culturally adapted HIV intervention among ethnically diverse, predominantly foreign-born Latina women.
Adaptive leadership curriculum for Indian paramedic trainees.
Mantha, Aditya; Coggins, Nathaniel L; Mahadevan, Aditya; Strehlow, Rebecca N; Strehlow, Matthew C; Mahadevan, S V
2016-12-01
Paramedic trainees in developing countries face complex and chaotic clinical environments that demand effective leadership, communication, and teamwork. Providers must rely on non-technical skills (NTS) to manage bystanders and attendees, collaborate with other emergency professionals, and safely and appropriately treat patients. The authors designed a NTS curriculum for paramedic trainees focused on adaptive leadership, teamwork, and communication skills critical to the Indian prehospital environment. Forty paramedic trainees in the first academic year of the 2-year Advanced Post-Graduate Degree in Emergency Care (EMT-paramedic equivalent) program at the GVK-Emergency Management and Research Institute campus in Hyderabad, India, participated in the 6-day leadership course. Trainees completed self-assessments and delivered two brief video-recorded presentations before and after completion of the curriculum. Independent blinded observers scored the pre- and post-intervention presentations delivered by 10 randomly selected paramedic trainees. The third-party judges reported significant improvement in both confidence (25 %, p < 0.01) and body language of paramedic trainees (13 %, p < 0.04). Self-reported competency surveys indicated significant increases in leadership (2.6 vs. 4.6, p < 0.001, d = 1.8), public speaking (2.9 vs. 4.6, p < 0.001, d = 1.4), self-reflection (2.7 vs. 4.6, p < 0.001, d = 1.6), and self-confidence (3.0 vs. 4.8, p < 0.001, d = 1.5). Participants in a 1-week leadership curriculum for prehospital providers demonstrated significant improvement in self-reported NTS commonly required of paramedics in the field. The authors recommend integrating focused NTS development curriculum into Indian paramedic education and further evaluation of the long term impacts of this adaptive leadership training.
Poslawsky, Irina E; Naber, Fabiënne Ba; Bakermans-Kranenburg, Marian J; van Daalen, Emma; van Engeland, Herman; van IJzendoorn, Marinus H
2015-07-01
In a randomized controlled trial, we evaluated the early intervention program Video-feedback Intervention to promote Positive Parenting adapted to Autism (VIPP-AUTI) with 78 primary caregivers and their child (16-61 months) with Autism Spectrum Disorder. VIPP-AUTI is a brief attachment-based intervention program, focusing on improving parent-child interaction and reducing the child's individual Autism Spectrum Disorder-related symptomatology in five home visits. VIPP-AUTI, as compared with usual care, demonstrated efficacy in reducing parental intrusiveness. Moreover, parents who received VIPP-AUTI showed increased feelings of self-efficacy in child rearing. No significant group differences were found on other aspects of parent-child interaction or on child play behavior. At 3-months follow-up, intervention effects were found on child-initiated joint attention skills, not mediated by intervention effects on parenting. Implementation of VIPP-AUTI in clinical practice is facilitated by the use of a detailed manual and a relatively brief training of interveners. © The Author(s) 2014.
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP), which provides improved spectral information, reduces data redundancy, and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed from the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, a self-adaptive weighted-coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
Adaptive Set-Based Methods for Association Testing
Su, Yu-Chen; Gauderman, W. James; Kiros, Berhane; Lewinger, Juan Pablo
2017-01-01
With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations, followed closely by the global model of random effects (GMRE) and a LASSO-based test. PMID:26707371
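The core RTP/ARTP mechanics — truncated products of ordered p-values (sums of their logs), with both per-truncation significance and the adaptive choice of truncation point assessed by permutation — can be sketched as follows (a simplified illustration with made-up data, not the authors' implementation):

```python
import numpy as np

def artp_pvalue(p_obs, p_perm, ks=(1, 2, 5)):
    """Adaptive rank truncated product (ARTP) significance via permutation.
    p_obs: (m,) observed per-SNP p-values; p_perm: (B, m) null permutations."""
    W = np.vstack([p_obs[None, :], p_perm])            # row 0 = observed data
    logs = np.sort(np.log(W), axis=1).cumsum(axis=1)   # w_k = sum of k smallest log-p
    Wk = logs[:, [k - 1 for k in ks]]                  # RTP statistic at each k
    # permutation p-value of every row's statistic, separately for each k
    pk = (Wk[None, :, :] <= Wk[:, None, :]).mean(axis=1)
    s = pk.min(axis=1)                                 # adaptive: best k per row
    return (s <= s[0]).mean()                          # observed vs. null minima

rng = np.random.default_rng(0)
B, m = 199, 10
p_perm = rng.uniform(size=(B, m))            # null p-values from permutations
p_null = rng.uniform(size=m)                 # a set with no association
p_sig = p_null.copy()
p_sig[:2] = 1e-6                             # two strongly associated SNPs
assert artp_pvalue(p_sig, p_perm) < 0.05     # clear signal is detected
```

Taking the minimum over k in both the observed and every permuted replicate is what keeps the adaptive selection from inflating the type I error.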
Latent profile analysis of sixth graders based on teacher ratings: Association with school dropout.
Orpinas, Pamela; Raczynski, Katherine; Peters, Jaclyn Wetherington; Colman, Laura; Bandalos, Deborah
2015-12-01
The goal of this study was to identify meaningful groups of sixth graders with common characteristics based on teacher ratings of assets and maladaptive behaviors, describe dropout rates for each group, and examine the validity of these groups using students' self-reports. The sample consisted of racially diverse students (n = 675) attending sixth grade in public schools in Northeast Georgia. The majority of the sample was randomly selected; a smaller group was identified by teachers as high risk for aggression. Based on teacher ratings of externalizing behaviors, internalizing problems, academic skills, leadership, and social assets, latent profile analysis yielded 7 classes that can be displayed along a continuum: Well-Adapted, Average, Average-Social Skills Deficit, Internalizing, Externalizing, Disruptive Behavior with School Problems, and Severe Problems. Dropout rate was lowest for the Well-Adapted class (4%) and highest for the Severe Problems class (58%). However, students in the Average-Social Skills Deficit class did not follow the continuum, with a large proportion of students who abandoned high school (29%). The proportion of students identified by teachers as high in aggression consistently increased across the continuum from none in the Well-Adapted class to 84% in the Severe Problems class. Students' self-reports were generally consistent with the latent profile classes. Students in the Well-Adapted class reported low aggression, drug use, and delinquency, and high life satisfaction; self-reports went in the opposite direction for the Disruptive Behaviors with School Problems class. Results highlight the importance of early interventions to improve academic performance, reduce externalizing behaviors, and enhance social assets. (c) 2015 APA, all rights reserved.
Constraint-Free Theories of Gravitation
NASA Technical Reports Server (NTRS)
Estabrook, Frank B.; Robinson, R. Steve; Wahlquist, Hugo D.
1998-01-01
Lovelock actions (more precisely, extended Gauss-Bonnet forms), when varied as Cartan forms on subspaces of higher dimensional flat Riemannian manifolds, generate well-set, causal exterior differential systems. In particular, the Einstein-Hilbert action 4-form, varied on a 4-dimensional subspace of E^10, yields a well-set generalized theory of gravity having no constraints. Ricci-flat solutions are selected by initial conditions on a bounding 3-space.
Application of Subspace Detection to the 6 November 2011 M5.6 Prague, Oklahoma Aftershock Sequence
NASA Astrophysics Data System (ADS)
McMahon, N. D.; Benz, H.; Johnson, C. E.; Aster, R. C.; McNamara, D. E.
2015-12-01
Subspace detection is a powerful tool for the identification of small seismic events. Subspace detectors improve upon single-event matched filtering techniques by using multiple orthogonal waveform templates whose linear combinations characterize a range of observed signals from previously identified earthquakes. Subspace detectors running on multiple stations can significantly increase the number of locatable events, lowering the catalog's magnitude of completeness and thus providing extraordinary detail on the kinematics of the aftershock process. The 6 November 2011 M5.6 earthquake near Prague, Oklahoma is the largest earthquake instrumentally recorded in Oklahoma history and the largest earthquake resulting from deep wastewater injection. A M4.8 foreshock on 5 November 2011 and the M5.6 mainshock triggered tens of thousands of detectable aftershocks along a 20 km splay of the Wilzetta Fault Zone known as the Meeker-Prague fault. In response to this unprecedented earthquake, 21 temporary seismic stations were deployed surrounding the seismic activity. We utilized a catalog of 767 previously located aftershocks to construct subspace detectors for the 21 temporary and 10 closest permanent seismic stations. Subspace detection identified more than 500,000 new arrival-time observations, which associated into more than 20,000 locatable earthquakes. The associated earthquakes were relocated using the Bayesloc multiple-event locator, resulting in ~7,000 earthquakes with hypocentral uncertainties of less than 500 m. The relocated seismicity provides unique insight into the spatio-temporal evolution of the aftershock sequence along the Wilzetta Fault Zone and its associated structures. We find that the crystalline basement and overlying sedimentary Arbuckle formation accommodate the majority of aftershocks.
While we observe aftershocks along the entire 20 km length of the Meeker-Prague fault, the vast majority of earthquakes were confined to a 9 km wide by 9 km deep surface striking N54°E and dipping 83° to the northwest near the junction of the splay with the main Wilzetta fault structure. Relocated seismicity shows off-fault stress-related interaction to distances of 10 km or more from the mainshock, including clustered seismicity to the northwest and southeast of the mainshock.
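The detection statistic behind such subspace detectors — the fraction of sliding-window energy captured by an orthonormal basis spanning a set of template waveforms — can be illustrated on synthetic data (a toy matched-subspace sketch, not production seismic-network code):

```python
import numpy as np

def subspace_detector(templates, data, dim=2, threshold=0.8):
    """Sliding-window subspace detector.
    templates: (k, n) aligned template waveforms; data: continuous trace."""
    # Orthonormal basis for the span of the templates via SVD
    U = np.linalg.svd(np.asarray(templates).T, full_matrices=False)[0][:, :dim]
    n = U.shape[0]
    stats = np.empty(len(data) - n + 1)
    for t in range(len(stats)):
        x = data[t:t + n]
        proj = U.T @ x                                 # coordinates in subspace
        stats[t] = (proj @ proj) / (x @ x + 1e-20)     # fraction of window energy
    return stats > threshold, stats

# A noisy copy of a template buried in noise should trigger the detector
# at (and only near) its insertion time.
rng = np.random.default_rng(2)
n = 64
t1 = np.sin(np.linspace(0, 8 * np.pi, n))              # two phase-shifted
t2 = np.sin(np.linspace(0, 8 * np.pi, n) + 0.5)        # template waveforms
data = 0.05 * rng.normal(size=1000)
data[400:400 + n] += t1 + 0.05 * rng.normal(size=n)    # buried "event"
hits, stats = subspace_detector([t1, t2], data, dim=2, threshold=0.8)
assert hits[400] and 395 <= stats.argmax() <= 405
```

Because the statistic is an energy fraction rather than a single-template correlation, events that are linear combinations of the templates (here, phase-shifted arrivals) are still detected, which is the advantage over ordinary matched filtering that the abstract describes.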
Tang, Jie; Yang, Wei; Ahmed, Niman Isse; Ma, Ying; Liu, Hui-Yan; Wang, Jia-Ji; Wang, Pei-Xi; Du, Yu-Kai; Yu, Yi-Zhen
2016-03-01
Stressful life events have been implicated in the etiology of various kinds of psychopathology related to nonsuicidal self-injury (NSSI); however, few studies have examined the association between NSSI and stressful life events directly in Chinese school adolescents. In this study, we aim to estimate the prevalence rate of NSSI and examine its association with stressful life events in Southern Chinese adolescents. A total sample of 4405 students aged 10 to 22 years was randomly selected from 12 schools in 3 cities of Guangdong Province, China. NSSI, stressful life events, self-esteem, emotional management, and coping methods were measured by structured questionnaires. Multinomial logistic regression was used to examine the association of NSSI with stressful life events. Results showed that the 1-year self-reported NSSI rate was 29.2%, with 22.6% engaged in "minor" NSSI (including hitting self, pulling hair, biting self, inserting objects under nails or skin, picking at a wound) and 6.6% in "moderate/severe" NSSI (including cutting/carving, burning, self-tattooing, scraping, and erasing skin). Self-hitting (15.9%), pulling hair out (10.9%), and inserting objects under the nails or skin or picking areas of skin to draw blood (18.3%) were the most frequent types of NSSI among adolescents. Results also showed that "minor NSSI" was associated with stressful life events involving interpersonal relationships, loss, and health adaptation, and "moderate/severe NSSI" was associated with life events involving interpersonal relationships and health adaptation in Southern Chinese adolescents, even after adjusting for sex, age, residence, self-esteem, coping style, and emotional management. Results further suggested that stressful life events were significantly associated with less risk of NSSI in those who had good emotional management ability.
Kassié, Daouda; Roudot, Anna; Dessay, Nadine; Piermay, Jean-Luc; Salem, Gérard; Fournet, Florence
2017-04-18
Many cities in developing countries experience unplanned and rapid growth. Several studies have shown that irregular urbanization and uneven provision of infrastructure produce different health risks and uneven exposure to specific diseases. Consequently, health surveys within cities should be carried out at the micro-local scale, and sampling methods should try to capture this urban diversity. This article describes the methodology used to develop a multi-stage sampling protocol to select a population for a demographic survey investigating health disparities in the medium-sized city of Bobo-Dioulasso, Burkina Faso. It is based on a characterization of the typology of Bobo-Dioulasso that takes into account the city's heterogeneity, as determined by analysis of the built environment and of the distribution of urban infrastructures, such as healthcare structures or water fountains, through photo-interpretation of aerial photographs and satellite images. Principal component analysis and hierarchical ascendant classification were then used to generate the city typology. Five groups of spaces with specific profiles were identified according to a set of variables that can be considered proxy indicators of health status. Within these five groups, four sub-spaces were randomly selected for the study. We were then able to survey 1045 households across the selected sub-spaces. The pertinence of this approach is discussed in comparison with classical sampling approaches, such as the random walk method. This urban space typology allowed us to select a population living in areas representative of the uneven urbanization process, and to characterize its health status with regard to several indicators (nutritional status, communicable and non-communicable diseases, and anaemia). Although this method should be validated and compared with more established methods, it appears to be a useful alternative in developing countries where geographic and population data are scarce.
2012-01-01
Background Despite computational challenges, elucidating conformations that a protein system assumes under physiologic conditions for the purpose of biological activity is a central problem in computational structural biology. While these conformations are associated with low energies in the energy surface that underlies the protein conformational space, few existing conformational search algorithms focus on explicitly sampling low-energy local minima in the protein energy surface. Methods This work proposes a novel probabilistic search framework, PLOW, that explicitly samples low-energy local minima in the protein energy surface. The framework combines algorithmic ingredients from evolutionary computation and computational structural biology to effectively explore the subspace of local minima. A greedy local search maps a conformation sampled in conformational space to a nearby local minimum. A perturbation move jumps out of a local minimum to obtain a new starting conformation for the greedy local search. The process repeats in an iterative fashion, resulting in a trajectory-based exploration of the subspace of local minima. Results and conclusions The analysis of PLOW's performance shows that, by navigating only the subspace of local minima, PLOW is able to sample conformations near a protein's native structure, either more effectively or as well as state-of-the-art methods that focus on reproducing the native structure for a protein system. Analysis of the actual subspace of local minima shows that PLOW samples this subspace more effectively than a naive sampling approach. Additional theoretical analysis reveals that the perturbation function employed by PLOW is key to its ability to sample a diverse set of low-energy conformations. This analysis also suggests directions for further research and novel applications for the proposed framework. PMID:22759582
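The two ingredients described in the Methods — a greedy local search that maps a sampled point to a nearby local minimum, and a perturbation move that jumps out of it — form a basin-hopping-style loop. A toy sketch on a rugged one-dimensional "energy surface" (illustrative only; PLOW itself operates on protein conformations with far richer moves):

```python
import numpy as np

def local_min(f, x, step=0.1):
    """Greedy local search: walk downhill along coordinate directions,
    shrinking the step until no improving move remains."""
    x = np.asarray(x, float)
    while step > 1e-6:
        moves = [x + s * e for e in np.eye(len(x)) for s in (step, -step)]
        better = [m for m in moves if f(m) < f(x)]
        if better:
            x = min(better, key=f)      # take the best improving move
        else:
            step /= 2                   # stuck at this resolution: refine
    return x

def explore_minima(f, x0, jumps=50, scale=1.0, seed=0):
    """PLOW-style loop: map to a nearby local minimum, perturb, repeat;
    the trajectory stays inside the subspace of local minima."""
    rng = np.random.default_rng(seed)
    best = local_min(f, x0)
    for _ in range(jumps):
        cand = local_min(f, best + rng.normal(scale=scale, size=len(best)))
        if f(cand) < f(best):           # keep the lowest minimum found so far
            best = cand
    return best

# Rugged 1-D energy surface with many local minima, global minimum at 0.
f = lambda x: float(x[0] ** 2 + 2 * (1 - np.cos(3 * np.pi * x[0])))
start = local_min(f, [2.3])
best = explore_minima(f, [2.3])
assert f(best) <= f(start)              # perturbation jumps never lose ground
```

The abstract's theoretical point maps onto the `scale` parameter here: a perturbation too small never escapes the current basin, while one too large degenerates into blind restarts.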
Application of Earthquake Subspace Detectors at Kilauea and Mauna Loa Volcanoes, Hawai`i
NASA Astrophysics Data System (ADS)
Okubo, P.; Benz, H.; Yeck, W.
2016-12-01
Recent studies have demonstrated the capabilities of earthquake subspace detectors for detailed cataloging and tracking of seismicity in a number of regions and settings. We are exploring the application of subspace detectors at the United States Geological Survey's Hawaiian Volcano Observatory (HVO) to analyze seismicity at Kilauea and Mauna Loa volcanoes. Elevated levels of microseismicity and occasional swarms of earthquakes associated with active volcanism here present cataloging challenges due to the sheer numbers of earthquakes and an intrinsically low signal-to-noise environment featuring oceanic microseism and volcanic tremor in the ambient seismic background. With high-quality continuous recording of seismic data at HVO, we apply subspace detectors (Harris and Dodge, 2011, Bull. Seismol. Soc. Am., doi: 10.1785/0120100103) during intervals of noteworthy seismicity. Waveform templates are drawn from Magnitude 2 and larger earthquakes within clusters of earthquakes cataloged in the HVO seismic database. At Kilauea, we focus on seismic swarms in the summit caldera region where, despite continuing eruptions from vents in the summit region and in the east rift zone, geodetic measurements reflect a relatively inflated volcanic state. We also focus on seismicity beneath and adjacent to Mauna Loa's summit caldera that appears to be associated with geodetic expressions of gradual volcanic inflation, and where precursory seismicity clustered prior to both Mauna Loa's most recent eruptions in 1975 and 1984. We recover several times more earthquakes with the subspace detectors - down to roughly 2 magnitude units below the templates, based on relative amplitudes - compared to the numbers of cataloged earthquakes. The increased numbers of detected earthquakes in these clusters, and the ability to associate and locate them, allow us to infer details of the spatial and temporal distributions and possible variations in stresses within these key regions of the volcanoes.
Direct lifts of coupled cell networks
NASA Astrophysics Data System (ADS)
Dias, A. P. S.; Moreira, C. S.
2018-04-01
In networks of dynamical systems, there are spaces defined in terms of equalities of cell coordinates which are flow-invariant under any dynamical system that has a form consistent with the given underlying network structure—the network synchrony subspaces. Given a network and one of its synchrony subspaces, any system with a form consistent with the network, restricted to the synchrony subspace, defines a new system which is consistent with a smaller network, called the quotient network of the original network by the synchrony subspace. Moreover, any system associated with the quotient can be interpreted as the restriction to the synchrony subspace of a system associated with the original network. We call the larger network a lift of the smaller network, and a lift can be interpreted as a result of the cellular splitting of the smaller network. In this paper, we address the question of the uniqueness in this lifting process in terms of the networks’ topologies. A lift G of a given network Q is said to be direct when there are no intermediate lifts of Q between them. We provide necessary and sufficient conditions for a lift of a general network to be direct. Our results characterize direct lifts using the subnetworks of all splitting cells of Q and of all split cells of G. We show that G is a direct lift of Q if and only if either the split subnetwork is a direct lift or consists of two copies of the splitting subnetwork. These results are then applied to the class of regular uniform networks and to the special classes of ring networks and acyclic networks. We also illustrate that one of the applications of our results is to the lifting bifurcation problem.
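For a network with a single edge type, whether a given cell partition defines a synchrony subspace reduces to a purely combinatorial check: the partition must be balanced (cells in the same class receive the same number of inputs from every class). A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def is_balanced(adj, classes):
    """Test whether a cell partition yields a network synchrony subspace.

    For a single edge type, the polydiagonal {x : x_i = x_j whenever cells
    i and j share a class} is flow-invariant for every admissible system
    iff the partition is balanced: any two cells in the same class receive
    equal numbers of inputs from each class (an equitable partition).
    adj[i][j] counts edges from cell j into cell i; `classes` is a list of
    collections of cell indices.
    """
    adj = np.asarray(adj)
    for cls in classes:
        profiles = {
            tuple(int(adj[i, list(other)].sum()) for other in classes)
            for i in cls
        }
        if len(profiles) > 1:   # two cells in one class see different inputs
            return False
    return True
```

In the lift/quotient language of the paper, a balanced partition of the lift G collapses each class to a single cell of the quotient Q, and any admissible system on G restricts to one on Q.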
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.
2016-05-01
Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. PCA is a widely used statistical tool developed during the first half of the past century; it serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data to boost the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations, such as the sensitivity of the lower-dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space. This three-term decomposition brings a detectability boost compared to the full-frame standard PCA approach, especially in the small inner working angle region, where complex speckle noise prevents PCA from discerning true companions from noise.
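The two key ingredients (a randomized low-rank approximation plus entry-wise thresholding of the residual) can be sketched in a few lines. This is a rough stand-in for the published LLSG algorithm, not its actual implementation; the function name, oversampling amount, and threshold are assumptions.

```python
import numpy as np

def low_rank_sparse_split(M, rank=2, thresh=1.0, seed=0):
    """Rough LLSG-style three-term split (illustrative sketch only).

    Approximates M as L (low-rank, via a randomized SVD) + S (entry-wise
    thresholded residual, sparse) + G (remaining small-amplitude noise).
    """
    rng = np.random.default_rng(seed)
    # Randomized range finder: sample the column space of M with a thin
    # Gaussian test matrix (oversampled by 5 columns).
    Y = M @ rng.standard_normal((M.shape[1], rank + 5))
    Q, _ = np.linalg.qr(Y)
    # SVD of the small projected matrix yields the low-rank factor.
    U, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    L = Q @ (U[:, :rank] * s[:rank]) @ Vt[:rank]
    R = M - L
    S = np.where(np.abs(R) > thresh, R, 0.0)   # keep large residuals (sparse term)
    G = R - S                                  # small residuals treated as noise
    return L, S, G
```

In the imaging application, L absorbs the quasi-static starlight/speckle pattern and the companion signal is expected to survive in S.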
Cultural differences in emotion regulation during self-reflection on negative personal experiences.
Tsai, William; Lau, Anna S
2013-01-01
Reflecting on negative personal experiences has implications for mood that may vary as a function of specific domains (e.g., achievement vs. interpersonal) and cultural orientation (e.g., interdependence vs. independence). This study investigated cultural differences in the social-cognitive and affective processes undertaken as Easterners and Westerners reflected on negative interpersonal and performance experiences. One hundred Asian Americans and 92 European-American college students were randomly assigned to one of three conditions: interpersonal rejection, achievement failure, or a control condition. Results revealed that Asian Americans experienced greater distress than European Americans after self-reflecting over a failed interpersonal experience, suggesting cultural sensitivity in the relational domain. Consistent with theoretical predictions, analysis of the social cognitive and affective processes that participants engaged in during self-reflection provided some evidence that self-enhancement may buffer distress for European Americans, while emotion suppression may be adaptive for Asian Americans.
Value self-confrontation as a method to aid in weight loss.
Schwartz, S H; Inbar-Saban, N
1988-03-01
The impact on weight loss of an adaptation of the Rokeach (1973) value self-confrontation method was investigated in a field experiment. This method confronts people who have ranked their own values with information about the value priorities that discriminate between a positive and a negative reference group. A preliminary study revealed that successful weight losers differ from unsuccessful weight losers in valuing "wisdom" more than "happiness." Eighty-seven overweight adults were randomly assigned to one of three conditions: value self-confrontation, group discussion, or non-treatment control. Value self-confrontation subjects lost more weight than the other subjects over 2 months, and this weight loss persisted for an additional year. Changes in value priorities during the first 2 months suggest that weight loss was mediated by an increase in the importance attributed to wisdom relative to happiness. Implications for the theory of value-behavior relations and for practical application in weight loss programs are discussed.
Structured sparse linear graph embedding.
Wang, Haixian
2012-03-01
Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the projection bases learning into a regression-type optimization problem, and then the structured sparsity regularization is applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structure information of the data. The SSLGE technique provides a unified framework for discovering structured sparse subspaces. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.
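The effect of a structured sparsity-inducing norm on regression coefficients can be illustrated by the proximal (shrinkage) step of a group-lasso penalty, which zeroes out whole feature groups jointly. This is only an illustration of the penalty's behavior, not the SSLGE solver itself (which uses a variational equality and a Procrustes transformation); the function name and groups are hypothetical.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """One proximal step for a group-lasso (l2,1-type) penalty.

    Each predefined group of coefficients is shrunk toward zero jointly,
    so entire groups are selected or discarded together -- the kind of
    structured sparsity imposed on regression coefficients in SSLGE-like
    methods.
    """
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * w[g]   # shrink the surviving group
        # else: the whole group is set to zero
    return out
```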
Liu, Hesen; Zhu, Lin; Pan, Zhuohong; ...
2015-09-14
One of the main drawbacks of the existing oscillation damping controllers that are designed based on offline dynamic models is their lack of adaptivity to the power system operating condition. With the increasing availability of wide-area measurements and the rapid development of system identification techniques, it is possible to identify a measurement-based transfer function model online that can be used to tune the oscillation damping controller. Such a model could capture all dominant oscillation modes for adaptive and coordinated oscillation damping control. Our paper describes a comprehensive approach to identify a low-order transfer function model of a power system using a multi-input multi-output (MIMO) autoregressive moving average exogenous (ARMAX) model. This methodology consists of five steps: 1) input selection; 2) output selection; 3) identification trigger; 4) model estimation; and 5) model validation. The proposed method is validated by using ambient data and ring-down data in the 16-machine 68-bus Northeast Power Coordinating Council system. Our results demonstrate that the measurement-based model using MIMO ARMAX can capture all the dominant oscillation modes. Compared with the MIMO subspace state space model, the MIMO ARMAX model has equivalent accuracy but lower order and improved computational efficiency. The proposed model can be applied for adaptive and coordinated oscillation damping control.
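The model estimation step can be illustrated with its simplest relative: a least-squares ARX fit for a single input/output pair. A full MIMO ARMAX fit additionally models the moving-average noise term and stacks multiple channels; this sketch (hypothetical function name and orders) only shows how the regressor matrix is built and solved.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares ARX identification (simplified SISO stand-in).

    Fits y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] + e[k] and returns
    (a, b). ARMAX would add a moving-average model of e[k] as well.
    """
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        # Regressor: most recent outputs and inputs, newest first.
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]
```

On ambient data, the estimated polynomial coefficients yield a transfer function whose poles give the oscillation modes (frequency and damping).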
NASA Technical Reports Server (NTRS)
Metzger, Philip T.
2006-01-01
Ergodicity is proved for granular contact forces. To obtain this proof from first principles, this paper generalizes Boltzmann's stosszahlansatz (molecular chaos) so that it maintains the necessary correlations and symmetries of granular packing ensembles. Then it formally counts granular contact force states and thereby defines the proper analog of Boltzmann's H functional. This functional is used to prove that (essentially) all static granular packings must exist at maximum entropy with respect to their contact forces. Therefore, the propagation of granular contact forces through a packing is a truly ergodic process in the Boltzmannian sense, or better, it is self-ergodic. Self-ergodicity refers to the non-dynamic, internal relationships that exist between the layer-by-layer and column-by-column subspaces contained within the phase space locus of any particular granular packing microstate. The generalized H Theorem also produces a recursion equation that may be solved numerically to obtain the density of single particle states and hence the distribution of granular contact forces corresponding to the condition of self-ergodicity. The predictions of the theory are overwhelmingly validated by comparison to empirical data from discrete element modeling.
Sabbagh, J; Dagher, S; El Osta, N; Souhaid, P
2017-01-01
Objectives. To compare the clinical performances of a self-adhering resin composite and a conventional flowable composite with a self-etch bonding system on permanent molars. The influence of using rubber dam versus cotton roll isolation was also investigated. Materials and Methods. Patients aged between 6 and 12 years and presenting at least two permanent molars in need of small class I restorations were selected. Thirty-four pairs of restorations were randomly placed by the same operator. Fifteen patients were treated under rubber dam and nineteen using cotton roll isolation and a saliva ejector. They were evaluated according to the modified USPHS criteria at baseline, 6 months, and 1 and 2 years by two independent evaluators. Results. All patients attended the two-year recall. For all measured variables, there was no significant difference between rubber dam and cotton roll isolation after 2 years of restoration with Premise Flowable or Vertise Flow (p value > 0.05). The percentage of restorations scored alpha decreased significantly over time with Premise Flowable and Vertise Flow for marginal adaptation and surface texture as well as marginal discoloration, while it did not vary significantly for color matching. After 2 years, Vertise Flow showed a similar behaviour to Premise Flowable used with a self-etch bonding system.
Machine learning algorithms for the creation of clinical healthcare enterprise systems
NASA Astrophysics Data System (ADS)
Mandal, Indrajit
2017-10-01
Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above 95% confidence interval). The study then extends to experimental analysis of the clinical recommender system with respect to the noisy data environment. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against the state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical data sets to reinforce the research findings.
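The random subspace method trains each base learner on a random subset of the features and aggregates by majority vote. The sketch below uses a nearest-centroid base learner for self-containment; the paper's ensemble combines the random subspace method with random forests, so treat the class name, base learner, and parameters here as assumptions for illustration.

```python
import numpy as np

class RandomSubspaceEnsemble:
    """Random subspace ensemble with nearest-centroid base learners."""

    def __init__(self, n_learners=25, subspace_frac=0.5, seed=0):
        self.n_learners = n_learners
        self.subspace_frac = subspace_frac
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        k = max(1, int(self.subspace_frac * d))
        self.classes_ = np.unique(y)
        self.models_ = []
        for _ in range(self.n_learners):
            # Each learner sees only a random subset of the features.
            feats = self.rng.choice(d, size=k, replace=False)
            centroids = np.array([X[y == c][:, feats].mean(axis=0)
                                  for c in self.classes_])
            self.models_.append((feats, centroids))
        return self

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)), dtype=int)
        for feats, centroids in self.models_:
            Xs = X[:, feats]
            dists = np.linalg.norm(Xs[:, None, :] - centroids[None, :, :], axis=2)
            votes[np.arange(len(X)), dists.argmin(axis=1)] += 1  # majority vote
        return self.classes_[votes.argmax(axis=1)]
```

Diversifying learners across feature subspaces is what gives the ensemble its robustness to noisy inputs, the property the study stresses for clinical data.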
NASA Technical Reports Server (NTRS)
Bykhovskiy, E. B.; Smirnov, N. V.
1983-01-01
The Hilbert space L2(omega) of vector functions is studied. A breakdown of L2(omega) into orthogonal subspaces is discussed and the properties of the operators for projection onto these subspaces are investigated from the standpoint of preserving the differential properties of the vectors being projected. Finally, the properties of the operators are examined.
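As standard background (stated here as an illustration, not taken from the report itself), the classical orthogonal splitting of the space of square-integrable vector fields has the form

```latex
\mathbf{L}^{2}(\Omega) \;=\; G(\Omega) \,\oplus\, J(\Omega),
\qquad
G(\Omega) = \overline{\{\nabla\varphi\}},
\qquad
J(\Omega) = \overline{\{\mathbf{u} : \nabla\!\cdot\!\mathbf{u} = 0,\;
\mathbf{u}\cdot\mathbf{n} = 0 \text{ on } \partial\Omega\}},
```

with orthogonal projections $P_G$ and $P_J$ satisfying $P^2 = P$, $P^{*} = P$, and $P_G + P_J = I$. The question studied is then whether such projections preserve differentiability, e.g. whether $\mathbf{u} \in \mathbf{H}^1(\Omega)$ implies $P\mathbf{u} \in \mathbf{H}^1(\Omega)$.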
Quantum error suppression with commuting Hamiltonians: two local is too local.
Marvian, Iman; Lidar, Daniel A
2014-12-31
We consider error suppression schemes in which quantum information is encoded into the ground subspace of a Hamiltonian comprising a sum of commuting terms. Since such Hamiltonians are gapped, they are considered natural candidates for protection of quantum information and topological or adiabatic quantum computation. However, we prove that they cannot be used to this end in the two-local case. By making the favorable assumption that the gap is infinite, we show that single-site perturbations can generate a degeneracy splitting in the ground subspace of this type of Hamiltonian which is of the same order as the magnitude of the perturbation, and is independent of the number of interacting sites and their Hilbert space dimensions, just as in the absence of the protecting Hamiltonian. This splitting results in decoherence of the ground subspace, and we demonstrate that for natural noise models the coherence time is proportional to the inverse of the degeneracy splitting. Our proof involves a new version of the no-hiding theorem which shows that quantum information cannot be approximately hidden in the correlations between two quantum systems. The main reason that two-local commuting Hamiltonians cannot be used for quantum error suppression is that their ground subspaces have only short-range (two-body) entanglement.
A Channelization-Based DOA Estimation Method for Wideband Signals
Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-01-01
In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method effectively isolates signals occupying different bandwidths and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
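The narrowband subspace step that Channelization-ISM runs on each sub-channel can be sketched with the classic MUSIC estimator: eigendecompose the sample covariance, take the noise subspace, and scan steering vectors for directions nearly orthogonal to it. This is a textbook MUSIC sketch for a half-wavelength uniform linear array, not the paper's receiver chain; the function name and grid density are assumptions.

```python
import numpy as np

def music_doa(X, n_sources, n_grid=1801):
    """Narrowband MUSIC pseudo-spectrum for a half-wavelength ULA.

    X: (n_sensors, n_snapshots) complex baseband output of one sub-channel.
    Returns (angles_deg, spectrum); peaks of the spectrum indicate DOAs.
    """
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvecs[:, :-n_sources]               # noise-subspace eigenvectors
    angles = np.linspace(-90.0, 90.0, n_grid)
    m = np.arange(X.shape[0])
    spec = np.empty(n_grid)
    for i, th in enumerate(angles):
        # Steering vector for a half-wavelength-spaced ULA.
        a = np.exp(1j * np.pi * m * np.sin(np.deg2rad(th)))
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return angles, spec
```

Averaging such per-sub-channel estimates (arithmetically or geometrically) is then the incoherent combination step described above.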
Mode Shape Estimation Algorithms Under Ambient Conditions: A Comparative Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dosiek, Luke; Zhou, Ning; Pierre, John W.
Abstract—This paper provides a comparative review of five existing ambient electromechanical mode shape estimation algorithms, i.e., the Transfer Function (TF), Spectral, Frequency Domain Decomposition (FDD), Channel Matching, and Subspace Methods. It is also shown that the TF Method is a general approach to estimating mode shape and that the Spectral, FDD, and Channel Matching Methods are actually special cases of it. Additionally, some of the variations of the Subspace Method are reviewed and the Numerical algorithm for Subspace State Space System IDentification (N4SID) is implemented. The five algorithms are then compared using data simulated from a 17-machine model of the Western Electricity Coordinating Council (WECC) under ambient conditions with both low and high damping, as well as during the case where ambient data is disrupted by an oscillatory ringdown. The performance of the algorithms is compared using the statistics from Monte Carlo Simulations and results from measured WECC data, and a discussion of the practical issues surrounding their implementation, including cases where power system probing is an option, is provided. The paper concludes with some recommendations as to the appropriate use of the various techniques. Index Terms—Electromechanical mode shape, small-signal stability, phasor measurement units (PMU), system identification, N4SID, subspace.
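Of the five reviewed algorithms, FDD is the most compact to sketch: average per-segment FFT outer products into a cross-spectral matrix G(f), then take the dominant singular vector at a resonant frequency as the mode shape estimate. The sketch below is a bare-bones illustration under ideal conditions (hypothetical function name and segment length), not any of the production implementations compared in the paper.

```python
import numpy as np

def fdd_mode_shape(Y, fs, nperseg=256):
    """Frequency Domain Decomposition sketch for ambient mode shapes.

    Y: (n_channels, n_samples) ambient response data sampled at fs Hz.
    Returns (peak frequency, mode shape normalized to its largest entry).
    """
    nch, n = Y.shape
    nseg = n // nperseg
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    G = np.zeros((len(freqs), nch, nch), dtype=complex)
    win = np.hanning(nperseg)
    for s in range(nseg):
        seg = Y[:, s * nperseg:(s + 1) * nperseg]
        F = np.fft.rfft(seg * win, axis=1)
        G += np.einsum('if,jf->fij', F, F.conj())   # accumulate cross-spectra
    # Pick the frequency bin with the largest total power (trace of G).
    k = np.argmax(np.trace(G, axis1=1, axis2=2).real)
    U, _, _ = np.linalg.svd(G[k])
    shape = U[:, 0]                                  # dominant singular vector
    return freqs[k], shape / shape[np.argmax(np.abs(shape))]
```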
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
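The SVM-based selection step belongs to the recursive-feature-elimination family: repeatedly fit a classifier and discard the feature with the smallest weight. The sketch below substitutes an ordinary least-squares linear model for the SVM so it stays self-contained; the function name and the scoring rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def rfe_rank(X, y, n_keep):
    """Simplified recursive feature elimination (least-squares stand-in).

    Repeatedly fits a linear model (the paper uses an SVM here) and drops
    the active feature with the smallest absolute weight until `n_keep`
    features remain. Returns the surviving feature indices.
    """
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        Xa = X[:, active]
        # Least-squares fit with an intercept column appended.
        w, *_ = np.linalg.lstsq(
            np.column_stack([Xa, np.ones(len(y))]), y, rcond=None)
        worst = int(np.argmin(np.abs(w[:-1])))   # ignore the intercept weight
        active.pop(worst)
    return active
```

In the integrated algorithm, the surviving features feed back into the constrained subspace learning step, and the loop is tuned by bootstrapped ROC-AUC.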
Zhang, Zhao; Yan, Shuicheng; Zhao, Mingbo
2014-05-01
Latent Low-Rank Representation (LatLRR) delivers robust and promising results for subspace recovery and feature extraction through mining the so-called hidden effects, but the locality of both similar principal and salient features cannot be preserved in the optimizations. To solve this issue for achieving enhanced performance, a boosted version of LatLRR, referred to as Regularized Low-Rank Representation (rLRR), is proposed through explicitly including an appropriate Laplacian regularization that can maximally preserve the similarity among local features. Resembling LatLRR, rLRR decomposes given data matrix from two directions by seeking a pair of low-rank matrices. But the similarities of principal and salient features can be effectively preserved by rLRR. As a result, the correlated features are well grouped and the robustness of representations is also enhanced. Based on the outputted bi-directional low-rank codes by rLRR, an unsupervised subspace learning framework termed Low-rank Similarity Preserving Projections (LSPP) is also derived for feature learning. The supervised extension of LSPP is also discussed for discriminant subspace learning. The validity of rLRR is examined by robust representation and decomposition of real images. Results demonstrated the superiority of our rLRR and LSPP in comparison to other related state-of-the-art algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.
Chambers, Rachel; Tingey, Lauren; Beach, Anna; Barlow, Allison; Rompalo, Anne
2016-04-29
American Indian adults are more likely to experience co-occurring mental health and substance use disorders than adults of other racial/ethnic groups and are disproportionately burdened by the most common sexually transmitted infections, namely chlamydia and gonorrhea. Several behavioral interventions are proven efficacious in lowering risk for sexually transmitted infection in various populations and, if adapted to address barriers experienced by American Indian adults who suffer from mental health and substance use problems, may be useful for dissemination in American Indian communities. The proposed study aims to examine the efficacy of an adapted evidence-based intervention to increase condom use and decrease sexual risk-taking and substance use among American Indian adults living in a reservation-based community in the Southwestern United States. The proposed study is a randomized controlled trial to test the efficacy of an adapted evidence-based intervention compared to a control condition. Participants will be American Indian adults ages 18-49 years old who had a recent episode of binge substance use and/or suicide ideation. Participants will be randomized to the intervention, a two-session risk-reduction counseling intervention or the control condition, optimized standard care. All participants will be offered a self-administered sexually transmitted infection test. Participants will complete assessments at baseline, 3 and 6 months follow-up. The primary outcome measure is condom use at last sex. This is one of the first randomized controlled trials to assess the efficacy of an adapted evidence-based intervention for reducing sexual risk behaviors among AI adults with substance use and mental health problems. If proven successful, there will be an efficacious program for reducing risk behaviors among high-risk adults that can be disseminated in American Indian communities as well as other rural and under-resourced health systems. Clinical Trials NCT02513225.
Effects of a psychosocial intervention on breast self-examination attitudes and behaviors.
Fry, Rachel B; Prentice-Dunn, Steven
2006-04-01
An educational intervention to promote breast self-examinations (BSEs) among young women was tested. In a group (intervention versus control) x time (Session 1 versus Session 2) mixed design, 172 college females were randomly assigned to either an intervention or control condition. Both groups attended two sessions; the second session was 48 hours after the first. The intervention consisted of an essay, lecture, video portraying young survivors of breast cancer, group discussions, self-test and instructions on performing BSEs. The control group had the same format; however, the information was focused on nutrition and exercise. Participants in the intervention group scored higher on rational problem solving and behavioral intentions, suggesting that the intervention increased adaptive responses to breast cancer threat. Conversely, control participants scored significantly higher on maladaptive reactions (e.g. hopelessness, avoidance and fatalistic religiosity) to breast cancer threat. For intervention participants, the initial decline in maladaptive reactions remained stable at 3-month follow-up, but adaptive reactions decreased. Intervention participants had greater confidence in performing BSEs compared with controls but performed them on an irregular basis. Results were interpreted in terms of protection motivation theory, a model that applies the social psychology of persuasion to preventive health.
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure.
Dharan, Smitha; Nair, Achuthsankar S
2009-01-30
Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church have introduced a measure called mean squared residue score to evaluate the quality of a bicluster and has become one of the most popular measures to search for biclusters. In this paper, we review basic concepts of the metaheuristics Greedy Randomized Adaptive Search Procedure (GRASP)-construction and local search phases and propose a new method which is a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP) to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP and as well as with the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms the basic GRASP algorithm and Cheng and Church approach. The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts.
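The Cheng and Church mean squared residue score referenced above has a closed form: for a submatrix, every entry is compared against its row mean plus column mean minus the overall mean. A direct implementation (hypothetical function name):

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Cheng & Church mean squared residue score of a bicluster.

    A low score means the submatrix A[rows][:, cols] is coherent: each
    entry is close to (row mean + column mean - overall mean). A perfectly
    additive bicluster scores exactly zero.
    """
    B = A[np.ix_(rows, cols)]
    row_means = B.mean(axis=1, keepdims=True)
    col_means = B.mean(axis=0, keepdims=True)
    residue = B - row_means - col_means + B.mean()
    return float((residue ** 2).mean())
```

GRASP-style searches such as the one proposed here grow seeds while keeping this score below a threshold.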
Appalasamy, Jamuna Rani; Tha, Kyi Kyi; Quek, Kia Fatt; Ramaiah, Siva Seeta; Joseph, Joyce Pauline; Md Zain, Anuar Zaini
2018-06-01
A substantial proportion of the world's population is left with moderate to severe long-term disability after stroke. Persistent, uncontrolled stroke risk factors lead to unpredictable recurrent stroke events. The increasing prevalence of stroke across ages in Malaysia has led to the adaptation of the medication therapy adherence clinic (MTAC) framework. The stroke care unit has limited patient education resources, especially for patients lacking medication understanding and use self-efficacy. Nevertheless, only a handful of studies have probed into the effectiveness of video narratives at stroke care centers. This is a behavioral randomized controlled trial of a patient education intervention with video narratives for stroke patients lacking medication understanding and use self-efficacy. The study will recruit up to 200 eligible stroke patients at the neurology tertiary outpatient clinic, who will be requested to return for follow-up approximately every 3 months for up to 12 months. Consenting patients will be randomized to either standard patient education care or the intervention with video narratives. The researchers will ensure control of potential confounding factors, as well as unbiased treatment review, with prescribed medications only obtained onsite. The primary analysis outcomes will reflect the variances in medication understanding and use self-efficacy scores, as well as the associated factors, such as retention of knowledge and changes in beliefs and perceptions, whereas stroke risk factor control, for example, self-monitoring and quality of life, will be the secondary outcomes. The study should be able to determine if video narratives can induce a positive behavioral change towards stroke risk factor control via enhanced medication understanding and use self-efficacy. This intervention is innovative as it combines health belief, motivation, and role model concepts to trigger self-efficacy in maintaining healthy behaviors and better disease management.
ACTRN (12618000174280).
Layout decomposition of self-aligned double patterning for 2D random logic patterning
NASA Astrophysics Data System (ADS)
Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.
2011-04-01
Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its reduced overlay problems and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layouts.
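At the heart of any double-patterning layout coloring step is a bipartiteness check on the conflict graph: polygons closer than the single-exposure spacing limit get a conflict edge, and the layout splits into two groups only if that graph has no odd cycle. Real SADP decomposition adds further constraints (the spacer is self-aligned to the core mask, and trim rules apply), so the sketch below, with hypothetical names, covers only this coloring subproblem.

```python
from collections import deque

def two_color(n, conflicts):
    """Two-color a decomposition conflict graph via BFS, if possible.

    n:         number of layout polygons (nodes 0..n-1)
    conflicts: list of (a, b) pairs whose spacing violates the
               single-exposure limit
    Returns a list of colors (0/1) per polygon, or None if an odd cycle
    makes the layout non-decomposable without modification.
    """
    color = [None] * n
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if color[w] is None:
                    color[w] = 1 - color[v]
                    q.append(w)
                elif color[w] == color[v]:
                    return None        # odd cycle: coloring conflict
    return color
```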
Lessons from Jurassic Park: patients as complex adaptive systems.
Katerndahl, David A
2009-08-01
With realization that non-linearity is generally the rule rather than the exception in nature, viewing patients and families as complex adaptive systems may lead to a better understanding of health and illness. Doctors who successfully practise the 'art' of medicine may recognize non-linear principles at work without having the jargon needed to label them. Complex adaptive systems are systems composed of multiple components that display complexity and adaptation to input. These systems consist of self-organized components, which display complex dynamics, ranging from simple periodicity to chaotic and random patterns showing trends over time. Understanding the non-linear dynamics of phenomena both internal and external to our patients can (1) improve our definition of 'health'; (2) improve our understanding of patients, disease and the systems in which they converge; (3) be applied to future monitoring systems; and (4) be used to possibly engineer change. Such a non-linear view of the world is quite congruent with the generalist perspective.
Pan, David; Huey, Stanley J; Hernandez, Dominica
2011-01-01
This study is a 6-month follow-up of a randomized pilot evaluation of standard one-session treatment (OST-S) versus culturally adapted OST (OST-CA) with phobic Asian Americans. OST-CA included seven cultural adaptations drawn from prior research with East Asians and Asian Americans. Results from 1-week and 6-month follow-up show that both OST-S and OST-CA were effective at reducing phobic symptoms compared with self-help control. Moreover, OST-CA was superior to OST-S for several outcomes. For catastrophic thinking and general fear, moderator analyses indicated that low-acculturation Asian Americans benefitted more from OST-CA than OST-S, whereas both treatments were equally effective for high-acculturation participants. Although cultural process factors (e.g., facilitating emotional control, exploiting the vertical therapist-client relationship) and working alliance were predictive of positive outcomes, they did not mediate treatment effects. This study offers a potential model for evaluating cultural adaptation effects, as well as the mechanisms that account for such effects.
Sirey, Jo Anne; Halkett, Ashley; Chambers, Stephanie; Salamone, Aurora; Bruce, Martha L; Raue, Patrick J; Berman, Jacquelin
2015-01-01
The goal of this pilot program was to test the usefulness of adapted Problem-Solving Therapy (PST) and anxiety management, called PROTECT, integrated into elder abuse services to reduce depression and improve self-efficacy. Depressed women victims were randomized to receive elder abuse resolution services combined with either PROTECT or a mental health referral. At follow-up, the PROTECT group showed a greater reduction in depressive symptoms and reported greater improvement in problem-solving self-efficacy when compared to those in the Referral condition. These preliminary findings support the potential usefulness of PROTECT to alleviate depressive symptoms and enhance personal resources among abused older women.
Goldin, Philippe; Ziv, Michal; Jazaieri, Hooria; Gross, James J.
2012-01-01
Background: Social anxiety disorder (SAD) is characterized by distorted self-views. The goal of this study was to examine whether mindfulness-based stress reduction (MBSR) alters behavioral and brain measures of negative and positive self-views. Methods: Fifty-six adult patients with generalized SAD were randomly assigned to MBSR or a comparison aerobic exercise (AE) program. A self-referential encoding task was administered at baseline and post-intervention to examine changes in behavioral and neural responses in the self-referential brain network during functional magnetic resonance imaging. Patients were cued to decide whether positive and negative social trait adjectives were self-descriptive or in upper case font. Results: Behaviorally, compared to AE, MBSR produced greater decreases in negative self-views, and equivalent increases in positive self-views. Neurally, during negative self versus case, compared to AE, MBSR led to increased brain responses in the posterior cingulate cortex (PCC). There were no differential changes for positive self versus case. Secondary analyses showed that changes in endorsement of negative and positive self-views were associated with decreased social anxiety symptom severity for MBSR, but not AE. Additionally, MBSR-related increases in dorsomedial prefrontal cortex (DMPFC) activity during negative self-view versus case were associated with decreased social anxiety related disability and increased mindfulness. Analysis of neural temporal dynamics revealed MBSR-related changes in the timing of neural responses in the DMPFC and PCC for negative self-view versus case. Conclusion: These findings suggest that MBSR attenuates maladaptive habitual self-views by facilitating automatic (i.e., uninstructed) recruitment of cognitive and attention regulation neural networks. 
This highlights potentially important links between self-referential and cognitive-attention regulation systems and suggests that MBSR may enhance more adaptive social self-referential processes in patients with SAD. PMID:23133411
A Grassmann graph embedding framework for gait analysis
NASA Astrophysics Data System (ADS)
Connie, Tee; Goh, Michael Kah Ong; Teoh, Andrew Beng Jin
2014-12-01
Gait recognition is important in a wide range of monitoring and surveillance applications. Gait information has often been used as evidence when other biometrics are indiscernible in the surveillance footage. Building on recent advances in subspace-based approaches, we consider the problem of gait recognition on the Grassmann manifold. We show that by embedding the manifold into a reproducing kernel Hilbert space and applying the mechanics of graph embedding on this manifold, significant performance improvement can be obtained. In this work, the gait recognition problem is studied in a unified way applicable to both supervised and unsupervised configurations. Sparse representation is further incorporated in the learning mechanism to adaptively harness the local structure of the data. Experiments demonstrate that the proposed method can effectively tolerate variations in appearance for gait identification.
NASA Astrophysics Data System (ADS)
Chau, H. F.; Wang, Qinan; Wong, Cardythy
2017-02-01
Recently, Chau [Phys. Rev. A 92, 062324 (2015), 10.1103/PhysRevA.92.062324] introduced an experimentally feasible qudit-based quantum-key-distribution (QKD) scheme. In that scheme, one bit of information is phase encoded in the prepared state in a 2n-dimensional Hilbert space in the form (|i⟩ ± |j⟩)/√2 with n ≥ 2. For each qudit prepared and measured in the same two-dimensional Hilbert subspace, one bit of raw secret key is obtained in the absence of transmission error. Here we show that by modifying the basis announcement procedure, the same experimental setup can generate n bits of raw key for each qudit prepared and measured in the same basis in the noiseless situation. The reason is that, in addition to the phase information, each qudit also carries information on the Hilbert subspace used. The additional (n - 1) bits of raw key come from a clever utilization of this extra piece of information. We prove the unconditional security of this modified protocol and compare its performance with other existing provably secure qubit- and qudit-based protocols on the market in the one-way classical communication setting. Interestingly, we find that for the case of n = 2, the secret key rate of this modified protocol using a nondegenerate random quantum code to perform one-way entanglement distillation is equal to that of the six-state scheme.
Hubig, Michael; Suchandt, Steffen; Adam, Nico
2004-10-01
Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so called branch-cut approaches play an important role. In 1996 M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed to transform the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that are discharging any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.
Improved analysis of SP and CoSaMP under total perturbations
NASA Astrophysics Data System (ADS)
Li, Haifeng
2016-12-01
Practically, in the underdetermined model y = Ax, where x is a K-sparse vector (i.e., it has no more than K nonzero entries), both y and A can be totally perturbed. A more relaxed condition means that fewer measurements are needed to guarantee sparse recovery from a theoretical standpoint. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions are presented for subspace pursuit (SP) and compressive sampling matching pursuit (CoSaMP) under total perturbations to guarantee that the sparse vector x is recovered. Taking a random matrix as the measurement matrix, we also discuss the advantage of our condition. Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.
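Both SP and CoSaMP share a greedy loop: identify candidate columns by correlation with the residual, solve a least-squares problem on the merged support, prune to the K largest coefficients, and update the residual. A minimal CoSaMP sketch for the noiseless case (illustrative only; it does not reproduce the paper's perturbed-model analysis, and all names are ours):

```python
import numpy as np

def cosamp(A, y, K, iters=20):
    """Minimal CoSaMP sketch: recover a K-sparse x from y = A x.

    Assumes A behaves well under the restricted isometry property,
    e.g. i.i.d. Gaussian entries scaled by 1/sqrt(m)."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.astype(float).copy()
    for _ in range(iters):
        # 1) Identify the 2K columns most correlated with the residual.
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * K:]
        # 2) Merge with the current support; least squares on that support.
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        b, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        # 3) Prune back to the K largest coefficients.
        x = np.zeros(n)
        top = np.argsort(np.abs(b))[-K:]
        x[support[top]] = b[top]
        # 4) Update the residual; stop once it is numerically zero.
        r = y - A @ x
        if np.linalg.norm(r) < 1e-12 * np.linalg.norm(y):
            break
    return x
```

Under RIP-type conditions on A (e.g., a Gaussian random measurement matrix, as in the paper's numerical experiments), this loop recovers x exactly from far fewer measurements than unknowns.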
Supersensitive ancilla-based adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Larson, Walker; Saleh, Bahaa E. A.
2017-10-01
The supersensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots—phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point to a less vulnerable phase value. Since this is not always feasible, we present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally supersensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.
A novel heterogeneous training sample selection method on space-time adaptive processing
NASA Astrophysics Data System (ADS)
Wang, Qiang; Zhang, Yongshun; Guo, Yiduo
2018-04-01
The performance of ground target detection in space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. First, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Second, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject contaminated training samples. Third, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourth, the distances are sorted by value and the training samples with the largest values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
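The similarity measure named in the abstract, the mean (average) Hausdorff distance between two point sets, can be computed directly. A minimal sketch with illustrative names, not the authors' full STAP selection pipeline:

```python
import numpy as np

def mean_hausdorff(A, B):
    """Mean (average) Hausdorff distance between point sets A and B,
    each given as an (n_points, dim) array."""
    # Pairwise Euclidean distances, D[i, j] = ||A[i] - B[j]||.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Average the directed mean nearest-neighbor distances both ways.
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())
```

Unlike the classical (max-based) Hausdorff distance, the mean variant averages nearest-neighbor distances, which makes it less sensitive to a single outlying sample.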
Jang, J-H; Kim, H-Y; Shin, S-M; Lee, C-O; Kim, D S; Choi, K-K; Kim, S-Y
The aim of this randomized controlled clinical trial was to compare the clinical effectiveness of different polishing systems and self-etch adhesives in class V composite resin restorations. A total of 164 noncarious cervical lesions (NCCLs) from 35 patients were randomly allocated to one of four experimental groups, each of which used a combination of polishing systems and adhesives. The two polishing systems used were Sof-Lex XT (Sof), a multistep abrasive disc, and Enhance/Pogo (EP), a simplified abrasive-impregnated rubber instrument. The adhesive systems were Clearfil SE bond (CS), a two-step self-etch adhesive, and Xeno V (XE), a one-step self-etch adhesive. All NCCLs were restored with light-cured microhybrid resin composites (Z250). Restorations were evaluated at baseline and at 6, 12, 18, and 24 months by two blinded independent examiners using modified FDI criteria. The Fisher exact test and generalized estimating equation analysis considering repeated measurements were performed to compare the outcomes between the polishing systems and adhesives. Three restorations were dislodged: two in CS/Sof and one in CS/EP. None of the restorations required any repair or retreatment except those showing retention loss. Sof was superior to EP with regard to surface luster, staining, and marginal adaptation (p<0.05). CS and XE did not show differences in any criteria (p>0.05). Sof is clinically superior to EP for polishing performance in class V composite resin restoration. XE demonstrates clinically equivalent bonding performance to CS.
Using simulation pedagogy to teach clinical education skills: A randomized trial.
Holdsworth, Clare; Skinner, Elizabeth H; Delany, Clare M
2016-05-01
Supervision of students is a key role of senior physiotherapy clinicians in teaching hospitals. The objective of this study was to test the effect of simulated learning environments (SLE) on educators' self-efficacy in student supervision skills. A pilot prospective randomized controlled trial with concealed allocation was conducted. Clinical educators were randomized to intervention (SLE) or control groups. SLE participants completed two 3-hour workshops, which included simulated clinical teaching scenarios, and facilitated debrief. Standard Education (StEd) participants completed two online learning modules. Change in educator clinical supervision self-efficacy (SE) and student perceptions of supervisor skill were calculated. Between-group comparisons of SE change scores were analyzed with independent t-tests to account for potential baseline differences in education experience. Eighteen educators (n = 18) were recruited (SLE [n = 10], StEd [n = 8]). Significant improvements in SE change scores were seen in SLE participants compared to control participants in three domains of self-efficacy: (1) talking to students about supervision and learning styles (p = 0.01); (2) adapting teaching styles for students' individual needs (p = 0.02); and (3) identifying strategies for future practice while supervising students (p = 0.02). This is the first study investigating SLE for teaching skills of clinical education. SLE improved educators' self-efficacy in three domains of clinical education. Sample size limited the interpretation of student ratings of educator supervision skills. Future studies using SLE would benefit from future large multicenter trials evaluating its effect on educators' teaching skills, student learning outcomes, and subsequent effects on patient care and health outcomes.
The AFLOW Standard for High-throughput Materials Science Calculations
2015-01-01
…inversion in the iterative subspace (RMM-DIIS) [10]. Of the two, DBS is known to be the slower and more stable option. Additionally, the subspace… RMM-DIIS steps as needed to fulfill the dEelec condition. Later determinations of system forces are performed by a similar sequence, but only a single…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yodgorov, G R; Ismail, F; Muminov, Z I
2014-12-31
We consider a certain model operator acting in a subspace of a fermionic Fock space. We obtain an analogue of Faddeev's equation. We describe the location of the essential spectrum of the operator under consideration and show that the essential spectrum consists of the union of at most four segments. Bibliography: 19 titles.
Pellitteri, Katelyn; Huberty, Jennifer; Ehlers, Diane; Bruening, Meg
Initial efficacy of a magazine-based discussion group for improving physical activity (PA), self-worth, and eating behaviors in female college freshmen. Randomized controlled trial. A large university in the southwestern United States. Thirty-seven female college freshmen were randomized to the intervention (n = 17) and control (n = 20) groups in September 2013. Participants completed an 8-week magazine-based discussion group program, Fit Minded College Edition, adapted from Fit Minded, a previously tested theory-based intervention. Education on PA, self-worth, and nutrition was provided using excerpts from women's health magazines. Participants also had access to a Web site with supplementary health and wellness material. The control group did not attend meetings or have access to the Web site but received the magazines. Interventions focusing on concepts of self-worth with less focus on weight and appearance may promote long-term PA participation and healthy eating behaviors in college women. Self-reported PA, global self-worth, knowledge self-worth, self-efficacy, social support, eating behaviors (ie, fruit/veggie/junk food/sugar-sweetened beverage consumption), satisfaction, and Web site usage. Mean age of participants was 18.11 (SD = 0.32) years. Time × Intervention effects were observed for PA minutes per week (partial η² = 0.34), knowledge self-worth (partial η² = 0.02), and daily sugar-sweetened beverage consumption (partial η² = 0.17) (P < .05), with the intervention group reporting greater increases in PA and knowledge self-worth and greater decreases in sugar-sweetened beverage consumption. A magazine-based discussion group may provide a promising platform to improve health behaviors in female college freshmen.
NASA Astrophysics Data System (ADS)
Chen, Dan; Guo, Lin-yuan; Wang, Chen-hao; Ke, Xi-zheng
2017-07-01
Equalization can compensate for channel distortion caused by multipath effects and effectively improve the convergence of the modulation constellation diagram in an optical wireless system. In this paper, the subspace blind equalization algorithm is used to preprocess the M-ary phase shift keying (MPSK) subcarrier modulation signal at the receiver. Mountain clustering is adopted to obtain the cluster centers of the MPSK modulation constellation diagram, and the modulation order is automatically identified by a k-nearest neighbor (KNN) classifier. The experiment was carried out under four different weather conditions. Experimental results show that the convergence of the constellation diagram is improved effectively after applying the subspace blind equalization algorithm, which means that the accuracy of modulation recognition is increased. The correct recognition rate for 16PSK can be up to 85% in any of the weather conditions mentioned in the paper. Meanwhile, the correct recognition rate is highest in cloudy conditions and lowest in heavy rain.
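The modulation-order identification step is a plain k-nearest-neighbor majority vote. A generic KNN sketch assuming Euclidean distance on feature vectors (in the paper the features come from mountain-clustering centers of the constellation; the names here are ours):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

For example, with two well-separated training clusters, a query point near one cluster is assigned that cluster's label.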
Visual tracking based on the sparse representation of the PCA subspace
NASA Astrophysics Data System (ADS)
Chen, Dian-bing; Zhu, Ming; Wang, Hui-li
2017-09-01
We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization term to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. Then we implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiment, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
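As a rough sketch of the subspace-plus-L2-regularization ingredient, the coefficients of a target in a PCA subspace under a ridge penalty have a closed form. This is a simplified stand-in, not the authors' full collaborative sparse model (which also carries an L1 term on the residual); all names are illustrative:

```python
import numpy as np

def pca_subspace_coeffs(U, mu, y, lam=0.1):
    """Solve argmin_c ||y - mu - U c||^2 + lam * ||c||^2, i.e. represent
    the target y in a PCA subspace (basis U, mean mu) with an
    L2-regularized coefficient vector c."""
    d = U.shape[1]
    c = np.linalg.solve(U.T @ U + lam * np.eye(d), U.T @ (y - mu))
    residual = y - mu - U @ c
    return c, residual
```

With lam = 0 and an orthonormal basis, this reduces to plain projection onto the PCA subspace; the residual then carries whatever the subspace cannot explain (occlusion, noise), which is what the L1 term in the paper's model targets.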
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace. The principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared with some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementing a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.
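Oja's rule is the classic local Hebb-type update in this family: each synaptic change uses only the input, the output, and the synapse's own weight. A minimal sketch for extracting the first principal direction (this is standard Oja learning, not the paper's MH/MHO rules, whose exact forms are not given in the abstract):

```python
import numpy as np

def oja_first_pc(X, lr=0.001, epochs=20, seed=0):
    """Learn the first principal direction of X with Oja's rule:
    w <- w + lr * y * (x - y * w), where y = w . x.
    Each update uses only locally available quantities."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)   # Hebbian growth + Oja's decay
    return w / np.linalg.norm(w)
```

The decay term -lr * y**2 * w keeps the weight vector bounded, so w converges to a unit vector along the dominant eigenvector of the data covariance.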
Glove-based approach to online signature verification.
Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A
2008-06-01
Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel online signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique is based on the Singular Value Decomposition to find the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-dimensional principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove is presented as an effective high-bandwidth data entry device for signature verification. This SVD-based signature verification technique is tested and its performance is shown to be able to recognize forged signatures with a false acceptance rate of less than 1.2%.
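The two core operations, extracting an r-dimensional principal subspace via SVD and comparing subspaces through their principal angles, can be sketched as follows (a generic illustration, not the paper's exact glove-data processing):

```python
import numpy as np

def principal_subspace(A, r):
    """Orthonormal basis for the r-dimensional principal subspace of A
    (the span of the r leading left singular vectors)."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

def subspace_angles(U1, U2):
    """Principal angles (radians) between two subspaces given by
    orthonormal column bases U1 and U2."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))
```

A genuine signature should yield a subspace close (small principal angles) to the enrolled one, while a forgery yields larger angles; thresholding the angles gives the accept/reject decision.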
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a from BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
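The constrained one-dimensional Wang-Landau walk can be illustrated on a toy model where the exact density of states is known: N independent spins with the "order parameter" E equal to the number of up spins, so that g(E) = C(N, E). This sketch shows the standard WL ingredients (acceptance ratio g(E)/g(E_new), histogram flatness check, halving of the modification factor); it is not the authors' spin-crossover model:

```python
import numpy as np

def wang_landau_coins(N=10, lnf_final=1e-4, flatness=0.8, seed=1):
    """Wang-Landau estimate of the density of states g(E) for a toy
    model: N independent spins with E = number of up spins.
    The exact answer is the binomial coefficient C(N, E)."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(0, 2, N)
    E = int(spins.sum())
    lng = np.zeros(N + 1)          # running estimate of ln g(E)
    hist = np.zeros(N + 1)
    lnf = 1.0                      # modification factor, halved per stage
    while lnf > lnf_final:
        for _ in range(10000):
            i = rng.integers(N)
            E_new = E + (1 - 2 * int(spins[i]))   # effect of flipping spin i
            # Accept with probability min(1, g(E) / g(E_new)).
            if np.log(rng.random()) < lng[E] - lng[E_new]:
                spins[i] ^= 1
                E = E_new
            lng[E] += lnf
            hist[E] += 1
        # Histogram flat enough -> halve the modification factor.
        if hist.min() > flatness * hist.mean():
            hist[:] = 0
            lnf *= 0.5
    return np.exp(lng - lng[0])    # normalize so g(0) = 1
```

Once the modification factor is small, the returned g(E) closely tracks the binomial coefficients; in the paper's setting, one such constrained walk is run per fixed value of the macroscopic order parameters.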
Hadash, Yuval; Plonsker, Reut; Vago, David R; Bernstein, Amit
2016-07-01
We propose that Experiential Self-Referential Processing (ESRP), the cognitive association of present-moment subjective experience (e.g., sensations, emotions, thoughts) with the self, underlies various forms of maladaptation. We theorize that mindfulness contributes to mental health by engendering Experiential Selfless Processing (ESLP), the processing of present-moment subjective experience without self-referentiality. To help advance understanding of these processes, we aimed to develop an implicit, behavioral measure of ESRP and ESLP of fear, to experimentally validate this measure, and to test the relations between ESRP and ESLP of fear, mindfulness, and key psychobehavioral processes underlying (mal)adaptation. One hundred thirty-eight adults were randomized to 1 of 3 conditions: control, meta-awareness with identification, or meta-awareness with disidentification. We then measured ESRP and ESLP of fear by experimentally eliciting a subjective experience of fear while concurrently measuring each participant's cognitive association between the self and fear by means of a Single Category Implicit Association Test; we refer to this measurement as the Single Experience & Self Implicit Association Test (SES-IAT). We found preliminary experimental and correlational evidence suggesting that the fear SES-IAT measures ESLP of fear and 2 forms of ESRP: identification with fear and negative self-referential evaluation of fear. Furthermore, we found evidence that ESRP and ESLP are associated with meta-awareness (a core process of mindfulness), as well as with key psychobehavioral processes underlying (mal)adaptation. These findings indicate that the cognitive association of self with experience (i.e., ESRP) may be an important substrate of the sense of self, and an important determinant of mental health. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Lin, Pao-Hwa; Intille, Stephen; Bennett, Gary; Bosworth, Hayden B; Corsino, Leonor; Voils, Corrine; Grambow, Steven; Lazenka, Tony; Batch, Bryan C; Tyson, Crystal; Svetkey, Laura P
2015-12-01
The obesity epidemic has spread to young adults, and obesity is a significant risk factor for cardiovascular disease. The prominence and increasing functionality of mobile phones may provide an opportunity to deliver longitudinal and scalable weight management interventions in young adults. The aim of this article is to describe the design and development of the intervention tested in the Cell Phone Intervention for You study and to highlight the importance of adaptive intervention design that made it possible. The Cell Phone Intervention for You study was a National Heart, Lung, and Blood Institute-sponsored, controlled, 24-month randomized clinical trial comparing two active interventions to a usual-care control group. Participants were 365 overweight or obese (body mass index≥25 kg/m2) young adults. Both active interventions were designed based on social cognitive theory and incorporated techniques for behavioral self-management and motivational enhancement. Initial intervention development occurred during a 1-year formative phase utilizing focus groups and iterative, participatory design. During the intervention testing, adaptive intervention design, where an intervention is updated or extended throughout a trial while assuring the delivery of exactly the same intervention to each cohort, was employed. The adaptive intervention design strategy distributed technical work and allowed introduction of novel components in phases intended to help promote and sustain participant engagement. Adaptive intervention design was made possible by exploiting the mobile phone's remote data capabilities so that adoption of particular application components could be continuously monitored and components subsequently added or updated remotely. 
The cell phone intervention was delivered almost entirely via cell phone and was always present, proactive, and interactive, providing passive and active reminders, frequent opportunities for knowledge dissemination, and multiple tools for self-tracking and receiving tailored feedback. The intervention changed over 2 years to promote and sustain engagement. The personal coaching intervention, alternatively, consisted primarily of personal coaching with trained coaches based on a proven intervention, enhanced with a mobile application, but where all interactions with the technology were participant-initiated. The complexity and length of the technology-based randomized clinical trial created challenges in engagement and technology adaptation, which were generally discovered using novel remote monitoring technology and addressed using the adaptive intervention design. Investigators should plan to develop tools and procedures that explicitly support continuous remote monitoring of interventions to support adaptive intervention design in long-term, technology-based studies, as well as developing the interventions themselves. © The Author(s) 2015.
Effects of Self-Image on Anxiety, Judgement Bias and Emotion Regulation in Social Anxiety Disorder.
Lee, Hannah; Ahn, Jung-Kwang; Kwon, Jung-Hye
2018-04-25
Research to date has focused on the detrimental effects of negative self-images for individuals with social anxiety disorder (SAD), but the benefits of positive self-images have been neglected. The present study examined the effect of holding a positive versus negative self-image in mind on anxiety, judgement bias and emotion regulation (ER) in individuals with SAD. Forty-two individuals who met the diagnostic criteria for SAD were randomly assigned to either a positive or a negative self-image group. Participants were assessed twice with a week's interval in between using the Reactivity and Regulation Situation Task, which measures social anxiety, discomfort, judgement bias and ER, prior to and after the inducement of a positive or negative self-image. Individuals in the positive self-image group reported less social anxiety, discomfort and distress from social cost when compared with their pre-induction state. They also used more adaptive ER strategies and experienced less anxiety and discomfort after using ER. In contrast, individuals in the negative self-image group showed no significant differences in anxiety, judgement bias or ER strategies before and after the induction. This study highlights the beneficial effects of positive self-images on social anxiety and ER.
Torkaman, Mahya; Miri, Sakineh; Farokhzadian, Jamileh
2018-02-12
Background Reduced adaptation and self-esteem can be consequences of opium addiction and imprisonment. Drug use causes inappropriate behaviors in women that are quite different from those in men. Social deviations, prostitution, high-risk sexual behaviors, abortion, divorce and imprisonment followed by loss of self-esteem are consequences of women's addiction. The present study was conducted to assess the relationship between adaptation and self-esteem in addicted female prisoners. Methods In this descriptive analytical study, 130 addicted female prisoners were selected from a prison in the southeast of Iran using census sampling. The data were collected with a demographic questionnaire, the Rosenberg Self-Esteem Scale and the Bell Adjustment Inventory (BAI). Results According to the results, the women's adaptation fell into the 'very unsatisfactory' range. The highest mean was in the emotional dimension, while the lowest was in the health dimension. In total, 96.4% of the participating women had low adaptation. The mean total self-esteem fell into the low range; in fact, 84.6% of the women had low self-esteem. The results showed no significant relationship between overall adaptation and self-esteem in these women; however, self-esteem was significantly and inversely related to health and emotional adaptation. Conclusion The findings showed that the majority of the women had unsatisfactory adaptation as well as poor self-esteem. No significant relationship was observed between adaptation and self-esteem in the addicted female prisoners.
Hyperspectral image compressing using wavelet-based method
NASA Astrophysics Data System (ADS)
Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng
2017-10-01
Hyperspectral imaging sensors can acquire images in hundreds of contiguous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which demands transmission, processing and storage resources for both airborne and spaceborne platforms. Because of this high data volume, the exploration of compression strategies has received much attention in recent years, and compression of hyperspectral data cubes is an effective solution. Lossless compression of hyperspectral data usually yields a low compression ratio, which may not meet the available resources; lossy compression, on the other hand, can achieve the desired ratio but may significantly degrade the object-identification performance of the data. Moreover, most hyperspectral compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information in the acquired hyperspectral data cube. The proposed method consists of three main steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the cross-correlation matrix of the hyperspectral bands. Then a wavelet-based algorithm is applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
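The three-step pipeline described above (correlation-guided grouping of spectral bands, a spectral wavelet transform, then PCA on the coefficients) can be sketched as follows. This is a hedged toy illustration, not the authors' implementation: the function name `compress_bands`, the 0.95 correlation threshold, and the single-level Haar step are assumptions made for the example.

```python
import numpy as np

def compress_bands(cube, n_components=4, corr_threshold=0.95):
    """Toy 3-step sketch: (1) group spectrally adjacent, highly correlated
    bands, (2) one-level Haar transform along the spectral axis of each
    group (approximation coefficients only), (3) PCA on the coefficients."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)      # pixels x bands

    # Step 1: spectral cross-correlation; greedy grouping of adjacent bands.
    corr = np.corrcoef(X, rowvar=False)
    groups, current = [], [0]
    for k in range(1, bands):
        if corr[k, k - 1] > corr_threshold:
            current.append(k)
        else:
            groups.append(current)
            current = [k]
    groups.append(current)

    # Step 2: Haar approximation coefficients per group (lossy reduction).
    coeffs = []
    for g in groups:
        sub = X[:, g]
        if sub.shape[1] % 2:                       # pad odd-sized groups
            sub = np.hstack([sub, sub[:, -1:]])
        coeffs.append((sub[:, ::2] + sub[:, 1::2]) / np.sqrt(2))
    C = np.hstack(coeffs)

    # Step 3: PCA of the wavelet coefficients via SVD.
    C0 = C - C.mean(axis=0)
    _, _, Vt = np.linalg.svd(C0, full_matrices=False)
    return C0 @ Vt[:n_components].T                # pixels x components

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 16))                      # tiny synthetic data cube
Z = compress_bands(cube)
print(Z.shape)                                     # compressed representation
```

On real data, the grouping step would typically produce a few large subspaces of correlated bands rather than singletons, and further wavelet levels could be kept for higher fidelity.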
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei
2017-06-01
A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) is presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to account for the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, PNMF-PSCC adopts the Poisson distribution as the prior for the spectral signals, to better reflect the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator, and is solved by iteratively optimizing two sub-problems within the Alternating Direction Method of Multipliers (ADMM) framework, using the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root Mean Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.
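Poisson-likelihood NMF coincides, in its simplest form, with KL-divergence NMF and its classical multiplicative updates; a minimal sketch of that core (without the paper's PSCC constraint, ADMM splitting, or FURTHESTSUM initialization) might look like the following. The function name `poisson_nmf` and all parameter values are illustrative assumptions.

```python
import numpy as np

def poisson_nmf(V, k, iters=300, seed=0):
    """Poisson-likelihood NMF, V ~ Poisson(W @ H), fitted with the
    classical KL-divergence multiplicative updates. In endmember
    extraction, columns of W act as endmember spectra and H as
    per-pixel abundances (toy version: no PSCC constraint, no ADMM)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

# Exact rank-2 nonnegative data should be recovered closely.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = poisson_nmf(V, k=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```

The multiplicative updates keep W and H nonnegative by construction, which is why they are a common starting point for Bayesian NMF variants with Poisson observation models.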
Mohsenipouya, Hossein; Majlessi, Fereshteh; Ghafari, Rahman
2018-01-01
Background and aim Post-operative self-care behaviors have positive effects on adaptability and reduce disability in cardiac surgery patients. The present study was carried out to determine the effect of education based on a health promotion model on patients' self-care behaviors after coronary artery bypass surgery. Methods This quasi-experimental study was carried out in Mazandaran (Iran) in 2016. Two hundred and twenty postoperative patients were selected by simple random sampling and divided into control and experimental groups (110 patients each) using block (AABB) randomization. Self-designed self-care questionnaires based on a health promotion model were administered to the patients once before and three months after the intervention. The data were analyzed with SPSS-22 using chi-square tests, Mann-Whitney tests and ANCOVA at a significance level of p<0.05. Results The average score of total self-care behaviors in cardiac surgery patients did not differ significantly between the two groups before education (p=0.065), but after training a significant difference was observed (p<0.001). Repeated-measures ANOVA indicated that, following the intervention, a significant difference in the improvement of self-care behaviors remained between the two groups after excluding the effect of the pre-test and controlling for demographic and health-related characteristics. Conclusions Developing and implementing a training program based on the health promotion model can enhance self-care behaviors and reduce the number of admissions in patients after cardiac surgery. PMID:29588828
Ruminative and mindful self-focused attention in borderline personality disorder.
Sauer, Shannon E; Baer, Ruth A
2012-10-01
The current study investigated the short-term effects of mindful and ruminative forms of self-focused attention on a behavioral measure of distress tolerance in individuals with borderline personality disorder (BPD) who had completed an angry mood induction. Participants included 40 individuals who met criteria for BPD and were currently involved in mental health treatment. Each completed an individual 1-hr session. Following an angry mood induction, each participant was randomly assigned to engage in ruminative or mindful self-focus for several minutes. All participants then completed the computerized Paced Auditory Serial Addition Test (PASAT-C), a behavioral measure of willingness to tolerate distress in the service of goal-directed behavior. The mindfulness group persisted significantly longer than the rumination group on the distress tolerance task and reported significantly lower levels of anger following the self-focus period. Results are consistent with previous studies in suggesting that distinct forms of self-focused attention have distinct outcomes and that, for people with BPD, mindful self-observation is an adaptive alternative to rumination when feeling angry. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
The Strong African American Families program: longitudinal pathways to sexual risk reduction.
Murry, Velma McBride; Berkel, Cady; Brody, Gene H; Gerrard, Meg; Gibbons, Frederick X
2007-10-01
To identify the mechanisms by which intervention-induced increases in adaptive parenting were associated with a reduction in sexual risk behavior among rural African American adolescents across a 29-month period. African American families (N = 284) with 11-year-old children in nine rural Georgia counties participated in the 7-week Strong African American Families (SAAF) project. Counties were randomly assigned to intervention or control conditions. The program was evaluated via pretest, posttest, and long-term follow-up interview data collected in the families' homes. The current paper tests a hypothetical model of program efficacy, positing that intervention-induced changes in parenting behaviors would enhance youth self-pride, which in turn would forecast changes in sexual behaviors measured 29 months after pretest. Compared with controls, parents who participated in SAAF reported increased adaptive universal and racially specific parenting. Furthermore, intervention-induced changes in these parenting behaviors were associated indirectly with sexual risk behavior through adolescent self-pride, peer orientation, and sexual intent. Culturally competent programs, developed through empirical and theoretical research within affected communities, can foster adaptive universal and racially specific parenting, which can have a long-term effect on adolescent sexual risk behavior. Effective strategies for designing and implementing culturally competent programs are discussed.
NASA Astrophysics Data System (ADS)
Yaremchuk, Max; Martin, Paul; Beattie, Christopher
2017-09-01
Development and maintenance of the linearized and adjoint code for advanced circulation models is a challenging issue, requiring a significant proportion of total effort in operational data assimilation (DA). The ensemble-based DA techniques provide a derivative-free alternative, which appears to be competitive with variational methods in many practical applications. This article proposes a hybrid scheme for generating the search subspaces in the adjoint-free 4-dimensional DA method (a4dVar) that does not use a predefined ensemble. The method resembles 4dVar in that the optimal solution is strongly constrained by model dynamics and search directions are supplied iteratively using information from the current and previous model trajectories generated in the process of optimization. In contrast to 4dVar, which produces a single search direction from exact gradient information, a4dVar employs an ensemble of directions to form a subspace in order to proceed. In the earlier versions of a4dVar, search subspaces were built using the leading EOFs of either the model trajectory or the projections of the model-data misfits onto the range of the background error covariance (BEC) matrix at the current iteration. In the present study, we blend both approaches and explore a hybrid scheme of ensemble generation in order to improve the performance and flexibility of the algorithm. In addition, we introduce balance constraints into the BEC structure and periodically augment the search ensemble with BEC eigenvectors to avoid repeating minimization over already explored subspaces. Performance of the proposed hybrid a4dVar (ha4dVar) method is compared with that of standard 4dVar in a realistic regional configuration assimilating real data into the Navy Coastal Ocean Model (NCOM). It is shown that the ha4dVar converges faster than a4dVar and can potentially be competitive with 4dVar both in terms of the required computational time and the forecast skill.
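The basic building block of such schemes, extracting leading EOFs from an ensemble of trajectory anomalies and using their span as the search subspace, can be sketched in a few lines. This is a simplified illustration under assumed array shapes, not the NCOM/ha4dVar code; `leading_eofs` and `project_increment` are hypothetical names.

```python
import numpy as np

def leading_eofs(trajectories, n_modes):
    """Leading EOFs of an ensemble of model trajectories: the right
    singular vectors of the anomaly matrix.
    trajectories: (n_members, state_dim)."""
    anomalies = trajectories - trajectories.mean(axis=0)
    _, sing, Vt = np.linalg.svd(anomalies, full_matrices=False)
    return Vt[:n_modes], sing[:n_modes]

def project_increment(increment, eofs):
    """Restrict a state increment to the span of the EOFs (the search
    subspace at one iteration), then lift back to state space."""
    return eofs.T @ (eofs @ increment)

rng = np.random.default_rng(0)
ensemble = rng.standard_normal((10, 50))           # 10 members, 50-dim state
eofs, sv = leading_eofs(ensemble, n_modes=3)
x = rng.standard_normal(50)
xs = project_increment(x, eofs)
# Projection is idempotent: xs already lies in the search subspace.
print(np.allclose(project_increment(xs, eofs), xs))  # True
```

In the hybrid scheme described above, such EOF-derived directions would be blended with misfit projections and periodically augmented with BEC eigenvectors; here only the projection mechanics are shown.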
Symons Downs, Danielle; Savage, Jennifer S; Rivera, Daniel E; Smyth, Joshua M; Rolls, Barbara J; Hohman, Emily E; McNitt, Katherine M; Kunselman, Allen R; Stetter, Christy; Pauley, Abigail M; Leonard, Krista S; Guo, Penghong
2018-06-08
High gestational weight gain is a major public health concern as it independently predicts adverse maternal and infant outcomes. Past interventions have had only limited success in effectively managing pregnancy weight gain, especially among women with overweight and obesity. Well-designed interventions are needed that take an individualized approach and target unique barriers to promote healthy weight gain. The primary aim of the study is to describe the study protocol for Healthy Mom Zone, an individually tailored, adaptive intervention for managing weight in pregnant women with overweight and obesity. The Healthy Mom Zone Intervention, based on theories of planned behavior and self-regulation and a model of energy balance, includes components (eg, education, self-monitoring, physical activity/healthy eating behaviors) that are adapted over the intervention (ie, increase in intensity) to better regulate weight gain. Decision rules inform when to adapt the intervention. In this randomized controlled trial, women are randomized to the intervention or standard care control group. The intervention is delivered from approximately 8-36 weeks gestation and includes step-ups in dosages (ie, Step-up 1 = education + physical activity + healthy eating active learning [cooking/recipes]; Step-up 2 = Step-up 1 + portion size, physical activity; Step-up 3 = Step-up 1 + 2 + grocery store feedback, physical activity); 5 maximum adaptations. Study measures are obtained at pre- and postintervention as well as daily (eg, weight), weekly (eg, energy intake/expenditure), and monthly (eg, psychological) over the study period. Analyses will include linear mixed-effects models, generalized estimating equations, and dynamical modeling to understand between-group and within-individual effects of the intervention on weight gain. Recruitment of 31 pregnant women with overweight and obesity has occurred from January 2016 through July 2017. 
Baseline data have been collected for all participants. To date, 24 participants have completed the intervention and postintervention follow-up assessments, 3 are currently in progress, 1 dropped out, and 3 women had early miscarriages and are no longer active in the study. Of the 24 participants, 13 women have completed the intervention to date, of which 1 (8%, 1/13) received only the baseline intervention, 3 (23%, 3/13) received baseline + step-up 1, 6 (46%, 6/13) received baseline + step-up 1 + step-up 2, and 3 (23%, 3/13) received baseline + step-up 1 + step-up 2 + step-up 3. Data analysis is still ongoing through spring 2018. This is one of the first intervention studies to use an individually tailored, adaptive design to manage weight gain in pregnancy. Results from this study will be useful in designing a larger randomized trial to examine efficacy of this intervention and developing strategies for clinical application. RR1-10.2196/9220. ©Danielle Symons Downs, Jennifer S Savage, Daniel E Rivera, Joshua M Smyth, Barbara J Rolls, Emily E Hohman, Katherine M McNitt, Allen R Kunselman, Christy Stetter, Abigail M Pauley, Krista S Leonard, Penghong Guo. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 08.06.2018.
An Investigation of the Reliability and Self-Regulatory Correlates of Conflict Adaptation.
Feldman, Julia L; Freitas, Antonio L
2016-07-01
The study of the conflict-adaptation effect, in which encountering information-processing conflict attenuates the disruptive influence of information-processing conflicts encountered subsequently, is a burgeoning area of research. The present study investigated associations among performance measures on a Stroop-trajectory task (measuring Stroop interference and conflict adaptation), on a Wisconsin Card Sorting Task (WCST; measuring cognitive flexibility), and on self-reported measures of self-regulation (including impulsivity and tenacity). We found significant reliability of the conflict-adaptation effects across a two-week period, for response-time and accuracy. Variability in conflict adaptation was not associated significantly with any indicators of performance on the WCST or with most of the self-reported self-regulation measures. There was substantial covariance between Stroop interference for accuracy and conflict adaptation for accuracy. The lack of evidence of covariance across distinct aspects of cognitive control (conflict adaptation, WCST performance, self-reported self-control) may reflect the operation of relatively independent component processes.
Zeid, Elias Abou; Sereshkeh, Alborz Rezazadeh; Chau, Tom
2016-12-01
In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. 
Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.
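A minimal sketch of such a spatio-temporal filtering pipeline (band-pass filtering, a Fisher-criterion spatial filter, then temporal window averaging) on synthetic data might look like this. The FFT brick-wall filter, the regularization constant, and all names are assumptions for illustration, not the authors' PSTF implementation.

```python
import numpy as np

def bandpass_fft(x, lo, hi, fs):
    """Crude FFT brick-wall band-pass along the last (time) axis."""
    F = np.fft.rfft(x, axis=-1)
    f = np.fft.rfftfreq(x.shape[-1], 1.0 / fs)
    F[..., (f < lo) | (f > hi)] = 0
    return np.fft.irfft(F, n=x.shape[-1], axis=-1)

def fisher_spatial_filter(X, y, reg=1e-6):
    """Spatial filter w maximizing between-class over within-class scatter
    of per-trial channel means. X: (trials, channels, time); y in {0, 1}."""
    feats = X.mean(axis=2)                         # (trials, channels)
    mu0 = feats[y == 0].mean(axis=0)
    mu1 = feats[y == 1].mean(axis=0)
    Sb = np.outer(mu1 - mu0, mu1 - mu0)
    Sw = (np.cov(feats[y == 0], rowvar=False)
          + np.cov(feats[y == 1], rowvar=False)
          + reg * np.eye(feats.shape[1]))
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

def temporal_window_features(signal, n_windows):
    """Average consecutive windows (feature dimension reduction)."""
    return np.array([seg.mean() for seg in np.array_split(signal, n_windows)])

# Synthetic "readiness-potential" data: class 1 carries a slow negative
# drift on channel 0; the pipeline should weight that channel most.
rng = np.random.default_rng(0)
fs, T, n_trials = 128, 256, 40
X = rng.standard_normal((n_trials, 4, T))
y = np.arange(n_trials) % 2
X[y == 1, 0, :] += -np.linspace(0.0, 2.0, T)
Xf = bandpass_fft(X, 0.0, 8.0, fs)                 # low-pass, keep slow RP
w = fisher_spatial_filter(Xf, y)
feats = np.array([temporal_window_features(w @ trial, 5) for trial in Xf])
print(feats.shape, np.argmax(np.abs(w)))
```

In the actual method, the band edges, spatial-filter parameters, and window count would be chosen per participant by cross-validation rather than fixed as here.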
Ferguson, Robert J; Sigmon, Sandra T; Pritchard, Andrew J; LaBrie, Sharon L; Goetze, Rachel E; Fink, Christine M; Garrett, A Merrill
2016-06-01
Long-term chemotherapy-related cognitive dysfunction (CRCD) affects a large number of cancer survivors. To the authors' knowledge, to date there is no established treatment for this survivorship problem. The authors herein report results of a small randomized controlled trial of a cognitive behavioral therapy (CBT), Memory and Attention Adaptation Training (MAAT), compared with an attention control condition. Both treatments were delivered over a videoconference device. A total of 47 survivors of female breast cancer who reported CRCD were randomized to MAAT or supportive therapy and were assessed at baseline, after treatment, and at 2 months of follow-up. Participants completed self-report measures of cognitive symptoms and quality of life and a brief telephone-based neuropsychological assessment. MAAT participants made gains in perceived (self-reported) cognitive impairments (P = .02), and neuropsychological processing speed (P = .03) compared with supportive therapy controls. A large MAAT effect size was observed at the 2-month follow-up with regard to anxiety concerning cognitive problems (Cohen's d for standard differences in effect sizes, 0.90) with medium effects noted in general function, fatigue, and anxiety. Survivors rated MAAT and videoconference delivery with high satisfaction. MAAT may be an efficacious psychological treatment of CRCD that can be delivered through videoconference technology. This research is important because it helps to identify a treatment option for survivors that also may improve access to survivorship services. Cancer 2016;122:1782-91. © 2016 American Cancer Society.
Lee, So-Mi; Kim, Jong-Hee
2016-06-01
This study aimed to verify the effectiveness of sleep education by identifying differences in high school students' adaptation to school and self-resilience before and after a period of sleep education. The conclusions of this study are presented below. First, there were differences in adaptation to school and self-resilience before and after sleep education: after the education, adaptation to the school environment and to school friends was higher, as were emotion control, personal relations and optimism, which are subvariables of self-resilience. Second, there were differences by grade. The freshmen's adaptation to school friends and to school life, subvariables of adaptation to school, increased after sleep education, and the sophomores' adaptation to the school environment increased as well. Among the subvariables of self-resilience, the freshmen's emotion control, vitality and personal relations were higher after sleep education, and the sophomores' personal relations also increased.
Stochastic Models of Emerging Infectious Disease Transmission on Adaptive Random Networks
Pipatsart, Navavat; Triampo, Wannapong
2017-01-01
We presented adaptive random network models to describe human behavioral change during epidemics and performed stochastic simulations of SIR (susceptible-infectious-recovered) epidemic models on adaptive random networks. The interplay between infectious disease dynamics and network adaptation dynamics was investigated with regard to disease transmission and the cumulative number of infection cases. We found that the cumulative number of cases decreased with increasing network adaptation probability but increased with increasing disease transmission probability. The topological changes of the adaptive random networks were able to reduce the cumulative number of infections and also to delay the epidemic peak. Our results also suggest the existence of a critical value for the ratio of the disease transmission and adaptation probabilities below which an epidemic cannot occur. PMID:29075314
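A stripped-down version of such a stochastic SIR simulation on an adaptive network, in which susceptible nodes rewire away from infected neighbors with some adaptation probability, can be sketched as follows. The parameter values, the rewiring rule, and the function name are illustrative assumptions rather than the paper's exact model.

```python
import random

def sir_adaptive(n=200, mean_degree=4, p_trans=0.05, p_recover=0.1,
                 p_rewire=0.3, i0=5, steps=200, seed=3):
    """Discrete-time stochastic SIR on an adaptive random network: along
    each S-I link the disease transmits w.p. p_trans; otherwise the
    susceptible end may rewire away from its infected neighbor w.p.
    p_rewire (the behavioral adaptation). Returns cumulative cases."""
    rng = random.Random(seed)
    nodes = list(range(n))
    adj = {v: set() for v in nodes}
    while sum(len(s) for s in adj.values()) < n * mean_degree:
        a, b = rng.sample(nodes, 2)                # random edges
        adj[a].add(b)
        adj[b].add(a)
    state = dict.fromkeys(nodes, 'S')
    for v in rng.sample(nodes, i0):
        state[v] = 'I'
    cumulative = i0
    for _ in range(steps):
        new_inf, rewires = [], []
        for v in nodes:
            if state[v] != 'I':
                continue
            for u in list(adj[v]):
                if state[u] != 'S':
                    continue
                if rng.random() < p_trans:
                    new_inf.append(u)
                elif rng.random() < p_rewire:
                    rewires.append((u, v))         # u will break the S-I link
        for u, v in rewires:
            adj[u].discard(v)
            adj[v].discard(u)
            w = rng.choice(nodes)
            if w != u and state[w] != 'I':         # reconnect to non-infected
                adj[u].add(w)
                adj[w].add(u)
        for u in new_inf:
            if state[u] == 'S':
                state[u] = 'I'
                cumulative += 1
        for v in nodes:
            if state[v] == 'I' and rng.random() < p_recover:
                state[v] = 'R'
    return cumulative

strong_adaptation = sir_adaptive(p_rewire=0.9)
no_adaptation = sir_adaptive(p_rewire=0.0)
print(strong_adaptation, no_adaptation)
```

Averaging such runs over many seeds would reproduce the qualitative finding above: higher rewiring probability tends to lower the cumulative case count and delay the peak.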
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1994-01-01
This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EPs) and emphasizes an MG algorithm for a nonlinear Schrodinger EP. The algorithm overcomes the difficulties typical of such problems by combining the following techniques: an MG projection coupled with backrotations, for separation of solutions and treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques, for treatment of the nonlinearity; and an MG simultaneous treatment of the eigenvectors together with the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few, or to one MG simultaneous iteration, and keep the solutions in the right neighborhood, where the algorithm converges fast.
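The core idea of treating a block of eigenvectors simultaneously, so that clusters of close or equal eigenvalues stay cleanly separated, can be illustrated on a single grid (not a multigrid hierarchy) with Rayleigh-Ritz subspace iteration. This is a generic numerical-linear-algebra sketch, not the paper's MG algorithm.

```python
import numpy as np

def subspace_iteration(A, k, iters=100, seed=0):
    """Rayleigh-Ritz subspace iteration: a whole block of eigenvectors is
    iterated simultaneously, which keeps clusters of close or equal
    eigenvalues separated inside the block."""
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((A.shape[0], k)))[0]
    for _ in range(iters):
        V = np.linalg.qr(A @ V)[0]                 # power step + re-orthonormalize
        B = V.T @ A @ V                            # Rayleigh-Ritz projection
        evals, W = np.linalg.eigh(B)
        V = V @ W                                  # rotate to Ritz vectors
    return evals, V

# Symmetric matrix with a cluster of close eigenvalues {2.0, 2.001}.
D = np.diag([0.5, 1.0, 2.0, 2.001, 5.0])
rng = np.random.default_rng(1)
Q = np.linalg.qr(rng.standard_normal((5, 5)))[0]
A = Q @ D @ Q.T
evals, V = subspace_iteration(A, k=3)
print(np.round(evals, 3))                          # three largest eigenvalues
```

Iterating the cluster {2.0, 2.001} vector by vector would converge very slowly; the block treatment resolves it in one pass, which is the role the MG projection with backrotations plays in the full algorithm.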
NASA Astrophysics Data System (ADS)
Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel
2010-12-01
A double-atom partitioning of the molecular one-electron density matrix is used to describe atoms and bonds. All calculations are performed in Hilbert space. The concept of atomic weight functions (familiar from Hirshfeld analysis of the electron density) is extended to atomic weight matrices. These are constructed to be orthogonal projection operators on atomic subspaces, which has significant advantages in the interpretation of the bond contributions. In close analogy to the iterative Hirshfeld procedure, self-consistency is built in at the level of atomic charges and occupancies. The method is applied to a test set of about 67 molecules, representing various types of chemical binding. A close correlation is observed between the atomic charges and the Hirshfeld-I atomic charges.
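The self-consistency loop of an iterative Hirshfeld-type partitioning can be illustrated with a toy one-dimensional density version. Note the paper works with atomic weight *matrices* in Hilbert space rather than density weight functions; the Gaussian pro-atoms and all parameters here are assumptions made only to show the loop.

```python
import numpy as np

def hirshfeld_charges(grid, rho, centers, zs, iters=50):
    """Toy 1-D iterative stockholder (Hirshfeld-type) partitioning:
    Gaussian pro-atom densities are rescaled by the current atomic
    populations until the populations are self-consistent."""
    dx = grid[1] - grid[0]
    pops = np.array(zs, dtype=float)               # initial populations
    for _ in range(iters):
        pro = np.array([p * np.exp(-(grid - c) ** 2)
                        for p, c in zip(pops, centers)])
        w = pro / (pro.sum(axis=0) + 1e-300)       # stockholder weights
        new_pops = (w * rho).sum(axis=1) * dx      # weighted density integrals
        if np.allclose(new_pops, pops, atol=1e-10):
            break
        pops = new_pops
    return np.asarray(zs, dtype=float) - pops      # charge = Z - population

grid = np.linspace(-6.0, 6.0, 2001)
# "Molecular" density: two unequal Gaussians centred on the two atoms.
rho = 1.4 * np.exp(-(grid + 1.0) ** 2) + 0.6 * np.exp(-(grid - 1.0) ** 2)
q = hirshfeld_charges(grid, rho, centers=[-1.0, 1.0], zs=[1.0, 1.0])
print(np.round(q, 3))                              # atomic charges
```

Because the stockholder weights sum to one at every grid point, the total electron count is conserved at each iteration, so the charges always sum to the total charge of the "molecule".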
Tachyon condensation due to domain-wall annihilation in Bose-Einstein condensates.
Takeuchi, Hiromitsu; Kasamatsu, Kenichi; Tsubota, Makoto; Nitta, Muneto
2012-12-14
We show theoretically that a domain-wall annihilation in two-component Bose-Einstein condensates causes tachyon condensation accompanied by spontaneous symmetry breaking in a two-dimensional subspace. Three-dimensional vortex formation from domain-wall annihilations is considered a kink formation in subspace. Numerical experiments reveal that the subspatial dynamics obey the dynamic scaling law of phase-ordering kinetics. This model is experimentally feasible and provides insights into how the extra dimensions influence subspatial phase transition in higher-dimensional space.
NASA Astrophysics Data System (ADS)
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of perturbed invariant subspaces (PISs) and study approximate generalized functional variable separation solutions for the nonlinear diffusion-convection equation with a weak source, via the approximate generalized conditional symmetries (AGCSs) related to the PISs. A complete classification of the perturbed equations that admit approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs of the resulting equations are explicitly constructed by way of examples.
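The unperturbed invariant-subspace idea that the PIS concept extends can be recalled with a standard textbook example (not taken from this paper): a nonlinear diffusion operator that preserves a low-dimensional polynomial subspace, so the PDE reduces to a system of ODEs.

```latex
% Textbook invariant-subspace example: F[u] = (u u_x)_x preserves
% W_3 = \mathrm{span}\{1, x, x^2\}.
\[
u = a + bx + cx^2 \;\Longrightarrow\;
F[u] = u\,u_{xx} + u_x^{\,2}
     = (2ac + b^2) + 6bc\,x + 6c^2 x^2 \in W_3 ,
\]
\[
u(x,t) = C_0(t) + C_1(t)\,x + C_2(t)\,x^2
\;\Longrightarrow\;
\dot C_0 = 2C_0 C_2 + C_1^2,\qquad
\dot C_1 = 6 C_1 C_2,\qquad
\dot C_2 = 6 C_2^2 .
\]
```

A perturbed invariant subspace relaxes the exact inclusion $F[W_n]\subseteq W_n$ to hold only up to the small parameter, which is what permits the approximate separable solutions classified in the paper.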
Haggerty, Kevin P; Barkan, Susan E; Skinner, Martie; Ben Packard, W; Cole, Janice J
2016-01-01
To test the feasibility, usability, and proximal outcomes of Connecting, an adaptation of a low-cost, self-directed, family-based substance use prevention program, Staying Connected with Your Teen, with foster families in a randomized, waitlist-control pilot study. Families (n = 60) fostering teens between 11 and 15 years of age were recruited into the study and randomly assigned to the self-administered program with telephone support from a family consultant (n = 32) or a waitlist control condition (n = 28). Overall satisfaction with the program was high, with 100% of parents reporting they would recommend the program to other caregivers and reporting being "very satisfied" or "satisfied" with the program. Program completion was good, with 62% of families completing all 91 specified tasks. Analyses of proximal outcomes revealed increased communication about sex and substance use (posttest 1 ORs = 1.97 and 2.03, respectively). Teens in the intervention vs. the waitlist condition reported lower family conflict (OR = 0.48) and more family rules related to monitoring (OR = 4.02) and media use (OR = 3.24). Caregivers in the waitlist group reported significant increases in the teen's positive involvement (partial eta squared = 17% increase) after receiving the intervention. Overall, program participation appeared to lead to stronger family management, better communication between teens and caregivers around monitoring and media use, teen participation in setting family rules, and decreased teen attitudes favorable to antisocial behavior. This small pilot study shows promising results for this adapted program.
Babatunde, Oyinlola T; Himburg, Susan P; Newman, Frederick L; Campa, Adriana; Dixon, Zisca
2011-01-01
To assess the effectiveness of an osteoporosis education program to improve calcium intake, knowledge, and self-efficacy in community-dwelling older Black adults. Randomized repeated measures experimental design. Churches and community-based organizations. Men and women (n = 110) 50 years old and older from 3 south Florida counties. Participants randomly assigned to either of 2 groups: Group 1 (experimental group) or Group 2 (wait-list control group). Group 1 participated in 6 weekly education program sessions immediately following baseline assessment, and Group 2 started the program following Group 1's program completion. A tested curriculum was adapted to meet the needs of the target population. Dietary calcium intake, osteoporosis knowledge, health beliefs, and self-efficacy. Descriptive and summary statistics, repeated measures analysis of variance, and regression analysis. Of the total participants, 84.6% completed the study (mean age = 70.2 years). Overall, an educational program developed with a theoretical background was associated with improvement in calcium intake, knowledge, and self-efficacy, with no effect on most health belief subscales. Assigned group was the major predictor of change in calcium intake. A theory-driven approach is valuable in improving behavior to promote bone health in this population. Health professionals should consider using more theory-driven approaches in intervention studies. Copyright © 2011 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure
Dharan, Smitha; Nair, Achuthsankar S
2009-01-01
Background Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, which has become one of the most popular measures used to search for biclusters. In this paper, we review the basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), its construction and local search phases, and propose a new method, a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP), to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. Results We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as against the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms both the basic GRASP algorithm and the Cheng and Church approach. Conclusion The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts. PMID:19208127
Application of Bred Vectors To Data Assimilation
NASA Astrophysics Data System (ADS)
Corazza, M.; Kalnay, E.; Patil, Dj
We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al, 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50xk matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of v(i).
We define the bred vector dimension as BVDIM = [Sum s(i)]^2 / Sum s(i)^2. For example, if four out of the five vectors lie along v(1) and one lies along v(2), the singular values are (2, 1, 0, 0, 0) and BVDIM = (2+1)^2/(2^2+1^2) = 1.8, less than 2 because one direction is more dominant than the other in representing the original data. The results (Patil et al, 2001) show that there are large regions where the bred vectors span a subspace of substantially lower dimension than that of the full space. These low dimensionality regions are dominant in the baroclinic extratropics, typically have a lifetime of 3-7 days, have a well-defined horizontal and vertical structure that spans most of the atmosphere, and tend to move eastward. New results with a large number of ensemble members confirm these results and indicate that the low dimensionality regions are quite robust, and depend only on the verification time (i.e., the underlying flow). Corazza et al (2001) have performed experiments with a data assimilation system based on a quasi-geostrophic model and simulated observations (Morss, 1999, Hamill et al, 2000). A 3D-variational data assimilation scheme for a quasi-geostrophic channel model is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. It is found that after 3-5 days the bred vectors develop well organized structures which are very similar for the two different norms considered in this paper (potential vorticity norm and streamfunction norm). The results show that the bred vectors do indeed represent well the characteristics of the data assimilation forecast errors, and that the subspace of bred vectors contains most of the forecast error, except in areas where the forecast errors are small.
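The BV-dimension can be computed directly from the singular values of the matrix of local bred vectors. A minimal NumPy sketch (the matrix shape and example values are illustrative assumptions):

```python
import numpy as np

def bv_dimension(local_bred_vectors):
    """BV-dimension of the subspace spanned by k local bred vectors.
    `local_bred_vectors` is an (n x k) matrix M whose columns are the
    local bred vectors; with singular values s(i) of M,
    BVDIM = (sum_i s(i))^2 / sum_i s(i)^2."""
    s = np.linalg.svd(local_bred_vectors, compute_uv=False)
    return s.sum() ** 2 / (s ** 2).sum()
```

For instance, a 50x5 matrix with four identical unit columns and one orthogonal unit column has singular values (2, 1, 0, 0, 0), reproducing the BVDIM = 1.8 example above.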
For example, the angle between the 6hr forecast error and the subspace spanned by 10 bred vectors is less than 10° over 90% of the domain, indicating a pattern correlation of more than 98.5% between the forecast error and its projection onto the bred vector subspace. The presence of low-dimensional regions in the perturbations of the basic flow has important implications for data assimilation. At any given time, there is a difference between the true atmospheric state and the model forecast. Assuming that model errors are not the dominant source of errors, in a region of low BV-dimensionality the difference between the true state and the forecast should lie substantially in the low dimensional unstable subspace of the few bred vectors that contribute most strongly to the low BV-dimension. This information should yield a substantial improvement in the forecast: the data assimilation algorithm should correct the model state by moving it closer to the observations along the unstable subspace, since this is where the true state most likely lies. Preliminary experiments have been conducted with the quasi-geostrophic data assimilation system testing whether it is possible to add "errors of the day" based on bred vectors to the standard (constant) 3D-Var background error covariance in order to capture these important errors. The results are extremely encouraging, indicating a significant reduction (about 40%) in the analysis errors at a very low computational cost. References: Corazza, M., E. Kalnay, DJ Patil, R. Morss, M Cai, I. Szunyogh, BR Hunt, E Ott and JA Yorke, 2001: Use of the breeding technique to estimate the structure of the analysis "errors of the day". Submitted to Nonlinear Processes in Geophysics. Hamill, T.M., Snyder, C., and Morss, R.E., 2000: A Comparison of Probabilistic Forecasts from Bred, Singular-Vector and Perturbed Observation Ensembles, Mon. Wea. Rev., 128, 1835-1851. Kalnay, E., and Z.
Toth, 1994: Removing growing errors in the analysis cycle. Preprints of the Tenth Conference on Numerical Weather Prediction, Amer. Meteor. Soc., 1994, 212-215. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. PhD thesis, Massachusetts Institute of Technology, 225pp. Patil, D. J. S., B. R. Hunt, E. Kalnay, J. A. Yorke, and E. Ott., 2001: Local Low Dimensionality of Atmospheric Dynamics. Phys. Rev. Lett., 86, 5878.
NASA Astrophysics Data System (ADS)
Parand, K.; Nikarya, M.
2017-11-01
In this paper a novel method is introduced to solve nonlinear partial differential equations (PDEs). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind together with the Jacobian-free Newton-generalized minimum residual (JFNGMRes) method with an adaptive preconditioner. The nonlinear PDE is converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions, without any linearization, discretization, or recourse to other methods. Finally, JFNGMRes yields the solution of the nonlinear algebraic system. To illustrate the reliability and efficiency of the proposed method, we solve several examples of the well-known Fisher equation and compare our results with other methods.
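The Jacobian-free Newton-Krylov idea behind a solver like JFNGMRes, approximating Jacobian-vector products by finite differences so the Jacobian is never formed, can be sketched as follows. This is a simplified stand-in, not the paper's solver; the test system, tolerances, and Krylov basis size are assumptions for illustration:

```python
import numpy as np

def jfnk(F, u0, eps=1e-7, m=20, max_newton=30, tol=1e-10):
    """Jacobian-free Newton-Krylov sketch: each Newton step solves
    J du = -F(u) by least squares over a Krylov basis, approximating the
    matrix-vector product J v with (F(u + eps*v) - F(u)) / eps, so the
    Jacobian matrix is never assembled."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        Fu = F(u)
        r = -Fu
        if np.linalg.norm(r) < tol:
            break
        Jv = lambda v: (F(u + eps * v) - Fu) / eps
        # Arnoldi-style Krylov basis seeded with the residual
        V = [r / np.linalg.norm(r)]
        for _ in range(m - 1):
            w = Jv(V[-1])
            for v in V:                      # Gram-Schmidt orthogonalization
                w = w - (w @ v) * v
            nw = np.linalg.norm(w)
            if nw < 1e-13:
                break
            V.append(w / nw)
        B = np.column_stack(V)
        # least-squares GMRES-like solve of J du = r within the Krylov subspace
        A = np.column_stack([Jv(B[:, j]) for j in range(B.shape[1])])
        y, *_ = np.linalg.lstsq(A, r, rcond=None)
        u = u + B @ y
    return u
```

A small nonlinear system such as F(x) = (x0^2 + x1 - 3, x0 + x1^2 - 5) converges from (1, 1) to the root (1, 2) in a few Newton steps.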
NASA Astrophysics Data System (ADS)
Hasan, Mohammed A.
1997-11-01
In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from the acoustic backscattered data and using the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas in signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets. Thus, accurate estimation of these time delays would help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data and thus it is computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft-constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigenstructure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces.
The effectiveness of the method is demonstrated on the problem of detection of multiple specular components in the acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates of the delays associated with the specular components. Simulation results on simulated and real shallow water data are provided which show the promise of this new scheme for target detection in a heavily cluttered environment.
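The eigenstructure-based frequency estimation described above can be illustrated with a minimal MUSIC sketch: build the lagged sample covariance matrix, split off the noise subspace, and scan a frequency grid. The covariance construction and grid search are generic textbook choices, not the dissertation's exact algorithms:

```python
import numpy as np

def music_freqs(x, p, grid, M=20):
    """MUSIC sketch: estimate p complex-exponential frequencies from the
    eigenstructure of the M x M lagged sample covariance matrix."""
    N = len(x)
    # sample covariance from sliding length-M windows of the signal
    X = np.array([x[i:i + M] for i in range(N - M + 1)])
    R = (X.T @ np.conj(X)) / X.shape[0]
    _, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, :M - p]                      # noise subspace (smallest M-p)
    m = np.arange(M)
    # pseudospectrum: large where the steering vector is orthogonal
    # to the noise subspace
    P = np.array([
        1.0 / np.linalg.norm(En.conj().T @ np.exp(2j * np.pi * f * m)) ** 2
        for f in grid
    ])
    # return the p grid frequencies with the largest pseudospectrum values
    return np.sort(grid[np.argsort(P)[-p:]])
```

For a single noisy complex exponential at normalized frequency 0.12, the pseudospectrum peaks at that frequency on a fine grid.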
Yu, Nancy X.; Lam, T. H.; Liu, Iris K. F.; Stewart, Sunita M.
2015-01-01
Few clinical trials report on the active intervention components that result in outcome changes, although this is relevant to further improving efficacy and adapting effective programs to other populations. This paper presents follow-up analyses of a randomized controlled trial to enhance adaptation by increasing knowledge and personal resilience in two separate brief interventions with immigrants from Mainland China to Hong Kong (Yu et al., 2014b). The present paper extends our previous one by reporting on the longer term effect of the interventions on personal resilience, and examining whether the Resilience intervention worked as designed to enhance personal resilience. The four-session intervention targeted self-efficacy, positive thinking, altruism, and goal setting. In this randomized controlled trial, 220 immigrants were randomly allocated to three arms: Resilience, Information (an active control arm), and Control arms. Participants completed measures of the four active components (self-efficacy, positive thinking, altruism, and goal setting) at baseline and immediately after the intervention. Personal resilience was assessed at baseline, post-intervention, and 3- and 6-month follow-ups. The results showed that the Resilience arm had greater increases in the four active components post-intervention. Changes in each of the four active components at the post-intervention assessment mediated enhanced personal resilience at the 3-month follow-up in the Resilience arm. Changes in self-efficacy and goal setting showed the largest effect size, and altruism showed the smallest. The arm effects of the Resilience intervention on enhanced personal resilience at the 6-month follow-up were mediated by increases of personal resilience post-intervention (Resilience vs. Control) and at the 3-month follow-up (Resilience vs. Information). These findings showed that these four active components were all mediators in this Resilience intervention.
In some models, short-term increases in personal resilience predicted longer-term increases in personal resilience, suggesting how changes in intervention outcomes might persist over time. PMID:26640446
Quantum search algorithms on a regular lattice
NASA Astrophysics Data System (ADS)
Hein, Birgit; Tanner, Gregor
2010-07-01
Quantum algorithms for searching for one or more marked items on a d-dimensional lattice provide an extension of Grover’s search algorithm including a spatial component. We demonstrate that these lattice search algorithms can be viewed in terms of the level dynamics near an avoided crossing of a one-parameter family of quantum random walks. We give approximations for both the level splitting at the avoided crossing and the effectively two-dimensional subspace of the full Hilbert space spanning the level crossing. This makes it possible to give the leading order behavior for the search time and the localization probability in the limit of large lattice size including the leading order coefficients. For d=2 and d=3, these coefficients are calculated explicitly. Closed form expressions are given for higher dimensions.
Jørgensen, R; Licht, R W; Lysaker, P H; Munk-Jørgensen, P; Buck, K D; Jensen, S O W; Hansson, L; Zoffmann, V
2015-07-01
Poor insight has a negative impact on the outcome in schizophrenia; consequently, poor insight is a logical target for treatment. However, neither medication nor psychosocial interventions have been demonstrated to improve poor insight. A method originally designed for diabetes patients to improve their illness management, Guided Self-Determination (GSD), has been adapted for use in patients with schizophrenia (GSD-SZ). The purpose of this study was to investigate the effect on insight of GSD-SZ as a supplement to treatment as usual (TAU) as compared to TAU alone in outpatients diagnosed with schizophrenia. The design was an open randomized trial. The primary hypothesis was that cognitive insight, as assessed by the Beck Cognitive Insight Scale (BCIS), would improve in the patients who received GSD-SZ+TAU. We additionally explored whether the intervention led to changes in clinical insight, self-perceived recovery, self-esteem, social functioning and symptom severity. Assessments were conducted at baseline, and at 3-, 6- and 12-month follow-up. Analysis was based on the principles of intention to treat and potential confounders were taken into account through applying a multivariate approach. A total of 101 participants were randomized to GSD-SZ+TAU (n=50) or to TAU alone (n=51). No statistically significant differences were found in cognitive insight. However, at 12-month follow-up, clinical insight (measured by item G12 of the Positive and Negative Syndrome Scale), symptom severity, and social functioning had statistically significantly improved in the intervention group as compared to the control group. "Improving insight in patients diagnosed with schizophrenia", NCT01282307, http://clinicaltrials.gov/. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Naslund, John A.; Aschbrenner, Kelly A.; Marsch, Lisa A.; McHugo, Gregory J.; Bartels, Stephen J.
2015-01-01
Objective Online crowdsourcing refers to the process of obtaining needed services, ideas, or content by soliciting contributions from a large group of people over the Internet. We examined the potential for using online crowdsourcing methods for conducting behavioral health intervention research among people with serious mental illness (SMI). Methods Systematic review of randomized trials using online crowdsourcing methods for recruitment, intervention delivery, and data collection in people with SMI, including schizophrenia spectrum disorders and mood disorders. Included studies were completed entirely over the Internet without any face-to-face contact between participants and researchers. Databases and sources Medline, Cochrane Library, Web of Science, CINAHL, Scopus, PsycINFO, Google Scholar, and reference lists of relevant articles. Results We identified 7 randomized trials that enrolled N=1,214 participants (range: 39 to 419) with SMI. Participants were mostly female (72%) and had mood disorders (94%). Attrition ranged from 14% to 81%. Three studies had attrition rates below 25%. Most interventions were adapted from existing evidence-based programs, and consisted of self-directed education, psychoeducation, self-help, and illness self-management. Six studies collected self-reported mental health symptoms, quality of life, and illness severity. Three studies supported intervention effectiveness and two studies showed improvements in the intervention and comparison conditions over time. Peer support emerged as an important component of several interventions. Overall, studies were of medium to high methodological quality. Conclusion Online crowdsourcing methods appear feasible for conducting intervention research in people with SMI. Future efforts are needed to improve retention rates, collect objective outcome measures, and reach a broader demographic. PMID:26188164
Information-theoretic limitations on approximate quantum cloning and broadcasting
NASA Astrophysics Data System (ADS)
Lemm, Marius; Wilde, Mark M.
2017-07-01
We prove quantitative limitations on any approximate simultaneous cloning or broadcasting of mixed states. The results are based on information-theoretic (entropic) considerations and generalize the well-known no-cloning and no-broadcasting theorems. We also observe and exploit the fact that the universal cloning machine on the symmetric subspace of n qudits and symmetrized partial trace channels are dual to each other. This duality manifests itself both in the algebraic sense of adjointness of quantum channels and in the operational sense that a universal cloning machine can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. The duality extends to give control of the performance of generalized universal quantum cloning machines (UQCMs) on subspaces more general than the symmetric subspace. This gives a way to quantify the usefulness of a priori information in the context of cloning. For example, we can control the performance of an antisymmetric analog of the UQCM in recovering from the loss of n-k fermionic particles.
Qudit-Basis Universal Quantum Computation Using χ^{(2)} Interactions.
Niu, Murphy Yuezhen; Chuang, Isaac L; Shapiro, Jeffrey H
2018-04-20
We prove that universal quantum computation can be realized-using only linear optics and χ^{(2)} (three-wave mixing) interactions-in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ^{(2)} Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ^{(2)} interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by χ^{(2)} interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.
NASA Astrophysics Data System (ADS)
Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Dai, Lianrong; Draayer, Jerry P.
2018-05-01
An extended pairing Hamiltonian that describes multi-pair interactions among isospin T = 1 and angular momentum J = 0 neutron-neutron, proton-proton, and neutron-proton pairs in a spherical mean field, such as the spherical shell model, is proposed based on the standard T = 1 pairing formalism. The advantage of the model lies in the fact that numerical solutions within the seniority-zero symmetric subspace can be obtained more easily and with less computational time than those calculated from the mean-field plus standard T = 1 pairing model. Thus, large-scale calculations within the seniority-zero symmetric subspace of the model are feasible. As an example of the application, the average neutron-proton interaction in even-even N ∼ Z nuclei that can be suitably described in the f5/2pg9/2 shell is estimated in the present model, with a focus on the role of np-pairing correlations.
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
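The core subspace splitting can be illustrated with a generic linear operator: the component of the contrast source lying in the span of the dominant singular vectors is obtained from the measured data by spectrum analysis alone, leaving only the orthogonal complement to the lower-dimensional optimization. The operator and dimensions below are placeholders, not an actual scattering model:

```python
import numpy as np

def deterministic_source_part(G, b, L):
    """Sketch of the subspace-based splitting: recover the component of
    the contrast source in the span of the L dominant right singular
    vectors of the operator G directly from the data b, without any
    optimization."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    # expansion coefficients along the L dominant singular directions
    coeffs = (U[:, :L].conj().T @ b) / s[:L]
    return Vh[:L].conj().T @ coeffs
```

If the true source lies entirely in the dominant subspace, this closed-form projection recovers it exactly; in the full method the remaining component would be found by optimizing over the complementary subspace.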
Current harmonics elimination control method for six-phase PM synchronous motor drives.
Yuan, Lei; Chen, Ming-liang; Shen, Jian-qing; Xiao, Fei
2015-11-01
To reduce the undesired 5th and 7th stator harmonic currents in the six-phase permanent magnet synchronous motor (PMSM), an improved vector control algorithm is proposed based on the vector space decomposition (VSD) transformation, which allows the fundamental and harmonic subspaces to be controlled separately. To improve the traditional VSD technique, a novel synchronous rotating coordinate transformation matrix is presented; with it, a conventional PI controller in the d-q subspace suffices to achieve zero steady-state error, and the controller parameters are designed via the internal model principle. Moreover, a current PI controller in parallel with a resonant controller is employed in the x-y subspace to compensate the specific 5th and 7th harmonic components. In addition, a new six-phase SVPWM algorithm based on VSD transformation theory is also proposed. Simulation and experimental results verify the effectiveness of the current-decoupling vector controller. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
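The VSD idea can be demonstrated numerically for an assumed asymmetrical six-phase winding (two three-phase sets displaced by 30 degrees): a balanced fundamental set maps onto the alpha-beta subspace while a 5th-harmonic set maps onto the x-y subspace. The matrix below is a standard textbook form, not necessarily the paper's novel transformation:

```python
import numpy as np

# Phase angles of an assumed asymmetrical six-phase winding:
# two three-phase sets displaced by 30 degrees.
theta = np.array([0, 2, 4, 0.5, 2.5, 4.5]) * np.pi / 3

# VSD matrix: rows map phase currents onto the alpha-beta (fundamental)
# and x-y (5th/7th harmonic) subspaces, plus two zero-sequence rows.
T = (1 / 3) * np.array([
    np.cos(theta),          # alpha
    np.sin(theta),          # beta
    np.cos(5 * theta),      # x
    np.sin(5 * theta),      # y
    [1, 1, 1, 0, 0, 0],     # zero sequence, set 1
    [0, 0, 0, 1, 1, 1],     # zero sequence, set 2
])

def vsd(currents):
    """Project instantaneous six-phase currents onto the VSD subspaces."""
    return T @ currents
```

Evaluating at any instant, a balanced fundamental i_k = cos(wt - theta_k) produces zero x-y components, and a 5th-harmonic set i_k = cos(5(wt - theta_k)) produces zero alpha-beta components, which is exactly the separation the controller exploits.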
Word Spotting and Recognition with Embedded Attributes.
Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest
2014-12-01
This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
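Casting spotting and recognition as a nearest neighbor problem in the common subspace can be sketched in a few lines; the embeddings below are hypothetical placeholders standing in for the attribute-based representations of word images and text strings:

```python
import numpy as np

def spot(query_string_emb, image_embs):
    """Nearest-neighbor word spotting in a common subspace: embeddings are
    L2-normalized so that ranking by dot product equals ranking by cosine
    similarity, keeping each comparison a single matrix-vector product."""
    q = query_string_emb / np.linalg.norm(query_string_emb)
    X = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = X @ q
    return np.argsort(scores)[::-1]        # best match first
```

Because the representations are fixed-length and low dimensional, retrieval over a large dataset reduces to one dense matrix-vector product plus a sort.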
Hardware-efficient Bell state preparation using Quantum Zeno Dynamics in superconducting circuits
NASA Astrophysics Data System (ADS)
Flurin, Emmanuel; Blok, Machiel; Hacohen-Gourgy, Shay; Martin, Leigh S.; Livingston, William P.; Dove, Allison; Siddiqi, Irfan
By performing a continuous joint measurement on a two-qubit system, we restrict the qubit evolution to a chosen subspace of the total Hilbert space. This extension of the quantum Zeno effect, called Quantum Zeno Dynamics, has already been explored in various physical systems such as superconducting cavities, single Rydberg atoms, atomic ensembles and Bose-Einstein condensates. In this experiment, two superconducting qubits are strongly dispersively coupled to a high-Q cavity (χ >> κ), allowing the doubly excited state | 11 〉 to be selectively monitored. The Quantum Zeno Dynamics in the complementary subspace enables us to coherently prepare a Bell state. As opposed to dissipation engineering schemes, we emphasize that our protocol is deterministic, does not rely on direct coupling between qubits, and functions using only single-qubit controls and cavity readout. Such Quantum Zeno Dynamics can be generalized to larger Hilbert spaces, enabling deterministic generation of many-body entangled states, and thus realizes a decoherence-free subspace allowing alternative noise-protection schemes.
How prevention curricula are taught under real-world conditions
Miller-Day, Michelle; Pettigrew, Jonathan; Hecht, Michael L.; Shin, YoungJu; Graham, John; Krieger, Janice
2015-01-01
Purpose As interventions are disseminated widely, issues of fidelity and adaptation become increasingly critical to understand. This study aims to describe the types of adaptations made by teachers delivering a school-based substance use prevention curriculum and their reasons for adapting program content. Design/methodology/approach To determine the degree to which implementers adhere to a prevention curriculum, naturally adapt the curriculum, and the reasons implementers give for making adaptations, the study examined lesson adaptations made by the 31 teachers who implemented the keepin' it REAL drug prevention curriculum in 7th grade classrooms (n = 25 schools). Data were collected from teacher self-reports after each lesson and observer coding of videotaped lessons. From the total sample, 276 lesson videos were randomly selected for observational analysis. Findings Teachers self-reported adapting more than 68 percent of prevention lessons, while independent observers reported more than 97 percent of the observed lessons were adapted in some way. Types of adaptations included: altering the delivery of the lesson by revising the delivery timetable or delivery context; changing content of the lesson by removing, partially covering, revising, or adding content; and altering the designated format of the lesson (such as assigning small group activities to students as individual work). Reasons for adaptation included responding to constraints (time, institutional, personal, and technical), and responding to student needs (students' abilities to process curriculum content, to enhance student engagement with material). Research limitations/implications The study sample was limited to rural schools in the US mid-Atlantic; however, the results suggest that if programs are to be effectively implemented, program developers need a better understanding of the types of adaptations and reasons implementers provide for adapting curricula. 
Practical implications These descriptive data suggest that prevention curricula be developed in shorter teaching modules, that developers reconsider the usefulness of homework, and that implementer training and ongoing support might benefit from more attention to different implementation styles. Originality/value With nearly half of US public schools implementing some form of evidence-based substance use prevention program, issues of implementation fidelity and adaptation have become paramount in the field of prevention. The findings from this study reveal the complexity of the types of adaptations teachers make naturally in the classroom to evidence-based curricula and provide reasons for these adaptations. This information should prove useful for prevention researchers, program developers, and health educators alike. PMID:26290626
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel
Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real-time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state-of-the-art in subspace search and visualization methods, we achieve higher transparency in showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios for demonstrating how SeekAView can be applied in real-world data analysis scenarios.
NASA Astrophysics Data System (ADS)
Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping
2016-09-01
Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes the local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract the texture local histogram features (LHOG) at each pixel, which can efficiently capture the complex and micro-texture pattern. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
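As a rough illustration of the first-stage feature extraction described above, the sketch below computes plain per-pixel local intensity histograms in pure Python. It is a simplification: the paper's LHOG features are built via a local histogram transform over oriented-gradient channels, and every value here (window size, bin count, the toy image) is made up for the example.

```python
def local_histograms(img, win=1, bins=4, vmax=256):
    """Per-pixel normalised histogram of the (2*win+1)^2 neighbourhood --
    the flavour of local histogram feature used for texture description."""
    h, w = len(img), len(img[0])
    feats = {}
    for r in range(h):
        for c in range(w):
            hist = [0] * bins
            for dr in range(-win, win + 1):
                for dc in range(-win, win + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        hist[img[rr][cc] * bins // vmax] += 1
            total = sum(hist)
            feats[(r, c)] = [v / total for v in hist]
    return feats

# Toy image: a dark region (10) next to a bright region (200)
img = [[10, 10, 200], [10, 10, 200], [10, 10, 200]]
f = local_histograms(img)
print(f[(1, 0)])   # interior of the dark region: all mass in the low bin
print(f[(1, 2)])   # boundary column: mass split between low and high bins
```

Pixels with similar local characteristics get similar histograms, which is the property the LSS formulation then exploits.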
THE IMPACT OF A VIDEO INTERVENTION ON THE USE OF LOW VISION ASSISTIVE DEVICES
Goldstein, Robert B.; Dugan, Elizabeth; Trachtenberg, Felicia; Peli, Eli
2006-01-01
Purpose An image-enhanced educational and motivational video was developed for patients with low vision and their caretakers. Impact on knowledge, self-efficacy, and attitudes was assessed. Methods The video incorporated cognitive restructuring to change emotional response; a “virtual home”; a veridical simulation of vision with AMD; and contrast enhancement of the video. Subjects (median age 77.5) were randomized into control (N=79) and intervention (N=75) groups. Telephone interviews were conducted at baseline, 2 weeks and 3 months. Main outcome measures were: knowledge (8 questions), self-efficacy score (7 questions), adaptive behaviors (10 questions), willingness to use devices, and emotional response (4-point scales). Results The intervention group showed a statistically significant improvement in knowledge (difference of 1.1 out of 8 questions, p < 0.001). Change in use of books-on-tape was greater for the intervention group than for controls (p=0.005). The intervention group increased use of books-on-tape from 28% to 51% whereas the control group did not (34% at both times). However, there was no significant change in the use of other assistive devices, including magnifiers. Both groups increased adaptive behaviors. There was no significant difference in change of self-efficacy score or in emotional affect between the two groups. Conclusions The video had a small but statistically significant impact on knowledge and willingness to use assistive devices. There was little impact on adaptive behaviors and emotional affect. The minimal changes in outcome were disappointing, but this does not minimize the importance of patient education; it just emphasizes how hard it is to effect change. PMID:17435527
Trust as commodity: social value orientation affects the neural substrates of learning to cooperate
Declerck, Carolyn H.; Emonds, Griet; Boone, Christophe
2017-01-01
Abstract Individuals differ in their motives and strategies to cooperate in social dilemmas. These differences are reflected by an individual’s social value orientation: proselfs are strategic and motivated to maximize self-interest, while prosocials are more trusting and value fairness. We hypothesize that when deciding whether or not to cooperate with a random member of a defined group, proselfs, more than prosocials, adapt their decisions based on past experiences: they ‘learn’ instrumentally to form a base-line expectation of reciprocity. We conducted an fMRI experiment where participants (19 proselfs and 19 prosocials) played 120 sequential prisoner’s dilemmas against randomly selected, anonymous and returning partners who cooperated 60% of the time. Results indicate that cooperation levels increased over time, but that the rate of learning was steeper for proselfs than for prosocials. At the neural level, caudate and precuneus activation were more pronounced for proselfs relative to prosocials, indicating a stronger reliance on instrumental learning and self-referencing to update their trust in the cooperative strategy. PMID:28119509
Trust as commodity: social value orientation affects the neural substrates of learning to cooperate.
Lambert, Bruno; Declerck, Carolyn H; Emonds, Griet; Boone, Christophe
2017-04-01
Individuals differ in their motives and strategies to cooperate in social dilemmas. These differences are reflected by an individual's social value orientation: proselfs are strategic and motivated to maximize self-interest, while prosocials are more trusting and value fairness. We hypothesize that when deciding whether or not to cooperate with a random member of a defined group, proselfs, more than prosocials, adapt their decisions based on past experiences: they 'learn' instrumentally to form a base-line expectation of reciprocity. We conducted an fMRI experiment where participants (19 proselfs and 19 prosocials) played 120 sequential prisoner's dilemmas against randomly selected, anonymous and returning partners who cooperated 60% of the time. Results indicate that cooperation levels increased over time, but that the rate of learning was steeper for proselfs than for prosocials. At the neural level, caudate and precuneus activation were more pronounced for proselfs relative to prosocials, indicating a stronger reliance on instrumental learning and self-referencing to update their trust in the cooperative strategy. © The Author (2017). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Performance feedback, self-esteem, and cardiovascular adaptation to recurring stressors.
Brown, Eoin G; Creaven, Ann-Marie
2017-05-01
This study sought to examine the effects of performance feedback and individual differences in self-esteem on cardiovascular habituation to repeated stress exposure. Sixty-six university students (n = 39 female) completed a self-esteem measure and a cardiovascular stress-testing protocol involving repeated exposure to a mental arithmetic task. Cardiovascular functioning was sampled across four phases: resting baseline, initial stress exposure, a recovery period, and repeated stress exposure. Participants were randomly assigned to receive fictional positive feedback, negative feedback, or no feedback following the recovery period. Negative feedback was associated with a sensitized blood pressure response to a second exposure of the stress task. Positive feedback was associated with decreased cardiovascular and psychological responses to a second exposure. Self-esteem was also found to predict reactivity, and this interacted with the type of feedback received. These findings suggest that negative performance feedback sensitizes cardiovascular reactivity to stress, whereas positive performance feedback increases both cardiovascular and psychological habituation to repeated exposure to stressors. Furthermore, an individual's self-esteem also appears to influence this process.
Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.
Kilic, Niyazi; Hosgormez, Erkan
2016-03-01
Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature set models were constructed including different physical parameters, and they were fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting and the random subspace method (RSM) were used. Instance based learning (IBk) and random forest (RF) classifiers were applied to the six feature set models. The patients were classified into three groups, osteoporosis, osteopenia and control (healthy), using the ensemble classifiers. Total classification accuracy and f-measure were also used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% with the combination of model 6 (five BMD + five T-score values) using the RSM-RF classifier. The findings of this paper suggest that patients can be warned before a bone fracture occurs, by just examining some physical parameters that can easily be measured without invasive operations.
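The random subspace method (RSM) named above can be sketched in a few lines: each base learner is trained on a random subset of the features, and the ensemble takes a majority vote. The sketch below uses a 1-nearest-neighbour base learner and a tiny made-up dataset standing in for densitometry features; the paper's actual pipeline (IBk and RF learners, real BMD and T-score inputs) is more elaborate.

```python
import random

def rsm_ensemble(train_X, train_y, n_models=11, k_features=2, seed=0):
    """Random Subspace Method: each base learner sees a random feature subset."""
    rng = random.Random(seed)
    n_feat = len(train_X[0])
    subspaces = [rng.sample(range(n_feat), k_features) for _ in range(n_models)]

    def predict_1nn(x, feats):
        # 1-nearest-neighbour restricted to the chosen feature subspace
        def dist(a):
            return sum((a[f] - x[f]) ** 2 for f in feats)
        best = min(range(len(train_X)), key=lambda i: dist(train_X[i]))
        return train_y[best]

    def predict(x):
        votes = [predict_1nn(x, feats) for feats in subspaces]
        return max(set(votes), key=votes.count)  # majority vote
    return predict

# Hypothetical 4-feature data: class 0 ("control") vs class 1 ("osteoporosis")
X = [[0.8, 1.0, 0.9, 1.1], [0.7, 0.9, 0.8, 1.0],
     [0.4, 0.5, 0.45, 0.5], [0.35, 0.5, 0.4, 0.55]]
y = [0, 0, 1, 1]
clf = rsm_ensemble(X, y)
print(clf([0.75, 0.95, 0.85, 1.05]))  # → 0
print(clf([0.4, 0.5, 0.42, 0.52]))    # → 1
```

Training each learner on a different feature subspace decorrelates their errors, which is where the ensemble's accuracy gain comes from.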
NASA Astrophysics Data System (ADS)
Wang, Zhe; Wang, Wen-Qin; Shao, Huaizong
2016-12-01
Different from the phased-array using the same carrier frequency for each transmit element, the frequency diverse array (FDA) uses a small frequency offset across the array elements to produce range-angle-dependent transmit beampattern. FDA radar provides new application capabilities and potentials due to its range-dependent transmit array beampattern, but the FDA using linearly increasing frequency offsets will produce a range and angle coupled transmit beampattern. In order to decouple the range-azimuth beampattern for FDA radar, this paper proposes a uniform linear array (ULA) FDA using Costas-sequence modulated frequency offsets to produce random-like energy distribution in the transmit beampattern and thumbtack transmit-receive beampattern. In doing so, the range and angle of targets can be unambiguously estimated through matched filtering and subspace decomposition algorithms in the receiver signal processor. Moreover, random-like energy distributed beampattern can also be utilized for low probability of intercept (LPI) radar applications. Numerical results show that the proposed scheme outperforms the standard FDA in focusing the transmit energy, especially in the range dimension.
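The Costas-sequence frequency offsets at the heart of the proposed FDA can be generated with the classical Welch construction (powers of a primitive root modulo a prime), and the defining property, that all displacement vectors between sequence elements are distinct, is easy to verify directly. The 30 kHz offset unit below is an arbitrary illustrative value, not a parameter from the paper.

```python
def welch_costas(p, g):
    """Welch construction: a_i = g^i mod p is a Costas permutation of
    length p-1 when p is prime and g is a primitive root mod p."""
    return [pow(g, i, p) for i in range(1, p)]

def is_costas(seq):
    """Costas property: for each row distance d, the differences
    seq[i+d]-seq[i] must all be distinct."""
    n = len(seq)
    for d in range(1, n):
        diffs = [seq[i + d] - seq[i] for i in range(n - d)]
        if len(diffs) != len(set(diffs)):
            return False
    return True

seq = welch_costas(11, 2)          # [2, 4, 8, 5, 10, 9, 7, 3, 6, 1]
assert is_costas(seq)

# Hypothetical mapping of the code to per-element frequency offsets
delta_f = 30e3                     # 30 kHz offset unit (illustrative only)
offsets = [a * delta_f for a in seq]
print(offsets[:3])                 # [60000.0, 120000.0, 240000.0]
```

The distinct-difference property is what spreads the transmit energy into the random-like, thumbtack-shaped beampattern the abstract describes.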
Gestalt Intervention Groups for Anxious Parents in Hong Kong: A Quasi-Experimental Design.
Leung, Grace Suk Man; Khor, Su Hean
2017-01-01
This study examined the impact of gestalt intervention groups for anxious Chinese parents in Hong Kong. A non-randomized control group pre-test/post-test design was adopted. A total of 156 parents participated in the project. After 4 weeks of treatment, the intervention group participants had lower anxiety levels, less avoidance of inner experiences, and more kindness towards oneself and mindfulness when compared to control group participants. However, the dimension of self-judgment remained unchanged. The adaptation of gestalt intervention to suit the Chinese culture was discussed.
Savaşan, Ayşegül; Çam, Olcay
2017-06-01
People with alcohol dependency have lower self-esteem than controls, and when their alcohol use increases, their self-esteem decreases. Coping skills in alcohol related issues are predicted to reduce vulnerability to relapse. It is important to adapt care to individual needs so as to prevent a return to the cycle of alcohol use. The Tidal Model focuses on providing support and services to people who need to live a constructive life. The aim of this randomized study was to determine the effect of the psychiatric nursing approach based on the Tidal Model on coping and self-esteem in people with alcohol dependency. The study was semi-experimental in design with a control group, and was conducted on 36 individuals (18 experimental, 18 control). An experimental and a control group were formed by assigning persons to each group using the stratified randomization technique in the order in which they were admitted to hospital. The Coping Inventory (COPE) and the Coopersmith Self-Esteem Inventory (CSEI) were used as measurement instruments. The measurement instruments were applied before the intervention and three months after the intervention. In addition to routine treatment and follow-up, the psychiatric nursing approach based on the Tidal Model was applied to the experimental group in the One-to-One Sessions. The approach was effective in increasing the scores of people with alcohol dependency for positive reinterpretation and growth, active coping, restraint, use of emotional social support, and planning, and in reducing their scores for behavioral disengagement. Self-esteem rose, but the difference from the control group did not reach significance. The psychiatric nursing approach based on the Tidal Model helps people with alcohol dependency maintain their abstinence.
The results of the study may provide practices on a theoretical basis for improving coping behaviors and self-esteem and facilitating the recovery process of alcohol dependents with implications for mental health nursing. Copyright © 2017 Elsevier Inc. All rights reserved.
Lawson, Nathaniel C.; Robles, Augusto; Fu, Chin-Chuan; Lin, Chee Paul; Sawlani, Kanchan; Burgess, John O.
2016-01-01
Objectives To compare the clinical performance of Scotchbond™ Universal Adhesive used in self- and total-etch modes and two-bottle Scotchbond™ Multi-purpose Adhesive in total-etch mode for Class 5 non-carious cervical lesions (NCCLs). Methods 37 adults were recruited with 3 or 6 NCCLs (>1.5 mm deep). Teeth were isolated, and a short cervical bevel was prepared. Teeth were restored randomly with Scotchbond Universal total-etch, Scotchbond Universal self-etch or Scotchbond Multi-purpose, followed with a composite resin. Restorations were evaluated at baseline, 6, 12 and 24 months for marginal adaptation, marginal discoloration, secondary caries, and sensitivity to cold using modified USPHS criteria. Patients and evaluators were blinded. Logistic and linear regression models using a generalized estimating equation were applied to evaluate the effects of time and adhesive material on clinical assessment outcomes over the 24 month follow-up period. The Kaplan–Meier method was used to compare retention between adhesive materials. Results Clinical performance of all adhesive materials deteriorated over time for marginal adaptation and discoloration (p <0.0001). Both Scotchbond Universal self-etch and Scotchbond Multi-purpose materials were more than three times as likely to contribute to less satisfying performance in marginal discoloration over time than Scotchbond Universal total-etch. The retention rates up to 24 months were 87.6%, 94.9% and 100% for Scotchbond Multi-purpose, Scotchbond Universal self-etch and Scotchbond Universal total-etch, respectively. Conclusions Scotchbond Universal in self- and total-etch modes performed similarly to or better than Scotchbond Multi-purpose, respectively. Clinical significance 24 month evaluation of a universal adhesive indicates acceptable clinical performance, particularly in total-etch mode. PMID:26231300
Projection methods for line radiative transfer in spherical media.
NASA Astrophysics Data System (ADS)
Anusha, L. S.; Nagendra, K. N.
An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space ℝ^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate compared to contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over Relaxation (SOR).
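The Krylov-subspace idea behind Pre-BiCG can be illustrated with the simpler conjugate gradient method: every iterate is drawn from span{r0, A r0, A² r0, ...}. This is a stand-in, not the paper's method: CG requires a symmetric positive-definite matrix, whereas the BiCG variants handle the nonsymmetric systems arising in radiative transfer, and no preconditioner is shown here.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Minimal CG: iterates live in the Krylov subspace built from residuals."""
    x = [0.0] * len(b)
    r = b[:]                      # residual for the zero initial guess
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD test system; exact solution is x = [1/11, 7/11]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

For an n-by-n system, Krylov methods of this family converge in at most n steps in exact arithmetic, which is the source of the speed advantage over Jacobi-type stationary iterations.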
NASA Astrophysics Data System (ADS)
Wang, F.; Huang, Y.-Y.; Zhang, Z.-Y.; Zu, C.; Hou, P.-Y.; Yuan, X.-X.; Wang, W.-B.; Zhang, W.-G.; He, L.; Chang, X.-Y.; Duan, L.-M.
2017-10-01
We experimentally demonstrate room-temperature storage of quantum entanglement using two nuclear spins weakly coupled to the electronic spin carried by a single nitrogen-vacancy center in diamond. We realize universal quantum gate control over the three-qubit spin system and produce entangled states in the decoherence-free subspace of the two nuclear spins. By injecting arbitrary collective noise, we demonstrate that the decoherence-free entangled state has coherence time longer than that of other entangled states by an order of magnitude in our experiment.
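The decoherence-free construction the experiment relies on can be stated compactly for the textbook case of two qubits under collective dephasing (the generic example, not the specific nuclear-spin states of this experiment). With σ_z|0⟩ = +|0⟩ and σ_z|1⟩ = −|1⟩, collective phase noise acts as

```latex
U(\phi) = \exp\!\Big[-\tfrac{i\phi}{2}\big(\sigma_z^{(1)}+\sigma_z^{(2)}\big)\Big],
\qquad
\big(\sigma_z^{(1)}+\sigma_z^{(2)}\big)\,|01\rangle
  = \big(\sigma_z^{(1)}+\sigma_z^{(2)}\big)\,|10\rangle = 0 ,
```

so any state |ψ⟩ = α|01⟩ + β|10⟩ satisfies U(φ)|ψ⟩ = |ψ⟩ for every collective phase φ: the span of {|01⟩, |10⟩} is a decoherence-free subspace, which is why entanglement stored there survives the injected collective noise.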
Handley, Elizabeth D.; Michl-Petzing, Louisa C.; Rogosch, Fred A.; Cicchetti, Dante; Toth, Sheree L.
2016-01-01
Using a developmental cascades framework, the current study investigated whether treating maternal depression via interpersonal psychotherapy (IPT) may lead to more widespread positive adaptation for offspring and mothers including benefits to toddler attachment and temperament, and maternal parenting self-efficacy. The participants (N=125 mother-child dyads, mean mother age at baseline=25.43 years; 54.4% of mothers were African-American; mean offspring age at baseline=13.23 months) were from a randomized controlled trial (RCT) of IPT for a sample of racially and ethnically diverse, socioeconomically disadvantaged mothers of infants. Mothers were randomized to IPT (n=97) or an enhanced community standard (ECS) control group (n=28). Results of complier average causal effect (CACE) modeling showed that engagement with IPT led to significant decreases in maternal depressive symptoms at post-treatment. Moreover, reductions in maternal depression post-treatment were associated with less toddler disorganized attachment characteristics, more adaptive maternal perceptions of toddler temperament, and improved maternal parenting efficacy eight months following the completion of treatment. Our findings contribute to the emerging literature documenting the potential benefits to children of successfully treating maternal depression. Alleviating maternal depression appears to initiate a cascade of positive adaptation among both mothers and offspring, which may alter the well-documented risk trajectory for offspring of depressed mothers. PMID:28401849
Schaub, Michael P; Tiburcio, Marcela; Martinez, Nora; Ambekar, Atul; Balhara, Yatan Pal Singh; Wenger, Andreas; Monezi Andrade, André Luiz; Padruchny, Dzianis; Osipchik, Sergey; Gehring, Elise; Poznyak, Vladimir; Rekve, Dag; Souza-Formigoni, Maria Lucia Oliveira
2018-02-01
Given the scarcity of alcohol prevention and alcohol use disorder treatments in many low and middle-income countries, the World Health Organization launched an e-health portal on alcohol and health that includes a Web-based self-help program. This paper presents the protocol for a multicentre randomized controlled trial (RCT) to test the efficacy of the internet-based self-help intervention to reduce alcohol use. Two-arm randomized controlled trial (RCT) with follow-up 6 months after randomization. Community samples in middle-income countries. People aged 18+, with Alcohol Use Disorders Identification Test (AUDIT) scores of 8+ indicating hazardous alcohol consumption. Offer of an internet-based self-help intervention, 'Alcohol e-Health', compared with a 'waiting list' control group. The intervention, adapted from a previous program with evidence of effectiveness in a high-income country, consists of modules to reduce or entirely stop drinking. The primary outcome measure is change in the Alcohol Use Disorders Identification Test (AUDIT) score assessed at 6-month follow-up. Secondary outcomes include the self-reported numbers of standard drinks and alcohol-free days in a typical week during the past 6 months, and cessation of harmful or hazardous drinking (AUDIT < 8). Data analysis will be by intention-to-treat, using analysis of covariance to test whether program participants experience a greater reduction in their AUDIT score than controls at follow-up. Secondary outcomes will be analysed by (generalized) linear mixed models. Complier average causal effect and baseline observations carried forward will be used in sensitivity analyses. If the Alcohol e-Health program is found to be effective, the potential public health impact of its expansion into countries with underdeveloped alcohol prevention and alcohol use disorder treatment systems world-wide is considerable. © 2017 Society for the Study of Addiction.
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.
1992-01-01
A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
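The outer iteration in which TranAir embeds its Krylov solver is a standard inexact Newton method. The sketch below shows only the Newton skeleton on a made-up two-equation system, with the inner linear solve done directly by Cramer's rule rather than by a preconditioned Krylov method, and without the grid adaptation the code performs.

```python
def newton_2d(F, J, x0, tol=1e-12, max_iter=50):
    """Newton iteration: each step solves J(x) dx = -F(x) and updates x.
    (TranAir solves the inner linear system with a preconditioned Krylov
    method; a direct 2x2 solve keeps this sketch minimal.)"""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        a, b_, c, d = J(x, y)            # Jacobian entries [[a, b_], [c, d]]
        det = a * d - b_ * c
        dx = (-f1 * d + f2 * b_) / det   # Cramer's rule for J [dx, dy]^T = -F
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# Hypothetical nonlinear system with root (1, 1): x^2 + y = 2, x + y^2 = 2
F = lambda x, y: (x**2 + y - 2.0, x + y**2 - 2.0)
J = lambda x, y: (2 * x, 1.0, 1.0, 2 * y)
x, y = newton_2d(F, J, (1.5, 0.5))
# converges quadratically to the root (1, 1)
```

Making the inner solve only approximately accurate ("inexact" Newton) trades per-step linear-solver work against the number of outer iterations.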
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.
Stolper, Charles D; Perer, Adam; Gotz, David
2014-12-01
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
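The core contract of progressive visual analytics, an algorithm that emits meaningful partial results the analyst can inspect mid-run, maps naturally onto a generator. The sketch below streams running estimates of a mean; a real system such as Progressive Insights emits partial pattern-mining results and also accepts steering input, which this toy omits.

```python
import random

def progressive_mean(stream, chunk=100):
    """Yield successively refined estimates so an analyst can inspect
    partial results (and steer or stop) before the full pass completes."""
    total, n = 0.0, 0
    for i, x in enumerate(stream, 1):
        total += x
        n = i
        if i % chunk == 0:
            yield n, total / n      # partial result: (items seen, running mean)
    if n % chunk:
        yield n, total / n          # final result for a ragged last chunk

rng = random.Random(42)
data = [rng.gauss(10.0, 2.0) for _ in range(1000)]
partials = list(progressive_mean(data, chunk=250))
for n, est in partials:
    print(f"after {n:4d} items: mean estimate {est:.3f}")
```

Because each partial result is cheap to render, the visualization layer can refresh incrementally instead of blocking until the computation finishes.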
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications.
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to the performance of the KL approach and the brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT, COBRA-TF, ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly level (CASL progression problem number 6) and core wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
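The notion of an "important degrees of freedom" subspace can be illustrated in miniature with power iteration, which finds the single dominant direction of an operator. The dissertation's subspace construction uses randomized and Karhunen-Loeve based techniques on large coupled models; the toy diagonal matrix below only shows the core idea that a few directions carry most of the response.

```python
def power_iteration(A, n_iter=200):
    """Find the dominant eigenpair of a square matrix -- the simplest way
    to build a one-dimensional 'important subspace'."""
    v = [1.0] * len(A)
    for _ in range(n_iter):
        w = [sum(a * vi for a, vi in zip(row, v)) for row in A]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient approximates the dominant eigenvalue
    Av = [sum(a * vi for a, vi in zip(row, v)) for row in A]
    lam = sum(a * b for a, b in zip(Av, v))
    return lam, v

# Toy operator whose response is dominated by the first direction
A = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.5]]
lam, v = power_iteration(A)
print(round(lam, 6))   # → 2.0
```

Once the dominant directions are known, UQ or DA sampling can be restricted to them, which is what makes the high dimensional analyses tractable.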
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a user interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
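A minimal example of the response-adaptive randomization procedures RARtool simulates is the classical randomized play-the-winner urn, sketched below. The success probabilities, patient count and seed are made up, and RARtool itself targets formally optimal allocation designs for censored time-to-event outcomes rather than this binary-response rule.

```python
import random

def play_the_winner(p_success, n_patients=1000, seed=1):
    """Randomized play-the-winner urn: a success on an arm adds a ball for
    that arm; a failure adds a ball for the other arm, so allocation
    probabilities drift toward the better-performing treatment."""
    rng = random.Random(seed)
    urn = [1, 1]                       # one ball per arm to start
    allocations = [0, 0]
    for _ in range(n_patients):
        arm = rng.choices([0, 1], weights=urn)[0]
        allocations[arm] += 1
        success = rng.random() < p_success[arm]
        urn[arm if success else 1 - arm] += 1
    return allocations

alloc = play_the_winner([0.7, 0.3])   # hypothetical response rates
print(alloc)   # the better arm (index 0) receives the majority of patients
```

Simulating many such runs under different scenarios is exactly the kind of design assessment the software supports.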
Spontaneous Self-Distancing and Adaptive Self-Reflection Across Adolescence
White, Rachel E.; Kross, Ethan; Duckworth, Angela L.
2015-01-01
Experiments performed primarily with adults show that self-distancing facilitates adaptive self-reflection. However, no research has investigated whether adolescents spontaneously engage in this process or whether doing so is linked to adaptive outcomes. In this study, 226 African American adolescents, aged 11 to 20, reflected on an anger-related interpersonal experience. As expected, spontaneous self-distancing during reflection predicted lower levels of emotional reactivity by leading adolescents to reconstrue (rather than recount) their experience and blame their partner less. Moreover, the inverse relation between self-distancing and emotional reactivity strengthened with age. These findings highlight the role that self-distancing plays in fostering adaptive self-reflection in adolescence, and begin to elucidate the role that development plays in enhancing the benefits of engaging in this process. PMID:25876213
Hierarchical random walks in trace fossils and the origin of optimal search behavior
Sims, David W.; Reynolds, Andrew M.; Humphries, Nicolas E.; Southall, Emily J.; Wearmouth, Victoria J.; Metcalfe, Brett; Twitchett, Richard J.
2014-01-01
Efficient searching is crucial for timely location of food and other resources. Recent studies show that diverse living animals use a theoretically optimal scale-free random search for sparse resources known as a Lévy walk, but little is known of the origins and evolution of foraging behavior and the search strategies of extinct organisms. Here, using simulations of self-avoiding trace fossil trails, we show that randomly introduced strophotaxis (U-turns)—initiated by obstructions such as self-trail avoidance or innate cueing—leads to random looping patterns with clustering across increasing scales that is consistent with the presence of Lévy walks. This predicts that optimal Lévy searches may emerge from simple behaviors observed in fossil trails. We then analyzed fossilized trails of benthic marine organisms by using a novel path analysis technique and find the first evidence, to our knowledge, of Lévy-like search strategies in extinct animals. Our results show that simple search behaviors of extinct animals in heterogeneous environments give rise to hierarchically nested Brownian walk clusters that converge to optimal Lévy patterns. Primary productivity collapse and large-scale food scarcity characterizing mass extinctions evident in the fossil record may have triggered adaptation of optimal Lévy-like searches. The findings suggest that Lévy-like behavior has been used by foragers since at least the Eocene but may have a more ancient origin, which might explain recent widespread observations of such patterns among modern taxa. PMID:25024221
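The contrast between Brownian-like and Lévy-like searching described above can be illustrated with a toy simulation. This is a minimal sketch, not the authors' path-analysis technique: it draws step lengths from an exponential distribution (Brownian-like) versus a power-law distribution (Lévy-like, tail exponent mu), and the heavy tail of the latter is what produces the scale-free clustering of move lengths; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def walk(n_steps, levy=False, mu=2.0):
    """2-D random walk: exponential step lengths (Brownian-like) or
    power-law step lengths (Levy-like, P(l) ~ l^-mu, l >= 1)."""
    if levy:
        # inverse-CDF sampling of a Pareto distribution
        steps = (1 - rng.random(n_steps)) ** (-1.0 / (mu - 1.0))
    else:
        steps = rng.exponential(1.0, n_steps)
    angles = rng.uniform(0, 2 * np.pi, n_steps)
    xy = np.cumsum(np.column_stack([steps * np.cos(angles),
                                    steps * np.sin(angles)]), axis=0)
    return xy, steps

_, brown_steps = walk(20000)
_, levy_steps = walk(20000, levy=True)

# Heavy tail: the largest Levy step dwarfs the largest Brownian-like one
print(brown_steps.max(), levy_steps.max())
```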
Connor, Jason T; Elm, Jordan J; Broglio, Kristine R
2013-08-01
We present a novel Bayesian adaptive comparative effectiveness trial comparing three treatments for status epilepticus that uses adaptive randomization with potential early stopping. The trial will enroll 720 unique patients in emergency departments and uses a Bayesian adaptive design. Compared with a design without adaptive randomization, the adaptive design yields an efficient trial: a higher proportion of patients is likely to be randomized to the most effective treatment arm, generally fewer total patients are needed, and power is higher than in an analogous fixed-randomization trial when identifying a superior treatment. When one treatment is superior to the other two, the trial design provides better patient care, higher power, and a lower expected sample size. Copyright © 2013 Elsevier Inc. All rights reserved.
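The response-adaptive randomization idea above can be sketched with a generic Bayesian scheme. The following uses Thompson sampling with Beta priors on binary outcomes, which is an illustrative stand-in rather than the specific design of the trial; the three success rates and the random seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_trial(true_rates, n_patients=720):
    """Sketch of response-adaptive randomization (Thompson sampling):
    each patient is assigned to the arm whose posterior draw of the
    success probability is highest; posteriors are Beta(1+s, 1+f)."""
    k = len(true_rates)
    successes = np.zeros(k)
    failures = np.zeros(k)
    counts = np.zeros(k, dtype=int)
    for _ in range(n_patients):
        draws = rng.beta(1 + successes, 1 + failures)  # one draw per arm
        arm = int(np.argmax(draws))
        counts[arm] += 1
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return counts

counts = adaptive_trial([0.3, 0.5, 0.7])
print(counts)  # allocation drifts toward the most effective arm
```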
Developmental cascade effects of the New Beginnings Program on adolescent adaptation outcomes.
McClain, Darya Bonds; Wolchik, Sharlene A; Winslow, Emily; Tein, Jenn-Yun; Sandler, Irwin N; Millsap, Roger E
2010-11-01
Using data from a 6-year longitudinal follow-up sample of 240 youth who participated in a randomized experimental trial of a preventive intervention for divorced families with children ages 9-12, the current study tested alternative cascading pathways by which the intervention decreased symptoms of internalizing disorders, symptoms of externalizing disorders, substance use, and risky sexual behavior and increased self-esteem and academic performance in mid- to late adolescence (15-19 years old). It was hypothesized that the impact of the program on adolescent adaptation outcomes would be explained by progressive associations between program-induced changes in parenting and youth adaptation outcomes. The results supported a cascading model of program effects in which the program was related to increased mother-child relationship quality that was related to subsequent decreases in child internalizing problems, which then was related to subsequent increases in self-esteem and decreases in symptoms of internalizing disorders in adolescence. The results were also consistent with a model in which the program increased maternal effective discipline that was related to decreased child externalizing problems, which was related to subsequent decreases in symptoms of externalizing disorders, less substance use, and better academic performance in adolescence. There were no significant differences in the model based on level of baseline risk or adolescent gender. These results provide support for a cascading pathways model of child and adolescent development.
Garnefski, N; Kraaij, V; Benoist, M; Bout, Z; Karels, E; Smit, A
2013-07-01
The aim of this study was to investigate whether a new cognitive-behavioral self-help program with minimal coaching could improve psychological well-being (depression, anxiety, and coping self-efficacy) in people with rheumatic disease and depressive symptoms. In total, 82 persons with a rheumatic disease enrolled in a randomized controlled trial were allocated to either a group receiving the self-help program or a waiting list control condition group. For both groups, measurements were done at baseline, posttest, and followup. The outcome measures were the depression and anxiety scales of the Hospital Anxiety and Depression Scale and an adaptation of the Generalized Self-Efficacy Scale. Repeated-measures analyses of covariance were performed to evaluate changes in outcome measures from pretest to posttest and from posttest to followup. The results showed that the self-help program was effective in reducing symptoms of depression and anxiety and in strengthening coping self-efficacy. The positive effects remained after a followup period of 2 months. This cost-effective program could very well be used as a first step in a stepped care approach or as one of the treatment possibilities in a matched care approach. Copyright © 2013 by the American College of Rheumatology.
NASA Astrophysics Data System (ADS)
Aster, R. C.; McMahon, N. D.; Myers, E. K.; Lough, A. C.
2015-12-01
Lough et al. (2014) first detected deep sub-icecap magmatic events beneath the Executive Committee Range volcanoes of Marie Byrd Land. Here, we extend the identification and analysis of these events in space and time utilizing subspace detection. Subspace detectors provide a highly effective methodology for studying events within seismic swarms that have similar moment tensor and Green's function characteristics, and are particularly effective for identifying low signal-to-noise events. Marie Byrd Land (MBL) is an extremely remote continental region that is nearly completely covered by the West Antarctic Ice Sheet (WAIS). The southern extent of Marie Byrd Land lies within the West Antarctic Rift System (WARS), which includes the volcanic Executive Committee Range (ECR). The ECR shows a north-to-south progression of volcanism across the WARS during the Holocene. In 2013, analysis of POLENET/ANET seismic data identified two swarms of seismic activity in 2010 and 2011. These events have been interpreted as deep, long-period (DLP) earthquakes based on depth (25-40 km) and low frequency content. The DLP events in MBL lie beneath an inferred sub-WAIS volcanic edifice imaged with ice-penetrating radar and have been interpreted as a present location of magmatic intrusion. The magmatic swarm activity in MBL provides a promising target for advanced subspace detection and temporal, spatial, and event size analysis of an extensive deep long-period earthquake swarm using a remote seismographic network. We utilized a catalog of 1,370 traditionally identified DLP events to construct subspace detectors for the six nearest stations and analyzed two years of data spanning 2010-2011. Association of these detections into events resulted in an approximate ten-fold increase in the number of locatable earthquakes.
In addition to the two previously identified swarms during early 2010 and early 2011, we find sustained activity throughout the two years of study that includes several previously unidentified periods of heightened activity. Correlation with large global earthquakes suggests that the DLP activity is not sensitive to remote teleseismic triggering.
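A subspace detector of the kind used above can be sketched in a few lines: build an orthonormal basis from a family of similar template waveforms via the SVD, then score a data window by the fraction of its energy captured by that subspace. The waveforms, subspace rank, and noise level below are synthetic stand-ins, not the POLENET/ANET templates.

```python
import numpy as np

rng = np.random.default_rng(2)

# A family of similar template waveforms (same shape, varying width)
t = np.linspace(0, 1, 200)
templates = np.array([np.exp(-((t - 0.5) / w) ** 2) * np.sin(40 * t)
                      for w in (0.05, 0.07, 0.09, 0.11)])

# Orthonormal basis spanning most of the template family (rank 2 here)
U = np.linalg.svd(templates.T, full_matrices=False)[0][:, :2]

def detection_statistic(x):
    """Fraction of the window's energy captured by the template subspace."""
    return np.linalg.norm(U.T @ x) ** 2 / np.linalg.norm(x) ** 2

noise = rng.normal(0, 0.3, t.size)
event = templates[1] + noise          # low-SNR event from the family
stat_event = detection_statistic(event)
stat_noise = detection_statistic(noise)
print(stat_event, stat_noise)         # event scores far above pure noise
```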
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has received less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and requires less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification.
The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of the EKM-based MKL; (3) this paper adopts Gauss elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
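The isomorphism claim above (dot products preserved in the reduced orthonormal subspace) can be checked numerically. The sketch below substitutes a QR factorization for the paper's Gauss elimination step, since both yield an orthonormal basis of the spanned subspace; the data dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 samples in a 50-dim empirical feature space that actually has rank 5
latent = rng.normal(size=(100, 5))
X = latent @ rng.normal(size=(5, 50))

# Orthonormal basis of the spanned subspace (QR here stands in for the
# Gauss-elimination extraction used in the paper)
Q = np.linalg.qr(X.T)[0][:, :5]          # 50 x 5 orthonormal basis
coords = X @ Q                            # samples in the reduced subspace

# Isomorphism check: all pairwise dot products are preserved
G_full = X @ X.T
G_reduced = coords @ coords.T
print(np.allclose(G_full, G_reduced))
```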
Single-step controlled-NOT logic from any exchange interaction
NASA Astrophysics Data System (ADS)
Galiautdinov, Andrei
2007-11-01
A self-contained approach to studying the unitary evolution of coupled qubits is introduced, capable of addressing a variety of physical systems described by exchange Hamiltonians containing Rabi terms. The method automatically determines both the Weyl chamber steering trajectory and the accompanying local rotations. Particular attention is paid to the case of anisotropic exchange with tracking controls, which is solved analytically. It is shown that, if the computational subspace is well isolated, any exchange interaction can always generate high-fidelity, single-step controlled-NOT (CNOT) logic, provided that both qubits can be individually manipulated. The results are then applied to superconducting qubit architectures, for which several CNOT gate implementations are identified. The paper concludes with consideration of two CNOT gate designs having high efficiency and operating with no significant leakage to higher-lying noncomputational states.
Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.
2017-09-01
Updated and detailed indoor models are being increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building indoors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which constitutes an initial approach to the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous. One subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example, in the case of a room containing several doors in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings, and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
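The door-detection step, finding local minima of the vertical point-trajectory distance, can be sketched on a synthetic profile. The profile shape, door positions, and the 2.2 m height threshold below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

# Synthetic vertical-distance profile along the scanner trajectory:
# ~2.6 m ceilings with dips to ~2.0 m where the trajectory crosses doors
n = 500
profile = np.full(n, 2.6) + 0.02 * np.sin(np.linspace(0, 20, n))
for door in (120, 300, 430):          # hypothetical door positions
    profile[door - 5:door + 5] = 2.0

def detect_doors(profile, height_thresh=2.2):
    """Doors appear as local minima of the vertical distance that fall
    below a door-height threshold; return the centre of each dip."""
    below = (profile < height_thresh).astype(int)
    starts = np.flatnonzero(np.diff(below) == 1) + 1   # run begins
    ends = np.flatnonzero(np.diff(below) == -1) + 1    # run ends
    return [(s + e) // 2 for s, e in zip(starts, ends)]

doors = detect_doors(profile)
print(doors)  # recovers the door positions
```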
Using task dynamics to quantify the affordances of throwing for long distance and accuracy.
Wilson, Andrew D; Weightman, Andrew; Bingham, Geoffrey P; Zhu, Qin
2016-07-01
In 2 experiments, the current study explored how affordances structure throwing for long distance and accuracy. In Experiment 1, 10 expert throwers (from baseball, softball, and cricket) threw regulation tennis balls to hit a vertically oriented 4 ft × 4 ft target placed at each of 9 locations (3 distances × 3 heights). We measured their release parameters (angle, speed, and height) and showed that they scaled their throws in response to changes in the target's location. We then simulated the projectile motion of the ball and identified a continuous subspace of release parameters that produce hits to each target location. Each subspace describes the affordance of our target to be hit by a tennis ball moving in a projectile motion to the relevant location. The simulated affordance spaces showed how the release parameter combinations required for hits changed with changes in the target location. The experts tracked these changes in their performance and were successful in hitting the targets. We next tested unusual (horizontal) targets that generated correspondingly different affordance subspaces to determine whether the experts would track the affordance to generate successful hits. Do the experts perceive the affordance? They do. In Experiment 2, 5 cricketers threw to hit either vertically or horizontally oriented targets and successfully hit both, exhibiting release parameters located within the requisite affordance subspaces. We advocate a task dynamical approach to the study of affordances as properties of objects and events in the context of tasks as the future of research in this area. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
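The release-parameter affordance subspace described above can be reproduced with elementary drag-free projectile kinematics. The sketch below samples (speed, angle) combinations at a fixed release height and keeps those whose trajectory passes within the target's half-height at the target distance; all numeric values are illustrative, not the paper's measured parameters.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def hits_target(speed, angle_deg, release_h, target_dist, target_h,
                half_size=0.61):
    """Does a drag-free projectile released at (speed, angle, height)
    pass within half_size metres of the target-centre height at the
    target distance? (A 4 ft x 4 ft target has half-size ~0.61 m.)"""
    a = np.radians(angle_deg)
    vx, vy = speed * np.cos(a), speed * np.sin(a)
    t = target_dist / vx                       # time to reach target plane
    y = release_h + vy * t - 0.5 * G * t ** 2  # ball height there
    return abs(y - target_h) <= half_size

# Sample the release-parameter space and keep combinations that hit:
speeds = np.linspace(5, 30, 60)
angles = np.linspace(-10, 60, 60)
subspace = [(s, a) for s in speeds for a in angles
            if hits_target(s, a, release_h=1.8, target_dist=9.0, target_h=1.5)]
print(len(subspace), "hitting (speed, angle) combinations")
```

The set `subspace` is a discrete sample of the continuous affordance subspace: a curved band of compatible speed-angle pairs rather than a single solution.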
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
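A minimal version of the MUSIC step described above can be written down in the standard narrowband uniform-linear-array setting rather than the ERT source geometry; the array size, source angles, and noise level below are arbitrary. The pseudospectrum peaks where candidate steering vectors are nearly orthogonal to the noise subspace.

```python
import numpy as np

rng = np.random.default_rng(4)

m, n_snap = 8, 200                    # sensors, snapshots
true_doas = np.radians([20.0, 60.0])  # two source directions

def steering(theta):
    # half-wavelength element spacing
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
N = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
X = A @ S + N

# Noise subspace: eigenvectors of the sample covariance with the
# smallest eigenvalues (eigh returns eigenvalues in ascending order)
R = X @ X.conj().T / n_snap
En = np.linalg.eigh(R)[1][:, :m - 2]

grid = np.radians(np.linspace(-90, 90, 721))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])

# The two largest local maxima of the pseudospectrum are the estimates
peaks = [i for i in range(1, grid.size - 1)
         if spectrum[i] >= spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: spectrum[i])[-2:])
est_doas = np.degrees(grid[top2])
print(est_doas)  # approximately the true directions
```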
Barkan, Susan E.; Skinner, Martie; Ben Packard, W.; Cole, Janice J.
2016-01-01
Objective To test the feasibility, usability, and proximal outcomes of Connecting, an adaptation of a low-cost, self-directed, family-based substance use prevention program, Staying Connected with Your Teen, with foster families in a randomized, waitlist control pilot study. Method Families (n = 60) fostering teens between 11 and 15 years of age were recruited into the study and randomly assigned into the self-administered program with telephone support from a family consultant (n = 32) or a waitlist control condition (n = 28). Results Overall satisfaction with the program was high, with 100% of parents reporting they would recommend the program to other caregivers and reporting being “very satisfied” or “satisfied” with the program. Program completion was good, with 62% of families completing all 91 specified tasks. Analyses of proximal outcomes revealed increased communication about sex and substance use (posttest 1 OR = 1.97 and 2.03, respectively). Teens in the intervention vs. the waitlist condition reported lower family conflict (OR = 0.48), and more family rules related to monitoring (OR = 4.02) and media use (OR = 3.24). Caregivers in the waitlist group reported significant increases in the teen’s positive involvement (partial eta sq = 17% increase) after receiving the intervention. Conclusions Overall, program participation appeared to lead to stronger family management, better communication between teens and caregivers around monitoring and media use, teen participation in setting family rules, and decreased teen attitudes favorable to antisocial behavior. This small pilot study shows promising results for this adapted program. PMID:27891209
Ab initio results for intermediate-mass, open-shell nuclei
NASA Astrophysics Data System (ADS)
Baker, Robert B.; Dytrych, Tomas; Launey, Kristina D.; Draayer, Jerry P.
2017-01-01
A theoretical understanding of nuclei in the intermediate-mass region is vital to astrophysical models, especially for nucleosynthesis. Here, we employ the ab initio symmetry-adapted no-core shell model (SA-NCSM) in an effort to push first-principles calculations across the sd-shell region. The ab initio SA-NCSM's advantages come from its ability to control the growth of model spaces by including only physically relevant subspaces, which allows us to explore ultra-large model spaces beyond the reach of other methods. We report on calculations for 19Ne and 20Ne up through 13 harmonic oscillator shells using realistic interactions and discuss the underlying structure as well as implications for various astrophysical reactions. This work was supported by the U.S. NSF (OCI-0904874 and ACI-1516338) and the U.S. DOE (DE-SC0005248), and also benefitted from the Blue Waters sustained-petascale computing project and high performance computing resources provided by LSU.
Saper, Robert B; Sherman, Karen J; Delitto, Anthony; Herman, Patricia M; Stevans, Joel; Paris, Ruth; Keosaian, Julia E; Cerrada, Christian J; Lemaster, Chelsey M; Faulkner, Carol; Breuer, Maya; Weinberg, Janice
2014-02-26
Chronic low back pain causes substantial morbidity and cost to society while disproportionately impacting low-income and minority adults. Several randomized controlled trials show yoga is an effective treatment. However, the comparative effectiveness of yoga and physical therapy, a common mainstream treatment for chronic low back pain, is unknown. This is a randomized controlled trial for 320 predominantly low-income minority adults with chronic low back pain, comparing yoga, physical therapy, and education. Inclusion criteria are adults 18-64 years old with non-specific low back pain lasting ≥ 12 weeks and a self-reported average pain intensity of ≥ 4 on a 0-10 scale. Recruitment takes place at Boston Medical Center, an urban academic safety-net hospital and seven federally qualified community health centers located in diverse neighborhoods. The 52-week study has an initial 12-week Treatment Phase where participants are randomized in a 2:2:1 ratio into i) a standardized weekly hatha yoga class supplemented by home practice; ii) a standardized evidence-based exercise therapy protocol adapted from the Treatment Based Classification method, individually delivered by a physical therapist and supplemented by home practice; and iii) education delivered through a self-care book. Co-primary outcome measures are 12-week pain intensity measured on an 11-point numerical rating scale and back-specific function measured using the modified Roland Morris Disability Questionnaire. In the subsequent 40-week Maintenance Phase, yoga participants are re-randomized in a 1:1 ratio to either structured maintenance yoga classes or home practice only. Physical therapy participants are similarly re-randomized to either five booster sessions or home practice only. Education participants continue to follow recommendations of educational materials. 
We will also assess cost effectiveness from the perspectives of the individual, insurers, and society using claims databases, electronic medical records, self-report cost data, and study records. Qualitative data from interviews will add subjective detail to complement quantitative data. This trial is registered in ClinicalTrials.gov, with the ID number: NCT01343927.
Correlational Neural Networks.
Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman
2016-02-01
Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
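For reference, the CCA baseline that CorrNet is compared against can be written compactly: whiten each view and take the SVD of the cross-covariance. The sketch below is classical linear CCA on synthetic two-view data sharing one latent variable, not the CorrNet architecture itself; dimensions and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two views sharing one latent signal z, each with private noise
n = 1000
z = rng.normal(size=n)
X = np.column_stack([z, rng.normal(size=n)]) @ rng.normal(size=(2, 4)) \
    + 0.1 * rng.normal(size=(n, 4))
Y = np.column_stack([z, rng.normal(size=n)]) @ rng.normal(size=(2, 3)) \
    + 0.1 * rng.normal(size=(n, 3))

def top_canonical_correlation(X, Y):
    """Classical linear CCA: whiten both views, then the singular
    values of the whitened cross-covariance are the canonical
    correlations."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    M = inv_sqrt(Xc.T @ Xc) @ (Xc.T @ Yc) @ inv_sqrt(Yc.T @ Yc)
    return np.linalg.svd(M, compute_uv=False)[0]

rho = top_canonical_correlation(X, Y)
print(rho)  # high: CCA recovers the shared latent variable
```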
SIMULTANEOUS MULTISLICE MAGNETIC RESONANCE FINGERPRINTING WITH LOW-RANK AND SUBSPACE MODELING
Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin
2018-01-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594
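The subspace model's key premise, that magnetization dynamics across tissue parameters lie near a low-dimensional temporal subspace, can be checked on a toy dictionary. The T1-recovery curves below are an illustrative stand-in for an MRF dictionary; the rank and parameter ranges are arbitrary.

```python
import numpy as np

# Toy dictionary of magnetization-evolution curves: simple T1 recovery
# over a range of T1 values, standing in for an MRF dictionary
t = np.linspace(0.05, 3.0, 120)                          # seconds
T1s = np.linspace(0.2, 2.0, 200)
D = np.array([1 - 2 * np.exp(-t / T1) for T1 in T1s])    # 200 x 120

# Temporal subspace: top-k right singular vectors of the dictionary
k = 5
Vk = np.linalg.svd(D, full_matrices=False)[2][:k]        # k x 120 basis

# A curve off the dictionary grid is still well approximated in the
# low-dimensional subspace (hypothetical T1 = 0.73 s)
x = 1 - 2 * np.exp(-t / 0.73)
x_hat = Vk.T @ (Vk @ x)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err)  # small
```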
López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio
2015-01-01
Radar high resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and evaluate the performance, simulated range profiles are used as the recognition database, while the test samples comprise data of actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities found in the shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising. PMID:25551484
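The angle-between-subspaces classifier described above can be sketched with principal angles computed from the SVD of the product of orthonormal bases. The data below are random stand-ins for range-profile subspaces; the paper's transformed domain and database details are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B, from the
    SVD of Qa^T Qb, where Qa and Qb are orthonormal bases."""
    Qa = np.linalg.qr(A)[0]
    Qb = np.linalg.qr(B)[0]
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Each "target" is a subspace of range-profile space; a test profile is
# assigned to the target whose subspace it is closest to in angle.
target_a = rng.normal(size=(50, 3))
target_b = rng.normal(size=(50, 3))
test = target_a @ rng.normal(size=(3, 1))   # profile lying in subspace a

ang_a = principal_angles(target_a, test).min()
ang_b = principal_angles(target_b, test).min()
print(ang_a < ang_b)  # test profile matched to target a
```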
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
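The core log-Euclidean device, mapping symmetric positive definite covariance matrices into a vector space where ordinary Euclidean operations apply, can be written in a few lines. This is a generic sketch of the metric only, not the incremental subspace learning algorithm or the block-division appearance model; the feature covariances are synthetic.

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive definite matrix,
    via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_dist(C1, C2):
    """Log-Euclidean distance: ordinary (Frobenius) distance after
    mapping the SPD matrices into the log-domain vector space."""
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2))

rng = np.random.default_rng(9)
F = rng.normal(size=(100, 5))
C1 = F.T @ F / 100                 # covariance of some image features
C2 = C1 + 0.05 * np.eye(5)         # slightly perturbed appearance
d_same = log_euclidean_dist(C1, C1)
d_diff = log_euclidean_dist(C1, C2)
print(d_same, d_diff)
```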
3D deformable image matching: a hierarchical approach over nested subspaces
NASA Astrophysics Data System (ADS)
Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul
2000-06-01
This paper presents a fast hierarchical method to perform dense deformable inter-subject matching of 3D MR images of the brain. To recover the complex morphological variations in neuroanatomy, a hierarchy of 3D deformation fields is estimated by minimizing a global energy function over a sequence of nested subspaces. The nested subspaces, generated from a single scaling function, consist of deformation fields constrained at different scales. The highly nonlinear energy function, describing the interactions between the target and the source images, is minimized using a coarse-to-fine continuation strategy over this hierarchy. The resulting deformable matching method shows low sensitivity to local minima and is able to track large nonlinear deformations with moderate computational load. The performance of the approach is assessed both on simulated 3D transformations and on a real database of 3D brain MR images from different individuals. The method has proven efficient at bringing the principal anatomical structures of the brain into correspondence. An application to atlas-based MRI segmentation, by transporting a labeled segmentation map onto patient data, is also presented.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and evaluates its performance in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. MUSIC requires computing the eigenvectors of a correlation matrix, which often incurs high computational costs. For a moving source this becomes a crucial drawback, because the estimation must be repeated at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatial-temporal non-stationarity, the matrix has to be estimated from only a few observed samples, which degrades the estimation accuracy. In this paper, PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the subspace. Because PAST does not require an eigen-decomposition, the computational costs can be reduced. Several experimental results in actual room environments demonstrate the superior performance of the proposed method.
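A compact real-valued PAST recursion (after Yang's original formulation; the paper works with complex microphone-array data, and the noise-free snapshots below are an assumption for illustration) shows how the signal subspace is tracked sample by sample without any eigen-decomposition:

```python
import numpy as np

# Minimal real-valued PAST sketch: recursively track an r-dimensional
# signal subspace from streaming snapshots, with no eigen-decomposition.
def past_update(W, P, x, beta=0.97):
    y = W.T @ x                        # project snapshot onto current basis
    h = P @ y
    g = h / (beta + y @ h)             # gain vector
    P = (P - np.outer(g, h)) / beta    # update inverse correlation estimate
    P = (P + P.T) / 2                  # keep the estimate symmetric
    e = x - W @ y                      # projection residual
    W = W + np.outer(e, g)             # rank-one basis update
    return W, P

rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.standard_normal((4, 2)))[0]  # true 2-D subspace
W = np.linalg.qr(rng.standard_normal((4, 2)))[0]      # random initial guess
P = np.eye(2)
for _ in range(500):
    x = basis @ rng.standard_normal(2)                # noise-free snapshot
    W, P = past_update(W, P, x)

# Measure alignment of the tracked subspace with the true one
Q = np.linalg.qr(W)[0]
err = np.linalg.norm(basis - Q @ (Q.T @ basis))
```

Each update costs O(nr) operations for an n-sensor array and r-dimensional subspace, versus the O(n^3) eigen-decomposition that batch MUSIC would repeat at every observation time.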
A Tensor-Based Subspace Approach for Bistatic MIMO Radar in Spatial Colored Noise
Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang
2014-01-01
In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method. PMID:24573313
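The rotational-invariance step at the heart of ESPRIT can be sketched in its simplest matrix form (a 1-D uniform linear array with hypothetical angles, not the full bistatic MIMO tensor pipeline with HOSVD): the signal subspace of two overlapping subarrays differs only by a diagonal rotation whose eigenvalues encode the angles.

```python
import numpy as np

# Illustrative 1-D ESPRIT sketch: recover source angles from the rotational
# invariance between two overlapping subarrays of a uniform linear array.
rng = np.random.default_rng(2)
M, K, N = 8, 2, 200                      # sensors, sources, snapshots
true_deg = np.array([-10.0, 25.0])       # hypothetical source directions
d = 0.5                                  # element spacing in wavelengths
A = np.exp(2j * np.pi * d
           * np.outer(np.arange(M), np.sin(np.deg2rad(true_deg))))
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
noise = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
X = A @ S + 0.01 * noise                 # array snapshots at high SNR

# Signal subspace: K dominant eigenvectors of the sample covariance
R = X @ X.conj().T / N
w, V = np.linalg.eigh(R)
Es = V[:, -K:]

# Rotational invariance: Es(2:M) = Es(1:M-1) @ Phi, eig(Phi) -> angles
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
eigs = np.linalg.eigvals(Phi)
est_deg = np.sort(np.rad2deg(np.arcsin(np.angle(eigs) / (2 * np.pi * d))))
```

The eigenvalues of Phi lie on the unit circle at phases 2*pi*d*sin(theta), so the angle estimates are paired with the eigenvalues automatically, which is the property the tensor-based method above inherits for DOD/DOA pairing.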
NASA Astrophysics Data System (ADS)
Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.
2015-12-01
We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
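The active-subspace construction can be sketched in a few lines (a hypothetical toy quantity of interest with an analytic gradient, not the scramjet simulation): eigendecompose the average outer product of output gradients, and the dominant eigenvector defines the single active variable.

```python
import numpy as np

# Toy active-subspace sketch: the quantity of interest varies only along one
# hidden direction w_true, and the gradient-covariance eigendecomposition
# recovers that direction from samples.
rng = np.random.default_rng(3)
m = 7                                    # number of physical input parameters
w_true = rng.standard_normal(m)
w_true /= np.linalg.norm(w_true)

def f_grad(x):
    # toy quantity of interest f(x) = sin(w_true . x); analytic gradient
    return np.cos(w_true @ x) * w_true

X = rng.uniform(-1, 1, size=(68, m))     # 68 samples, matching the study
C = np.mean([np.outer(f_grad(x), f_grad(x)) for x in X], axis=0)
evals, evecs = np.linalg.eigh(C)
active_dir = evecs[:, -1]                # dominant eigenvector: active variable
```

Once the active direction is known, the seven physical parameters can be replaced by the single derived variable w_true . x, which is exactly the reparameterization that makes the uncertainty quantification above affordable.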
Stochastic subspace identification for operational modal analysis of an arch bridge
NASA Astrophysics Data System (ADS)
Loh, Chin-Hsiung; Chen, Ming-Che; Chao, Shu-Hsien
2012-04-01
In this paper the application of an output-only system identification technique, known as Stochastic Subspace Identification (SSI), to civil infrastructures is carried out. The ability of covariance-driven stochastic subspace identification (SSI-COV) is demonstrated through the analysis of ambient data of an arch bridge under operational conditions. A newly developed signal processing technique, Singular Spectrum Analysis (SSA), capable of smoothing noisy signals, is adopted for pre-processing the recorded data before the SSI. The conjunction of SSA and SSI-COV provides a useful criterion for determining the system order. With the aim of estimating accurate modal parameters of the structure in off-line analysis, a stabilization diagram is constructed by plotting the identified poles of the system while increasing the size of the data Hankel matrix. The identification of a real structure, the Guandu Bridge, is carried out to identify the system's natural frequencies and mode shapes. The uncertainty of the identified modal parameters from output-only measurements of the bridge under operational conditions, such as temperature and traffic loading, is discussed.
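A compact SSI-COV sketch on a simulated single-mode response (an assumption for illustration; the paper applies the method to multi-channel bridge data with SSA pre-processing) shows the pipeline: output covariances, block Hankel matrix, SVD, observability matrix, and modal frequency from the state matrix.

```python
import numpy as np
from scipy.linalg import expm

# SSI-COV sketch: identify the natural frequency of a lightly damped
# oscillator driven by white noise, using only its measured output.
fs, f_true, zeta = 100.0, 2.0, 0.02
rng = np.random.default_rng(4)
wn = 2 * np.pi * f_true
A_ct = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
Ad = expm(A_ct / fs)                       # exact discretization at 100 Hz
x = np.zeros(2)
y = np.empty(20000)
for k in range(y.size):
    x = Ad @ x + np.array([0.0, rng.standard_normal()])  # white-noise input
    y[k] = x[0]                                          # measured output

# Output covariances R_i stacked into a (here scalar-block) Hankel matrix
i_max = 50
R = np.array([np.dot(y[:-lag], y[lag:]) / (y.size - lag)
              for lag in range(1, 2 * i_max + 1)])
H = np.array([[R[a + b] for b in range(i_max)] for a in range(i_max)])

# SVD -> observability matrix -> state matrix -> modal frequency
U, s, Vt = np.linalg.svd(H)
n = 2                                      # model order: one mode
O = U[:, :n] * np.sqrt(s[:n])              # observability matrix
Ahat = np.linalg.pinv(O[:-1]) @ O[1:]      # shift-invariance of O
lam = np.log(np.linalg.eigvals(Ahat)) * fs # continuous-time poles
f_est = float(np.abs(lam[0]) / (2 * np.pi))
```

In practice the model order n is not known, which is why the stabilization diagram described above is constructed by repeating this extraction while the Hankel matrix grows.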
Semi-Supervised Projective Non-Negative Matrix Factorization for Cancer Classification.
Zhang, Xiang; Guan, Naiyang; Jia, Zhilong; Qiu, Xiaogang; Luo, Zhigang
2015-01-01
Advances in DNA microarray technologies have made gene expression profiles a significant candidate in identifying different types of cancers. Traditional learning-based cancer identification methods utilize labeled samples to train a classifier, but they are inconvenient for practical application because labels are quite expensive in the clinical cancer research community. This paper proposes a semi-supervised projective non-negative matrix factorization method (Semi-PNMF) to learn an effective classifier from both labeled and unlabeled samples, thus boosting subsequent cancer classification performance. In particular, Semi-PNMF jointly learns a non-negative subspace from concatenated labeled and unlabeled samples and indicates classes by the positions of the maximum entries of their coefficients. Because Semi-PNMF incorporates statistical information from the large volume of unlabeled samples in the learned subspace, it can learn more representative subspaces and boost classification performance. We developed a multiplicative update rule (MUR) to optimize Semi-PNMF and proved its convergence. The experimental results of cancer classification for two multiclass cancer gene expression profile datasets show that Semi-PNMF outperforms the representative methods.
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
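The computational payoff of the factorization can be illustrated on the proximal step common to NNROP solvers, singular value thresholding (a simplified sketch, not the paper's full ALM algorithm): once an orthonormal basis Q of the active subspace is available, thresholding the large matrix reduces to thresholding a small factor.

```python
import numpy as np

# Sketch: SVT of a large low-rank matrix Z equals Q times the SVT of the
# small factor Q^T Z, where Q is an orthonormal basis of Z's column space.
def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

rng = np.random.default_rng(6)
Z = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 400))  # rank 5

# Large-scale route: SVD of the full 500 x 400 matrix
direct = svt(Z, tau=1.0)

# Active-subspace route: orthonormal basis of the column space (500 x 5)
# via a randomized range finder, then SVT on the small 5 x 400 factor only
Q = np.linalg.qr(Z @ rng.standard_normal((400, 5)))[0]
factored = Q @ svt(Q.T @ Z, tau=1.0)
```

The small-factor SVD costs O(r^2 n) instead of O(mn * min(m, n)) for the full matrix, which is the source of the speedups reported above when the solution rank r is far below the matrix dimensions.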