Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes in the correlations of the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, a PSA algorithm governs the "behavior" of all output neurons. On the slower scale, the output neurons compete to fulfill their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method. PMID:15593379
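The Subspace Learning Algorithm that this paper modifies is commonly written as Oja's symmetric subspace rule. The sketch below shows that base rule only (not the TOHM modification); the dimensions, learning rate, and diagonal covariance are illustrative assumptions.

```python
import numpy as np

def sla_update(W, x, lr):
    """One step of the symmetric Subspace Learning Algorithm (Oja's subspace rule).

    W: (d, k) weight matrix whose columns span the estimated principal subspace.
    x: (d,) input sample; lr: learning rate.
    """
    y = W.T @ x                                  # outputs of the k neurons
    return W + lr * np.outer(x - W @ y, y)       # Hebbian term minus implicit decay

rng = np.random.default_rng(0)
d, k = 5, 2
variances = np.array([5.0, 4.0, 0.1, 0.1, 0.1])  # principal subspace = first 2 axes
W = 0.1 * rng.standard_normal((d, k))
for _ in range(20000):
    x = rng.standard_normal(d) * np.sqrt(variances)
    W = sla_update(W, x, lr=0.005)

P = W @ W.T   # projector onto the learned subspace
```

As the abstract notes for PSA methods generally, the rule converges to an orthonormal basis of the principal subspace but not to the individual eigenvectors; `P` approaches the projector onto the top-k eigenspace while the columns of `W` remain an arbitrary rotation within it.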
A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.
Van de Sompel, Dominique; Garai, Ellis; Zavaleta, Cristina; Gambhir, Sanjiv Sam
2012-01-01
Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles. PMID:22723895
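The classical least squares baseline that this paper extends (here with the low-order polynomial residual model the authors use as a comparator) can be sketched as follows. The Gaussian "reference spectra", channel count, and concentrations are synthetic stand-ins, not the authors' data.

```python
import numpy as np

def cls_unmix(measured, references, poly_order=3):
    """Classical least squares: fit a measured spectrum as a linear combination
    of analyte reference spectra plus a low-order polynomial background."""
    n_channels, n_analytes = references.shape
    t = np.linspace(-1, 1, n_channels)
    background = np.vander(t, poly_order + 1)    # polynomial residual model
    design = np.hstack([references, background])
    coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
    return coeffs[:n_analytes]                   # analyte concentration estimates

rng = np.random.default_rng(1)
n_channels = 200
t = np.linspace(0, 1, n_channels)
# two narrow synthetic Gaussian "reference spectra"
refs = np.stack([np.exp(-(t - 0.3) ** 2 / 0.002),
                 np.exp(-(t - 0.7) ** 2 / 0.002)], axis=1)
true_c = np.array([2.0, 0.5])
measured = refs @ true_c + 0.3 * t ** 2 + 0.01 * rng.standard_normal(n_channels)

est = cls_unmix(measured, refs)
```

The hybrid method of the paper would additionally append principal components of the reference-spectrum training sets to the design matrix, with coefficients constrained by the PCA eigenvalues; that constraint is omitted here.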
NASA Astrophysics Data System (ADS)
Greco, Mario; Huebner, Claudia; Marchi, Gabriele
2008-10-01
In the field of blind image deconvolution, a promising new algorithm based on Principal Component Analysis (PCA) has recently been proposed in the literature. Its main advantages are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution, or IBD, method); it is robust to white noise; and only the support of the blurring point spread function is required to perform single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while the multiple-observation case (i.e., when multiple degraded observations of a scene are available) is completely unsupervised. The effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm is proposed; the previously unexplored assessment issue is then considered, and the results achieved are compared with those of the IBD method, which is used as a benchmark.
Kernel Near Principal Component Analysis
MARTIN, SHAWN B.
2002-07-01
We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
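The nonlinear generalization described here follows the standard kernel PCA template: eigendecompose a double-centered Gram matrix. The sketch below shows that generic template, not the authors' Gram-Schmidt approximation; it also verifies a useful sanity check, namely that with a linear kernel the scores coincide with ordinary PCA scores up to sign.

```python
import numpy as np

def kernel_pca(X, kernel, n_components):
    """Kernel PCA: eigendecompose the centered Gram matrix and return the
    projections of the training points onto the top components."""
    n = X.shape[0]
    K = np.array([[kernel(a, b) for b in X] for a in X])
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                    # double-center the Gram matrix
    vals, vecs = np.linalg.eigh(Kc)                   # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.clip(vals, 0, None))     # scores = sqrt(lambda) * eigvec

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 3)) @ np.diag([3.0, 1.0, 0.2])

# With a linear kernel, kernel PCA reproduces ordinary PCA scores (up to sign).
scores_k = kernel_pca(X, kernel=np.dot, n_components=2)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores_pca = Xc @ Vt[:2].T
```

Swapping `np.dot` for an RBF or polynomial kernel gives the nonlinear behavior the abstract describes.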
NASA Astrophysics Data System (ADS)
Li, Can; Joiner, Joanna; Krotkov, Nickolay; Fioletov, Vitali; McLinden, Chris
2015-04-01
We report on the latest progress in the development and application of a new trace gas retrieval algorithm for spaceborne UV-VIS spectrometers. Developed at NASA Goddard Space Flight Center, this algorithm uses the principal component analysis (PCA) technique to extract a series of spectral features (principal components, or PCs) explaining the variance of measured reflectance spectra. For a species of interest that has no or only a very small background signal, such as SO2 or HCHO, the leading PCs (those explaining the most variance) obtained over clean areas are generally associated with various physical processes (e.g., ozone absorption, rotational Raman scattering) and measurement details (e.g., wavelength shift) other than the signal of interest. By fitting these PCs and pre-computed Jacobians for the target species to a measured radiance spectrum, we can then estimate its atmospheric loading. The PCA algorithm has been operationally implemented to produce the new-generation NASA Aura/OMI standard planetary boundary layer (PBL) SO2 product. Comparison with the previous OMI PBL SO2 product indicates that the PCA algorithm reduces the retrieval noise by a factor of two and greatly improves the data quality, allowing detection of smaller point SO2 pollution sources that have not previously been measured from space. We have also demonstrated the algorithm for SO2 retrievals using the new NASA/NOAA S-NPP/OMPS UV spectrometer. For HCHO, the new algorithm shows great promise, as evidenced by results obtained from both OMI and OMPS. Finally, we discuss the most recent progress in the algorithm development, including the implementation of a new Jacobian lookup table to more appropriately account for the sensitivity of satellite sensors to various measurement conditions (e.g., viewing geometry, surface reflectance and cloudiness).
Fast Steerable Principal Component Analysis
Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit
2016-01-01
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL³ + L⁴), while existing algorithms take O(nL⁴). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801
Maktabdar Oghaz, Mahdi; Maarof, Mohd Aizaini; Zainal, Anazida; Rohani, Mohd Foad; Yaghoubyan, S Hadi
2015-01-01
Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite substantial research efforts in this area, choosing a color space that maximizes skin and face classification performance while coping with illumination variations, differing camera characteristics and the diversity of skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space, termed SKN, built by employing a Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic finds the optimal combination of color components in terms of skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution onto a lower-dimensional space. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared with several existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that, using the Random Forest classifier, the proposed SKN color space obtained an average F-score and true positive rate of 0.953 and a false positive rate of 0.0482, outperforming the existing color spaces in pixel-wise skin detection accuracy. The results also indicate that, among the classifiers used in this study, Random Forest is the most suitable for pixel-wise skin detection applications. PMID:26267377
NASA Astrophysics Data System (ADS)
Silveira, Landulfo, Jr.; Silveira, Fabrício L.; Bodanese, Benito; Pacheco, Marcos Tadeu T.; Zângaro, Renato A.
2012-02-01
This work demonstrates the discrimination between basal cell carcinoma (BCC) and normal human skin in vivo using near-infrared Raman spectroscopy. Spectra were obtained from the suspected lesion prior to resection surgery. After tissue excision, biopsy fragments were submitted for histopathology. Spectra were also obtained from the adjacent, clinically normal skin. Raman spectra were measured using a Raman spectrometer (830 nm) with a fiber Raman probe. Comparing the mean spectra of BCC with those of normal skin revealed important differences in the 800-1000 cm⁻¹ and 1250-1350 cm⁻¹ regions (C-C and amide III vibrations, respectively, from lipids and proteins). A discrimination algorithm based on Principal Components Analysis and Mahalanobis distance (PCA/MD) could discriminate the spectra of the two tissues with high sensitivity and specificity.
Principal component analysis implementation in Java
NASA Astrophysics Data System (ADS)
Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz
2015-09-01
In this paper we show how the PCA (Principal Component Analysis) method can be implemented in the Java programming language. We consider using the PCA algorithm primarily for analyzing data obtained from Raman spectroscopy measurements, but other applications of the developed software should also be possible. Our goal is to create a general-purpose PCA application, ready to run on any platform supported by Java.
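Whatever the implementation language, the core computation such a PCA application performs reduces to a few linear-algebra steps: center the data, form the covariance matrix, eigendecompose it, and project. This Python sketch shows those steps on illustrative synthetic "spectra"; a Java version would follow the same sequence.

```python
import numpy as np

def pca(data, n_components):
    """Plain PCA: center, form the covariance matrix, eigendecompose,
    and project onto the leading eigenvectors."""
    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / (len(data) - 1)
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    components = vecs[:, order]                   # leading eigenvectors as columns
    scores = centered @ components
    explained = vals[order] / vals.sum()          # fraction of variance explained
    return scores, components, explained

rng = np.random.default_rng(3)
# synthetic stand-in for a set of measured spectra: 50 samples, 20 channels
spectra = rng.standard_normal((50, 20)) @ np.diag(np.linspace(4, 0.1, 20))
scores, components, explained = pca(spectra, n_components=3)
```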
Alán, Lukáš; Špaček, Tomáš; Ježek, Petr
2016-07-01
Data segmentation and object rendering are required for localization super-resolution microscopy, fluorescent photoactivation localization microscopy (FPALM), and direct stochastic optical reconstruction microscopy (dSTORM). We developed and validated methods for segmenting objects based on Delaunay triangulation in 3D space, followed by facet culling. We applied them to visualize mitochondrial nucleoids, which confine DNA in complexes with mitochondrial (mt) transcription factor A (TFAM) and gene expression machinery proteins, such as mt single-stranded-DNA-binding protein (mtSSB). Eos2-conjugated TFAM visualized nucleoids in HepG2 cells, which was compared with dSTORM 3D immunocytochemistry of TFAM, mtSSB, or DNA. The localized fluorophores of FPALM/dSTORM data were segmented using Delaunay triangulation into polyhedron models and by principal component analysis (PCA) into general PCA ellipsoids. The PCA ellipsoids were normalized to the smoothed volume of the polyhedrons, or by the net unsmoothed Delaunay volume, and remodeled into rotational ellipsoids to obtain models termed DVRE. The most frequent ellipsoid nucleoid model size was 35 × 45 × 95 nm for nucleoids imaged via TFAM; 35 × 45 × 75 nm for mtDNA cores; and 25 × 45 × 100 nm for nucleoids imaged via mtSSB. Nucleoids encompassed different point densities and wide size ranges, speculatively due to different activity stemming from different TFAM/mtDNA stoichiometry/density. Considering the twofold lower axial vs. lateral resolution, only bulky DVRE models with an aspect ratio >3 and tilted toward the xy-plane were considered to be two proximal nucleoids, suspected of arising from division following mtDNA replication. The existence of proximal nucleoids in mtDNA-dSTORM 3D images of mtDNA "doubling" supports the possibility of directly observing mt nucleoid division after mtDNA replication. PMID:26846371
NASA Technical Reports Server (NTRS)
Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.
2013-01-01
We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
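The one-step fit described above can be sketched as an ordinary least-squares problem: background PCs derived from "clean" spectra and a precomputed target-gas Jacobian are fitted jointly to a measured radiance spectrum, and the Jacobian coefficient is the column estimate. All spectra below are synthetic stand-ins; the Gaussian "Jacobian" and sinusoidal background are illustrative assumptions, not radiative-transfer output.

```python
import numpy as np

def pca_retrieval(measured, background_spectra, jacobian, n_pcs):
    """Fit leading background PCs plus the target-gas Jacobian to a measured
    radiance spectrum; the Jacobian coefficient estimates the column density."""
    mean = background_spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(background_spectra - mean, full_matrices=False)
    pcs = Vt[:n_pcs].T                           # (n_wavelengths, n_pcs)
    design = np.hstack([pcs, jacobian[:, None]])
    coeffs, *_ = np.linalg.lstsq(design, measured - mean, rcond=None)
    return coeffs[-1]                            # estimated column amount

rng = np.random.default_rng(4)
n_wl, n_bg = 120, 60
wl = np.linspace(310.5, 340.0, n_wl)
# synthetic "clean area" background spectra with smooth, varying structure
bg = np.stack([a * np.sin(wl / 5 + p) for p, a in
               zip(rng.uniform(0, 2, n_bg), rng.uniform(0.8, 1.2, n_bg))])
jac = np.exp(-(wl - 313.0) ** 2 / 4.0)           # stand-in SO2 Jacobian
true_vcd = 2.5
measured = 1.05 * bg[0] + true_vcd * jac + 0.001 * rng.standard_normal(n_wl)

est = pca_retrieval(measured, bg, jac, n_pcs=5)
```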
Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Vu; Duong, Tuan
2005-01-01
A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
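The abstract does not give DOGEDYN's dominant-element selection or dynamic-learning-rate formulas, so the sketch below shows only the generic family it belongs to: sequential extraction of principal vectors by a gradient (Oja-type) rule, with deflation compensating for previously extracted components. Dimensions, learning rate, and variances are illustrative assumptions.

```python
import numpy as np

def extract_pc(samples, d, lr, n_steps, rng):
    """Estimate the leading principal vector with Oja's single-neuron
    gradient rule: w += lr * y * (x - y * w), where y = w . x."""
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n_steps):
        x = next(samples)
        y = w @ x
        w += lr * y * (x - y * w)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(5)
variances = np.array([5.0, 2.0, 0.5])

def stream():
    """Endless stream of input vectors acquired at sampling times."""
    while True:
        yield rng.standard_normal(3) * np.sqrt(variances)

def deflated(samples, w):
    """Remove an already-extracted component from each incoming sample."""
    for x in samples:
        yield x - (w @ x) * w

gen = stream()
w1 = extract_pc(gen, 3, lr=0.01, n_steps=20000, rng=rng)           # first PC
w2 = extract_pc(deflated(gen, w1), 3, lr=0.01, n_steps=20000, rng=rng)  # second PC
```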
Complex Principal Components for Robust Motion Estimation
Mauldin, F. William; Viola, Francesco; Walker, William F.
2010-01-01
Bias and variance errors in motion estimation result from electronic noise, decorrelation, aliasing, and inherent algorithm limitations. Unlike most error sources, decorrelation is coherent over time and has the same power spectrum as the signal. Thus, reducing decorrelation is impossible through frequency domain filtering or simple averaging and must be achieved through other methods. In this paper, we present a novel motion estimator, termed the principal component displacement estimator (PCDE), which takes advantage of the signal separation capabilities of principal component analysis (PCA) to reject decorrelation and noise. Furthermore, PCDE only requires the computation of a single principal component, enabling computational speed that is on the same order of magnitude or faster than the commonly used Loupas algorithm. Unlike prior PCA strategies, PCDE uses complex data to generate motion estimates using only a single principal component. The use of complex echo data is critical because it allows for separation of signal components based on motion, which is revealed through phase changes of the complex principal components. PCDE operates on the assumption that the signal component of interest is also the most energetic component in an ensemble of echo data. This assumption holds in most clinical ultrasound environments. However, in environments where electronic noise SNR is less than 0 dB or in blood flow data for which the wall signal dominates the signal from blood flow, the calculation of more than one PC is required to obtain the signal of interest. We simulated synthetic ultrasound data to assess the performance of PCDE over a wide range of imaging conditions and in the presence of decorrelation and additive noise. Under typical ultrasonic elasticity imaging conditions (0.98 signal correlation, 25 dB SNR, 1 sample shift), PCDE decreased estimation bias by more than 10% and standard deviation by more than 30% compared with the Loupas method and normalized
Nonlinear principal component analysis of climate data
Boyle, J.; Sengupta, S.
1995-06-01
This paper presents the details of the nonlinear principal component analysis of climate data. Topics discussed include: the connection with principal component analysis; network architecture; analysis of the standard routine (PRINC); and results.
Nonlinear Principal Components Analysis: Introduction and Application
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.
2007-01-01
The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…
Improvements for Image Compression Using Adaptive Principal Component Extraction (APEX)
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1997-01-01
The issues of image compression and pattern classification have been a primary focus of researchers in a variety of fields, including signal and image processing, pattern recognition, and data classification. These issues depend on finding an efficient representation of the source data. In this paper we collate our earlier results, where we introduced the application of the Hilbert scan to a principal component analysis (PCA) algorithm with the Adaptive Principal Component Extraction (APEX) neural network model. We apply these techniques to medical imaging, particularly image representation and compression. We apply the Hilbert scan to the APEX algorithm to improve results.
Sparse principal component analysis by choice of norm
Luo, Ruiyan; Zhao, Hongyu
2012-01-01
Recent years have seen the development of several methods for sparse principal component analysis, owing to its importance in the analysis of high-dimensional data. Despite demonstrations of their usefulness in practical applications, these methods are limited by a lack of orthogonality in the loadings (coefficients) of different principal components, correlation among the principal components, expensive computation, and a lack of theoretical results such as consistency in high-dimensional settings. In this paper, we propose a new sparse principal component analysis method by introducing a new norm to replace the usual norm in traditional eigenvalue problems, and propose an efficient iterative algorithm to solve the optimization problems. With this method, we can efficiently obtain uncorrelated principal components or orthogonal loadings, and achieve the goal of explaining a high percentage of variation with sparse linear combinations. Due to the strict convexity of the new norm, we can prove the convergence of the iterative method and provide a detailed characterization of the limits. We also prove that the obtained principal component is consistent for a single-component model in high-dimensional settings. As illustration, we apply this method to real gene expression data with competitive results. PMID:23524453
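The paper's norm-based formulation is not spelled out in the abstract; to show what "sparse loadings" means operationally, here is a different, deliberately simple illustrative scheme: power iteration with soft thresholding, warm-started at the dense leading eigenvector. The dimensions, sparsity pattern, and threshold are assumptions.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pc(C, lam, n_iter=100):
    """Leading sparse principal component via soft-thresholded power iteration
    (an illustrative scheme, not the norm-based method of the paper)."""
    w = np.linalg.eigh(C)[1][:, -1]        # warm start at the dense leading eigenvector
    for _ in range(n_iter):
        w = soft_threshold(C @ w, lam)     # power step, then shrink small loadings
        nrm = np.linalg.norm(w)
        if nrm == 0:
            raise ValueError("lam too large: all loadings thresholded to zero")
        w /= nrm
    return w

rng = np.random.default_rng(6)
d, n = 10, 500
v = np.zeros(d)
v[:3] = 1 / np.sqrt(3)                     # true direction is sparse: 3 of 10 loadings
X = np.outer(rng.standard_normal(n), v) * 3 + 0.5 * rng.standard_normal((n, d))
C = X.T @ X / n                            # sample covariance
w = sparse_pc(C, lam=0.5)
```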
Principal component analysis of phenolic acid spectra
Technology Transfer Automated Retrieval System (TEKTRAN)
Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...
A principal components model of soundscape perception.
Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta
2010-11-01
There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes. PMID:21110579
PROJECTED PRINCIPAL COMPONENT ANALYSIS IN FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Wang, Weichen
2016-01-01
This paper introduces a Projected Principal Component Analysis (Projected-PCA), which applies principal component analysis to the data matrix projected (smoothed) onto a given linear space spanned by covariates. When applied to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be estimated more accurately than with conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into a component that can be explained by subject-specific covariates and an orthogonal residual component. The covariates’ effects on the factor loadings are further modeled by an additive model via sieve approximations. Using the newly proposed Projected-PCA, we obtain rates of convergence for the smooth factor loading matrices that are much faster than those of conventional factor analysis. The convergence is achieved even when the sample size is finite and is particularly appealing in the high-dimension-low-sample-size situation. This leads us to develop nonparametric tests of whether observed covariates have explanatory power for the loadings and whether they fully explain the loadings. The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index. PMID:26783374
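The projection step can be sketched in a few lines: smooth the data matrix onto a sieve (here polynomial) basis in a per-variable covariate, then take the leading principal component of the smoothed matrix. The dimensions, the loading function, and the noise level are illustrative assumptions.

```python
import numpy as np

def projected_pca(Y, X, basis_order=3):
    """Project the p x n data matrix Y onto the space spanned by sieve basis
    functions of the p covariates X, then take the leading PC of the result."""
    Phi = np.vander(X, basis_order + 1)        # polynomial sieve basis, (p, 4)
    P = Phi @ np.linalg.pinv(Phi)              # projection matrix, (p, p)
    Y_proj = P @ Y                             # smoothed (denoised) data
    U, s, Vt = np.linalg.svd(Y_proj, full_matrices=False)
    return U[:, 0], Vt[0]                      # loading estimate, factor estimate

rng = np.random.default_rng(7)
p, n = 100, 30                                 # many variables, few samples
X = rng.uniform(-1, 1, p)                      # one covariate per variable
loadings = 1 + X - X ** 2                      # loadings explained by the covariate
f = rng.standard_normal(n)                     # latent factor
Y = np.outer(loadings, f) + 0.8 * rng.standard_normal((p, n))

lhat, fhat = projected_pca(Y, X)
```

Because the true loadings lie in the span of the sieve basis, the projection keeps the factor structure while discarding most of the noise, which is the mechanism behind the faster convergence rates claimed in the abstract.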
Insights Into Categorization Of Solar Flares Using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Balasubramaniam, K. S.; Norquist, D. C.
2012-05-01
Using time sequences of solar chromospheric images acquired with the USAF/NSO Improved Solar Observing Network (ISOON) prototype telescope, we have applied principal component analysis (PCA) to time series of both erupting and non-erupting active regions. Our primary purpose is to develop an advanced data-driven model for solar flare prediction using machine learning algorithms, with principal components as the input. Using the principal components, we show a clear separation in the eigenvectors. The eigenvectors fall into three major flaring categories: weak flares (GOES peak intensity < C4.0), intermediate flares (GOES peak intensity between C4.0 and C8.0), and strong flares (GOES peak intensity > C8.0). In this paper, we provide insights into the implications for the underlying physical mechanisms that describe these three distinct categories. This work was funded by the U.S. Air Force Office of Scientific Research (AFOSR).
Hockey sticks, principal components, and spurious significance
NASA Astrophysics Data System (ADS)
McIntyre, Stephen; McKitrick, Ross
2005-02-01
The "hockey stick" shaped temperature reconstruction of Mann et al. (1998, 1999) has been widely applied. However, it has not previously been noted in print that, prior to their principal components (PCs) analysis on tree ring networks, they carried out an unusual data transformation which strongly affects the resulting PCs. Their method, when tested on persistent red noise, nearly always produces a hockey stick shaped first principal component (PC1) and overstates the first eigenvalue. In the controversial 15th century period, the MBH98 method effectively selects only one species (bristlecone pine) into the critical North American PC1, making it implausible to describe it as the "dominant pattern of variance". Through Monte Carlo analysis, we show that MBH98 benchmarks for significance of the Reduction of Error (RE) statistic are substantially understated and, using a range of cross-validation statistics, we show that the MBH98 15th century reconstruction lacks statistical significance.
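The effect described above can be reproduced in a few lines: PCA on persistent red noise with short-segment ("calibration period") centering tends to promote hockey-stick-shaped series into PC1, while conventional full-period centering does not. The series count, AR(1) coefficient, and segment length below are illustrative choices, not MBH98's exact settings.

```python
import numpy as np

rng = np.random.default_rng(8)
n_series, n_years, calib = 50, 400, 80

# Persistent AR(1) "red noise" proxies containing no real climate signal.
noise = rng.standard_normal((n_series, n_years))
series = np.zeros_like(noise)
for t in range(1, n_years):
    series[:, t] = 0.9 * series[:, t - 1] + noise[:, t]

def pc1(data, center):
    """First principal component (time pattern) after subtracting `center`."""
    X = data - center[:, None]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0]

# Conventional centering vs. centering on the calibration period only.
pc_full = pc1(series, series.mean(axis=1))
pc_short = pc1(series, series[:, -calib:].mean(axis=1))

def hockey_index(pc):
    """How strongly the calibration-period mean departs from the rest."""
    return abs(pc[-calib:].mean() - pc[:-calib].mean()) / pc.std()

i_full, i_short = hockey_index(pc_full), hockey_index(pc_short)
```

Short-segment centering inflates the variance of series whose calibration-period mean happens to wander away from their long-term mean, so PC1 preferentially captures that step-like "blade", exactly the mechanism the paper identifies.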
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, we show that the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597
Principal component analysis for designed experiments
2015-01-01
Background Principal component analysis is used to summarize matrix data, such as that found in transcriptome, proteome or metabolome studies and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality: because the size and directions of the components depend on the particular data set, the components are valid only within that data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight to all the samples in the matrix and treats them as independent. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to a unified size unit. Results The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and sharing of axes enabled comparisons of the components across experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the
Radar fall detection using principal component analysis
NASA Astrophysics Data System (ADS)
Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem
2016-05-01
Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.
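The eigen-image approach described here can be sketched generically: flatten each motion image, learn a PCA basis from training examples, and classify by nearest neighbor in the score space. This is an illustrative sketch, not the authors' radar pipeline; the function name, data shapes, and `k` are assumptions.

```python
import numpy as np

def eigen_image_classify(train, labels, probes, k=3):
    """Nearest-neighbor classification in the space of the top-k eigen images.

    train: (n, n_pixels) flattened training motion images, labels: (n,),
    probes: (m, n_pixels) flattened images to classify. Illustrative only.
    """
    mean = train.mean(axis=0)
    Xc = train - mean
    # Eigen images are the right singular vectors of the centered training matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                                  # eigen images as rows
    train_scores = Xc @ W.T
    probe_scores = (probes - mean) @ W.T
    # Assign each probe the label of its nearest training score vector
    d = np.linalg.norm(train_scores[None, :, :] - probe_scores[:, None, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```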
The principal components of response strength.
Killeen, P R; Hall, S S
2001-01-01
As Skinner (1938) described it, response strength is the "state of the reflex with respect to all its static properties" (p. 15), which include response rate, latency, probability, and persistence. The relations of those measures to one another was analyzed by probabilistically reinforcing, satiating, and extinguishing pigeons' key pecking in a trials paradigm. Reinforcement was scheduled according to variable-interval, variable-ratio, and fixed-interval contingencies. Principal components analysis permitted description in terms of a single latent variable, strength, and this was validated with confirmatory factor analyses. Overall response rate was an excellent predictor of this state variable. PMID:11394483
BBH Classification Using Principal Component Analysis
NASA Astrophysics Data System (ADS)
Shoemaker, Deirdre; Cadonati, Laura; Clark, James; Day, Brian; Jeng, Ik Siong; Lombardi, Alexander; London, Lionel; Mangini, Nicholas; Logue, Josh
2015-04-01
Binary black holes will inspiral, merge and ringdown in the LIGO/VIRGO band for an interesting range of total masses. We present an update on our approach of using Principal Component Analysis to build models of NR BBH waveforms that focus on the merger for generic BBH signals. These models are intended to be used to conduct coarse parameter estimation for gravitational wave burst candidate events. The proposed benefit is a fast, optimized catalog that classifies bulk features in the signal. NSFPHY-0955773, 0955825, SUPA and STFC UK. Simulations by NSF XSEDE PHY120016 and PHY090030.
Principal components analysis of Jupiter VIMS spectra
Bellucci, G.; Formisano, V.; D'Aversa, E.; Brown, R.H.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, C.; Chamberlain, M.C.; Hansen, G.; Hibbits, K.; Showalter, M.; Filacchione, G.
2004-01-01
During the Cassini-Jupiter flyby in December 2000, the Visual and Infrared Mapping Spectrometer (VIMS) took several image cubes of Jupiter at different phase angles and distances. We have analysed the spectral images acquired by the VIMS visual channel by means of a principal component analysis (PCA) technique. The original data set consists of 96 spectral images in the 0.35-1.05 μm wavelength range. The products of the analysis are new PC bands, which contain all the spectral variance of the original data. These new components have been used to produce a map of Jupiter made of seven coherent spectral classes. The map confirms previously published work done on the Great Red Spot using NIMS data. Some other new findings, presently under investigation, are presented. © 2004 Published by Elsevier Ltd on behalf of COSPAR.
Sparse principal component analysis in cancer research
Hsu, Ying-Lin; Huang, Po-Yu; Chen, Dung-Tsa
2015-01-01
A critical challenging component in analyzing high-dimensional data in cancer research is how to reduce the dimension of data and how to extract relevant features. Sparse principal component analysis (PCA) is a powerful statistical tool that could help reduce data dimension and select important variables simultaneously. In this paper, we review several approaches for sparse PCA, including variance maximization (VM), reconstruction error minimization (REM), singular value decomposition (SVD), and probabilistic modeling (PM) approaches. A simulation study is conducted to compare PCA and the sparse PCAs. An example using a published gene signature in a lung cancer dataset is used to illustrate the potential application of sparse PCAs in cancer research. PMID:26719835
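Of the approaches reviewed, the variance-maximization (VM) idea can be sketched with a soft-thresholded power iteration that extracts the first sparse loading vector. This is a minimal illustration; `alpha` (the sparsity level) and the stopping rule are assumptions, not taken from the paper.

```python
import numpy as np

def sparse_pc(X, alpha=0.1, n_iter=200):
    """First sparse loading vector via a soft-thresholded power iteration."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(Xc)                  # sample covariance matrix
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(n_iter):
        v = C @ v
        # Soft thresholding drives small loadings exactly to zero
        v = np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)
        nrm = np.linalg.norm(v)
        if nrm == 0:
            break
        v /= nrm
    return v
```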
Fault detection with principal component pursuit method
NASA Astrophysics Data System (ADS)
Pan, Yijun; Yang, Chunjie; Sun, Youxian; An, Ruqiao; Wang, Lin
2015-11-01
Data-driven approaches are widely applied for fault detection in industrial processes. Recently, a new method for fault detection called principal component pursuit (PCP) was introduced. PCP is not only robust to outliers but can also accomplish the objectives of model building, fault detection, fault isolation and process reconstruction simultaneously. PCP divides the data matrix into two parts: a fault-free low-rank matrix and a sparse matrix containing sensor noise and process faults. The statistics presented in this paper fully utilize the information in the data matrix. Since the low-rank matrix in PCP is similar to the principal components matrix in PCA, a T2 statistic is proposed for fault detection in the low-rank matrix; this statistic illustrates that PCP is more sensitive to small variations in the variables than PCA. In addition, for the sparse matrix, a new monitoring statistic for online fault detection with the PCP-based method is introduced. This statistic uses the mean and the correlation coefficients of the variables. Monte Carlo simulations and the Tennessee Eastman (TE) benchmark process are provided to illustrate the effectiveness of the monitoring statistics.
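The PCP decomposition that the statistics are built on can be sketched with the standard inexact augmented-Lagrangian iteration: singular-value thresholding for the low-rank part, elementwise soft thresholding for the sparse part. Parameter defaults follow the common convex-PCP conventions rather than anything specific to this paper.

```python
import numpy as np

def pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Split M into a low-rank part L and a sparse part S (M ~ L + S)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # Lagrange multiplier matrix
    normM = np.linalg.norm(M)
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * normM:
            break
    return L, S
```

On a synthetic low-rank-plus-sparse matrix, the iteration recovers both parts to high accuracy.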
Pierce, Karisa M.; Hope, Janiece L.; Johnson, Kevin J.; Wright, Bob W.; Synovec, Robert E.
2005-11-25
A fast and objective chemometric classification method is developed and applied to the analysis of gas chromatography (GC) data from five commercial gasoline samples. The gasoline samples serve as model mixtures, whereas the focus is on the development and demonstration of the classification method. The method is based on objective retention time alignment (referred to as piecewise alignment) coupled with analysis of variance (ANOVA) feature selection prior to classification by principal component analysis (PCA) using optimal parameters. The degree-of-class-separation is used as a metric to objectively optimize the alignment and feature selection parameters using a suitable training set thereby reducing user subjectivity, as well as to indicate the success of the PCA clustering and classification. The degree-of-class-separation is calculated using Euclidean distances between the PCA scores of a subset of the replicate runs from two of the five fuel types, i.e., the training set. The unaligned training set that was directly submitted to PCA had a low degree-of-class-separation (0.4), and the PCA scores plot for the raw training set combined with the raw test set failed to correctly cluster the five sample types. After submitting the training set to piecewise alignment, the degree-of-class-separation increased (1.2), but when the same alignment parameters were applied to the training set combined with the test set, the scores plot clustering still did not yield five distinct groups. Applying feature selection to the unaligned training set increased the degree-of-class-separation (4.8), but chemical variations were still obscured by retention time variation and when the same feature selection conditions were used for the training set combined with the test set, only one of the five fuels was clustered correctly. However, piecewise alignment coupled with feature selection yielded a reasonably optimal degree-of-class-separation for the training set (9.2), and when the
Investigating dark energy experiments with principal components
Crittenden, Robert G.; Zhao, Gong-Bo; Pogosian, Levon E-mail: levon@sfu.ca
2009-12-01
We use a principal component approach to contrast different kinds of probes of dark energy, and to emphasize how an array of probes can work together to constrain an arbitrary equation of state history w(z). We pay particular attention to the role of the priors in assessing the information content of experiments and propose using an explicit prior on the degree of smoothness of w(z) that is independent of the binning scheme. We also show how a figure of merit based on the mean squared error probes the number of new modes constrained by a data set, and use it to examine how informative various experiments will be in constraining the evolution of dark energy.
Principal Component Analysis of Thermographic Data
NASA Technical Reports Server (NTRS)
Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.
2015-01-01
Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intensive and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
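The fixed-eigenvector idea is easy to sketch: with an orthonormal basis precomputed from a model of the thermal response, processing a new data set reduces to a single matrix product instead of a fresh eigen-decomposition. Shapes and names here are illustrative assumptions.

```python
import numpy as np

def project_fixed(data, eigvecs):
    """Component images from a fixed orthonormal basis.

    data: (n_frames, n_pixels) thermal image sequence; eigvecs: (k, n_frames)
    orthonormal rows, precomputed e.g. from a model thermal response.
    """
    # One matrix product per data set; no per-data-set eigen-decomposition
    return eigvecs @ (data - data.mean(axis=0))   # (k, n_pixels)
```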
EP component identification and measurement by principal components analysis.
Chapman, R M; McCrary, J W
1995-04-01
Between the acquisition of Evoked Potential (EP) data and their interpretation lies a major problem: What to measure? An approach to this kind of problem is outlined here in terms of Principal Components Analysis (PCA). A second theme is that experimental manipulation is essential to functional interpretation. It would be desirable to have a system of EP measurement with the following characteristics: (1) represent the data in a concise, parsimonious way; (2) determine EP components from the data without assuming in advance any particular waveforms for the components; (3) extract components which are independent of each other; (4) measure the amounts (contributions) of various components in observed EPs; (5) use measures that have greater reliability than measures at any single time point or peak; and (6) identify and measure components that overlap in time. PCA has these desirable characteristics. Simulations are illustrated. PCA's beauty also has some warts that are discussed. In addition to discussing the usual two-mode model of PCA, an extension of PCA to a three-mode model is described that provides separate parameters for (1) waveforms over time, (2) coefficients for spatial distribution, and (3) scores telling the amount of each component in each EP. PCA is compared with more traditional approaches. Some biophysical considerations are briefly discussed. Choices to be made in applying PCA are considered. Other issues include misallocation of variance, overlapping components, validation, and latency changes. PMID:7626278
Spatially Weighted Principal Component Analysis for Imaging Classification
Guo, Ruixin; Ahn, Mihye; Zhu, Hongtu
2014-01-01
The aim of this paper is to develop a supervised dimension reduction framework, called Spatially Weighted Principal Component Analysis (SWPCA), for high dimensional imaging classification. Two main challenges in imaging classification are the high dimensionality of the feature space and the complex spatial structure of imaging data. In SWPCA, we introduce two sets of novel weights including global and local spatial weights, which enable a selective treatment of individual features and incorporation of the spatial structure of imaging data and class label information. We develop an efficient two-stage iterative SWPCA algorithm and its penalized version along with the associated weight determination. We use both simulation studies and real data analysis to evaluate the finite-sample performance of our SWPCA. The results show that SWPCA outperforms several competing principal component analysis (PCA) methods, such as supervised PCA (SPCA), and other competing methods, such as sparse discriminant analysis (SDA). PMID:26089629
GPR anomaly detection with robust principal component analysis
NASA Astrophysics Data System (ADS)
Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Kelly, Jack; Havens, Timothy C.
2015-05-01
This paper investigates the application of Robust Principal Component Analysis (RPCA) to ground penetrating radar as a means to improve GPR anomaly detection. The method consists of a preprocessing routine to smoothly align the ground and remove the ground response (haircut), followed by mapping to the frequency domain, applying RPCA, and then mapping the sparse component of the RPCA decomposition back to the time domain. A prescreener is then applied to the time-domain sparse component to perform anomaly detection. The emphasis of the RPCA algorithm on sparsity has the effect of significantly increasing the apparent signal-to-clutter ratio (SCR) as compared to the original data, thereby enabling improved anomaly detection. This method is compared to detrending (spatial-mean removal) and classical principal component analysis (PCA), and the RPCA-based processing is seen to provide substantial improvements in the apparent SCR over both of these alternative processing schemes. In particular, the algorithm has been applied to field-collected impulse GPR data and has shown significant improvement in terms of the ROC curve relative to detrending and PCA.
Principal Components Analysis Studies of Martian Clouds
NASA Astrophysics Data System (ADS)
Klassen, D. R.; Bell, J. F., III
2001-11-01
We present the principal components analysis (PCA) of absolutely calibrated multi-spectral images of Mars as a function of Martian season. The PCA technique is a mathematical rotation and translation of the data from a brightness/wavelength space to a vector space of principal "traits" that lie along the directions of maximal variance. The first of these traits, accounting for over 90% of the data variance, is overall brightness and is represented by an average Mars spectrum. Interpretation of the remaining traits, which account for the remaining ~10% of the variance, is not always the same and depends upon what other components are in the scene, and thus varies with Martian season. For example, during seasons with large amounts of water ice in the scene, the second trait correlates with the ice and anti-correlates with temperature. We will investigate the interpretation of the second and successive important PCA traits. Although these PCA traits are orthogonal in their own vector space, it is unlikely that any one trait represents a singular, mineralogic, spectral endmember. It is more likely that many spectral endmembers vary identically to within the noise level, so that the PCA technique is unable to distinguish them. Another possibility is that similar absorption features among spectral endmembers may be tied to one PCA trait, for example "amount of 2 μm absorption". We thus attempt to extract spectral endmembers by matching linear combinations of the PCA traits to USGS, JHU, and JPL spectral libraries as acquired through the JPL ASTER project. The recovered spectral endmembers are then linearly combined to model the multi-spectral image set. We present here the spectral abundance maps of the water ice/frost endmember, which allow us to track Martian clouds and ground frosts. This work was supported in part through NASA Planetary Astronomy Grant NAG5-6776. All data were gathered at the NASA Infrared Telescope Facility in collaboration with
Principal component analysis of scintimammographic images.
Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto
2006-01-01
The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component image analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest and from healthy and damaged breast tissues as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating Student's t statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Owing to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow more reliable diagnostics than the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004
Principal Components Analysis of Population Admixture
Ma, Jianzhong; Amos, Christopher I.
2012-01-01
With the availability of high-density genotype information, principal components analysis (PCA) is now routinely used to detect and quantify the genetic structure of populations in both population genetics and genetic epidemiology. An important issue is how to make appropriate and correct inferences about population relationships from the results of PCA, especially when admixed individuals are included in the analysis. We extend our recently developed theoretical formulation of PCA to allow for admixed populations. Because the sampled individuals are treated as features, our generalized formulation of PCA directly relates the pattern of the scatter plot of the top eigenvectors to the admixture proportions and parameters reflecting the population relationships, and thus can provide valuable guidance on how to properly interpret the results of PCA in practice. Using our formulation, we theoretically justify the diagnostic of two-way admixture. More importantly, our theoretical investigations based on the proposed formulation yield a diagnostic of multi-way admixture. For instance, we found that admixed individuals with three parental populations are distributed inside the triangle formed by their parental populations and divide the triangle into three smaller triangles whose areas have the same proportions in the big triangle as the corresponding admixture proportions. We tested and illustrated these findings using simulated data and data from HapMap III and the Human Genome Diversity Project. PMID:22808102
Point-process principal components analysis via geometric optimization.
Solo, Victor; Pasha, Syed Ahmed
2013-01-01
There has been a fast-growing demand for analysis tools for multivariate point-process data driven by work in neural coding and, more recently, high-frequency finance. Here we develop a true or exact (as opposed to one based on time binning) principal components analysis for preliminary processing of multivariate point processes. We provide a maximum likelihood estimator, an algorithm for maximization involving steepest ascent on two Stiefel manifolds, and novel constrained asymptotic analysis. The method is illustrated with a simulation and compared with a binning approach. PMID:23020106
Technology Transfer Automated Retrieval System (TEKTRAN)
Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
Principal components of CMB non-Gaussianity
NASA Astrophysics Data System (ADS)
Regan, Donough; Munshi, Dipak
2015-04-01
The skew-spectrum statistic introduced by Munshi & Heavens has recently been used in studies of non-Gaussianity from diverse cosmological data sets including the detection of primary and secondary non-Gaussianity of cosmic microwave background (CMB) radiation. Extending previous work, focused on independent estimation, here we deal with the question of joint estimation of multiple skew-spectra from the same or correlated data sets. We consider the optimum skew-spectra for various models of primordial non-Gaussianity as well as secondary bispectra that originate from the cross-correlation of secondaries and lensing of the CMB: coupling of lensing with the Integrated Sachs-Wolfe effect, coupling of lensing with thermal Sunyaev-Zeldovich, as well as from unresolved point sources. For joint estimation of various types of non-Gaussianity, we use principal component analysis (PCA) to construct the linear combinations of amplitudes of various models of non-Gaussianity, e.g. f_NL^loc, f_NL^eq, f_NL^ortho, that can be estimated from CMB maps. We describe how the bias induced in the estimation of primordial non-Gaussianity due to secondary non-Gaussianity may be evaluated for arbitrary primordial models using a PCA analysis. The PCA approach allows one to infer approximate (but generally accurate) constraints using CMB data sets on any reasonably smooth model by use of a look-up table and performing a simple computation. This principle is validated by computing constraints on the Dirac-Born-Infeld bispectrum using a PCA analysis of the standard templates.
Application of principal component analysis in phase-shifting photoelasticity.
Quiroga, Juan A; Gómez-Pedrero, José A
2016-03-21
Principal component analysis phase shifting (PCA) is a useful tool for fringe pattern demodulation in phase shifting interferometry. The PCA has no restrictions on background intensity or fringe modulation, and it is a self-calibrating phase sampling algorithm (PSA). Moreover, the technique is well suited for analyzing arbitrary sets of phase-shifted interferograms due to its low computational cost. In this work, we have adapted the standard phase shifting algorithm based on the PCA to the particular case of photoelastic fringe patterns. Compared with conventional PSAs used in photoelasticity, the PCA method does not need calibrated phase steps and, given that it can deal with an arbitrary number of images, it presents good noise rejection properties, even for complicated cases such as low order isochromatic photoelastic patterns. PMID:27136792
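The PCA phase-sampling idea can be sketched as follows: subtract the temporal mean from the phase-shifted frames, take the two leading principal components, and form the arctangent of their ratio. This is a generic sketch of PCA demodulation, not the authors' photoelasticity-specific adaptation; the recovered phase carries an undetermined sign and global offset.

```python
import numpy as np

def pca_demodulate(frames):
    """Wrapped phase from N phase-shifted fringe images, frames: (N, n_pixels)."""
    X = frames - frames.mean(axis=0, keepdims=True)   # remove the background
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    # The two leading spatial modes approximate the quadrature fringe pair
    c1, c2 = s[0] * Vt[0], s[1] * Vt[1]
    return np.arctan2(c2, c1)                          # phase up to sign/offset
```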
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions, showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA-dependent RNA polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
Level-1C Product from AIRS: Principal Component Filtering
NASA Technical Reports Server (NTRS)
Manning, Evan M.; Jiang, Yibo; Aumann, Hartmut H.; Elliott, Denis A.; Hannon, Scott
2012-01-01
The Atmospheric Infrared Sounder (AIRS), launched on the EOS Aqua spacecraft on May 4, 2002, is a grating spectrometer with 2378 channels in the range 3.7 to 15.4 microns. In a grating spectrometer each individual radiance measurement is largely independent of all others. Most measurements are extremely accurate and have very low noise levels. However, some channels exhibit high noise levels or other anomalous behavior, complicating applications needing radiances throughout a band, such as cross-calibration with other instruments and regression retrieval algorithms. The AIRS Level-1C product is similar to Level-1B but with instrument artifacts removed. This paper focuses on the "cleaning" portion of Level-1C, which identifies bad radiance values within spectra and produces substitute radiances using redundant information from other channels. The substitution is done in two passes, first with a simple combination of values from neighboring channels, then with principal components. After results of the substitution are shown, differences between principal component reconstructed values and observed radiances are used to investigate detailed noise characteristics and spatial misalignment in other channels.
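The cleaning step can be sketched as follows: fit the coefficients of a fixed set of principal components using only the good channels of a spectrum, then substitute the reconstruction at the flagged channels. This is an illustrative sketch of the idea, not the operational AIRS Level-1C algorithm; the function and argument names are assumptions.

```python
import numpy as np

def fill_bad_channels(spectrum, bad, mean, comps):
    """Replace flagged channels via a principal-component reconstruction.

    comps: (k, n_channels) PCs from clean training spectra; mean: (n_channels,).
    Coefficients are fit by least squares on the good channels only.
    """
    good = ~bad
    A = comps[:, good].T                               # (n_good, k) design matrix
    coeffs, *_ = np.linalg.lstsq(A, (spectrum - mean)[good], rcond=None)
    recon = mean + coeffs @ comps                      # full reconstructed spectrum
    out = spectrum.copy()
    out[bad] = recon[bad]                              # substitute only bad channels
    return out
```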
NASA Technical Reports Server (NTRS)
Lee, Jae K.; Mausel, Paul W.; Lulla, Kamlesh P.
1989-01-01
Both principal component analysis (PCA) and principal factor analysis (PFA) were used to analyze an experimental multispectral data structure in terms of common and unique variance. Only the common variance of the multispectral data was associated with the principal factor, while higher-order principal components were associated with both common and unique variance. The unique variance was found to represent small spectral variations within each cover type as well as noise vectors, and was most abundant in the lower-order principal components. The lower-order principal components can be useful in research designed to discriminate minor physical variations within features, and to highlight localized change when using multitemporal-multispectral data. Conversely, PFA of the multispectral data provided an insight into a great potential for discriminating basic land-cover types by excluding the unique variance which was related to the noise and minor spectral variations.
Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.
Wang, Yanhua; Ying, Leslie
2014-01-01
Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. The recent developments in non-linear or kernel-based sparse representations have been shown to outperform the linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses the kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and fixed-point iteration method. Simulation results show that the proposed method outperforms conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
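The KPCA representation at the heart of the framework can be sketched in its generic form: build an RBF Gram matrix, double-center it, and use scaled eigenvectors as the feature-space coordinates. The MRI-specific pre-image and reconstruction steps are omitted; `gamma` and `k` are illustrative parameters.

```python
import numpy as np

def kernel_pca(X, k=2, gamma=1.0):
    """Feature-space coordinates of X under RBF kernel PCA."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                   # RBF Gram matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                            # double centering
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:k]             # top-k eigenpairs
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0.0))    # projected coordinates
```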
Biological agent detection based on principal component analysis
NASA Astrophysics Data System (ADS)
Mudigonda, Naga R.; Kacelenga, Ray
2006-05-01
This paper presents an algorithm, based on principal component analysis for the detection of biological threats using General Dynamics Canada's 4WARN Sentry 3000 biodetection system. The proposed method employs a statistical method for estimating background biological activity so as to make the algorithm adaptive to varying background situations. The method attempts to characterize the pattern of change that occurs in the fluorescent particle counts distribution and uses the information to suppress false-alarms. The performance of the method was evaluated using a total of 68 tests including 51 releases of Bacillus Globigii (BG), six releases of BG in the presence of obscurants, six releases of obscurants only, and five releases of ovalbumin at the Ambient Breeze Tunnel Test facility, Battelle, OH. The peak one-minute average concentration of BG used in the tests ranged from 10 - 65 Agent Containing Particles per Liter of Air (ACPLA). The obscurants used in the tests included diesel smoke, white grenade smoke, and salt solution. The method successfully detected BG at a sensitivity of 10 ACPLA and resulted in an overall probability of detection of 94% for BG without generating any false-alarms for obscurants at a detection threshold of 0.6 on a scale of 0 to 1. Also, the method successfully detected BG in the presence of diesel smoke and salt water fumes. The system successfully responded to all the five ovalbumin releases with noticeable trends in algorithm output and alarmed for two releases at the selected detection threshold.
An Introductory Application of Principal Components to Cricket Data
ERIC Educational Resources Information Center
Manage, Ananda B. W.; Scariano, Stephen M.
2013-01-01
Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…
A principal component analysis of transmission spectra of wine distillates
NASA Astrophysics Data System (ADS)
Rogovaya, M. V.; Sinitsyn, G. V.; Khodasevich, M. A.
2014-11-01
The principal component method, a chemometric technique for projecting multidimensional data into a low-dimensional space, has been applied to the transmission spectra of vintage Moldovan wine distillates. A sample of 42 distillates aged from four to seven years, from six producers, has been used to show the possibility of identifying a producer in a two-dimensional space of principal components describing 94.5% of the data-matrix dispersion. Analysis of the loadings on the first two principal components has shown that, in order to measure the optical characteristics of the samples under study using only two wavelengths, it is necessary to select 380 and 540 nm, instead of the standard 420 and 520 nm, to describe the variability of the distillates by one principal component, or 370 and 520 nm to describe the variability by two principal components.
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial gestures; it resembles factor analysis in the sense that both extract the principal components of an image. Principal component analysis is subject to some drawbacks, mainly its poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. From the experimental results, it is envisaged that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
Principal Component Noise Filtering for NAST-I Radiometric Calibration
NASA Technical Reports Server (NTRS)
Tian, Jialin; Smith, William L., Sr.
2011-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and thereby further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: applying PC filtering to both dependent and independent datasets, applying PC filtering to dependent calibration data only, applying PC filtering to independent data only, and applying no PC filters. The independent blackbody radiances are predicted for each case and comparisons are made. The results show a significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
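The PC-filtering and eigenvector-count selection described in this abstract can be sketched as follows. This is a minimal NumPy illustration, not the NAST-I implementation; the function names, array shapes, and the use of a known reference radiance for the RMS criterion are assumptions.

```python
import numpy as np

def fit_pc_basis(train_spectra, k):
    """Fit a rank-k principal-component basis to a stack of spectra
    (rows = spectra, columns = spectral channels)."""
    mean = train_spectra.mean(axis=0)
    # Right singular vectors = eigenvectors of the channel covariance.
    _, _, vt = np.linalg.svd(train_spectra - mean, full_matrices=False)
    return mean, vt[:k]

def pc_noise_filter(spectra, mean, basis):
    """Project spectra onto the retained components and reconstruct;
    noise outside the low-dimensional signal subspace is suppressed."""
    centered = spectra - mean
    return centered @ basis.T @ basis + mean

def select_num_pcs(train, independent, reference, k_max):
    """Choose the number of eigenvectors that minimizes the total RMS
    error of the filtered independent spectra against a known
    reference (for blackbody views, the predicted radiance)."""
    rms = []
    for k in range(1, k_max + 1):
        mean, basis = fit_pc_basis(train, k)
        filtered = pc_noise_filter(independent, mean, basis)
        rms.append(np.sqrt(np.mean((filtered - reference) ** 2)))
    return int(np.argmin(rms)) + 1
```

Too few components lose real spectral structure; too many reintroduce noise, so the RMS error against the reference has a genuine minimum, mirroring the dependent/independent blackbody test in the paper.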
PRINCIPAL COMPONENTS ANALYSIS AND PARTIAL LEAST SQUARES REGRESSION
The mathematics behind the techniques of principal component analysis and partial least squares regression is presented in detail, starting from the appropriate extreme conditions. he meaning of the resultant vectors and many of their mathematical interrelationships are also pres...
Physical parameter effects on radar backscatter using principal component analysis
NASA Astrophysics Data System (ADS)
Chuah, Hean T.; Teh, K. B.
1994-12-01
This paper contains a sensitivity analysis of the effects of physical parameters on radar backscatter coefficients from a vegetation canopy using the method of principal component analysis. A Monte Carlo forward scattering model is used to generate the necessary data set for such analysis. The vegetation canopy is modeled as a layer of randomly distributed circular disks bounded below by a Kirchhoff rough surface. Data reduction is accomplished by the statistical principal component analysis technique in which only three principal components are found to be sufficient, containing 97% of the information in the original set. The first principal component can be interpreted as volume-volume backscatter, while the second and the third as surface backscatter and surface-volume backscatter, respectively. From the correlation matrix obtained, the sensitivity of radar backscatter due to various physical parameters is investigated. These include wave frequency, moisture content, scatterer's size, volume fraction, ground permittivity and surface roughness.
EXAFS and principal component analysis : a new shell game.
Wasserman, S.
1998-10-28
The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.
Optimized principal component analysis on coronagraphic images of the fomalhaut system
Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.; Amara, Adam
2014-01-01
We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4 to 10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
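The core PSF-subtraction step, removing linear combinations of principal components of a reference stack so that residual planet flux survives, can be sketched as below. This is a hypothetical minimal version with NumPy; real pipelines add masking, frame registration, and flux calibration, and the names are illustrative.

```python
import numpy as np

def pca_psf_subtract(science_frame, reference_stack, n_pc):
    """Build a stellar PSF model from the top n_pc principal
    components of a reference stack (frames flattened to 1-D pixel
    vectors) and subtract it from the science frame."""
    mean = reference_stack.mean(axis=0)
    _, _, vt = np.linalg.svd(reference_stack - mean, full_matrices=False)
    basis = vt[:n_pc]                         # n_pc x n_pixels
    centered = science_frame - mean
    psf_model = (centered @ basis.T) @ basis  # projection onto PSF modes
    return centered - psf_model               # planet flux "shines through"
```

Increasing `n_pc` models the stellar PSF better but, as the abstract notes, begins to raise the background noise, which is why the number of components is itself optimized.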
Principal Components Analysis of a JWST NIRSpec Detector Subsystem
NASA Technical Reports Server (NTRS)
Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos
2013-01-01
We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ~ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and the temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enable the planned instrument to meet performance requirements.
Natural movement generation using hidden Markov models and principal components.
Kwon, Junghyun; Park, Frank C
2008-10-01
Recent studies have shown that the perception of natural movements, in the sense of being "humanlike," depends on both joint and task space characteristics of the movement. This paper proposes a movement generation framework that merges two established techniques from gesture recognition and motion generation, hidden Markov models (HMMs) and principal components, into an efficient and reliable means of generating natural movements, one that uniformly considers joint and task space characteristics. Given human motion data that are classified into several movement categories, for each category the principal components extracted from the joint trajectories are used as basis elements. An HMM is, in turn, designed and trained for each movement class using the human task space motion data. Natural movements are generated as the optimal linear combination of principal components, which yields the highest probability for the trained HMM. Experimental case studies with a prototype humanoid robot demonstrate the various advantages of our proposed framework. PMID:18784005
Application of Principal Component Analysis to EUV multilayer defect printing
NASA Astrophysics Data System (ADS)
Xu, Dongbo; Evanschitzky, Peter; Erdmann, Andreas
2015-09-01
This paper proposes a new method for the characterization of multilayer defects on EUV masks. To reconstruct the defect geometry parameters from the intensity and phase of a defect, Principal Component Analysis (PCA) is employed to parametrize the intensity and phase distributions into principal component coefficients. In order to construct the basis functions of the PCA, a combination of a reference multilayer defect and appropriate pupil filters is introduced to obtain the designed sets of intensity and phase distributions. Finally, an Artificial Neural Network (ANN) is applied to correlate the principal component coefficients of the intensity and the phase of the defect with the defect geometry parameters and to reconstruct the unknown defect geometry parameters.
UNIPALS: SOFTWARE FOR PRINCIPAL COMPONENTS ANALYSIS AND PARTIAL LEAST SQUARES REGRESSION
Software for the analysis of multivariate chemical data by principal components and partial least squares methods is included on disk. he methods extract latent variables from the chemical data with the UNIversal PArtial Least Squares or UNIPALS algorithm. he software is written ...
Applications of Nonlinear Principal Components Analysis to Behavioral Data.
ERIC Educational Resources Information Center
Hicks, Marilyn Maginley
1981-01-01
An empirical investigation of the statistical procedure known as nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. The method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)
Xiao, Jinjun; Li, Min; Zhang, Haipeng
2015-01-01
This paper proposes a novel robust adaptive principal component analysis (RAPCA) method based on intergraph matrix for image registration in order to improve robustness and real-time performance. The contributions can be divided into three parts. Firstly, a novel RAPCA method is developed to capture the common structure patterns based on intergraph matrix of the objects. Secondly, the robust similarity measure is proposed based on adaptive principal component. Finally, the robust registration algorithm is derived based on the RAPCA. The experimental results show that the proposed method is very effective in capturing the common structure patterns for image registration on real-world images. PMID:25960739
Removing Milky Way from airglow images using principal component analysis
NASA Astrophysics Data System (ADS)
Li, Zhenhua; Liu, Alan; Sivjee, Gulamabas G.
2014-04-01
Airglow imaging is an effective way to obtain atmospheric gravity wave information in the airglow layers in the upper mesosphere and the lower thermosphere. Airglow images are often contaminated by the Milky Way emission. To extract gravity wave parameters correctly, the Milky Way must be removed. The paper demonstrates that principal component analysis (PCA) can effectively represent the dominant variation patterns of the intensity of airglow images that are associated with the slow moving Milky Way features. Subtracting this PCA reconstructed field reveals gravity waves that are otherwise overwhelmed by the strong spurious waves associated with the Milky Way. Numerical experiments show that nonstationary gravity waves with typical wave amplitudes and persistences are not affected by the PCA removal because the variances contributed by each wave event are much smaller than the ones in the principal components.
Principal component analysis: a review and recent developments.
Jolliffe, Ian T; Cadima, Jorge
2016-04-13
Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application. PMID:26953178
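The basic construction this review describes, new uncorrelated variables that successively maximize variance, obtained by solving an eigenvalue/eigenvector problem, can be written in a few lines. A minimal NumPy sketch (generic, not from the paper):

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the sample covariance matrix.
    Returns component directions (rows) and the variance each
    successive component explains, in decreasing order."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order].T, eigvals[order]
```

The scores `(X - X.mean(axis=0)) @ components.T` are the new uncorrelated variables; because the directions are defined by the dataset at hand rather than a priori, the method is adaptive in the sense the review describes.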
Applications Of Nonlinear Principal Components Analysis To Behavioral Data.
Hicks, M M
1981-07-01
A quadratic function was derived from variables believed to be nonlinearly related. The method was suggested by Gnanadesikan (1977) and based on an early paper of Karl Pearson (1901) (which gave rise to principal components), in which Pearson demonstrated that a plane of best fit to a system of points could be elicited from the elements of the eigenvector associated with the smallest eigenvalue of the covariance matrix. PMID:26815595
Principal Component Analysis of Arctic Solar Irradiance Spectra
NASA Technical Reports Server (NTRS)
Rabbette, Maura; Pilewskie, Peter; Gore, Warren J. (Technical Monitor)
2000-01-01
During the FIRE (First ISCCP Regional Experiment) Arctic Cloud Experiment and the coincident SHEBA (Surface Heat Budget of the Arctic Ocean) campaign, detailed moderate-resolution solar spectral measurements were made to study the radiative energy budget of the coupled Arctic Ocean - Atmosphere system. The NASA Ames Solar Spectral Flux Radiometers (SSFRs) were deployed on the NASA ER-2 and at the SHEBA ice camp. Using the SSFRs, we acquired continuous solar spectral irradiance (380-2200 nm) throughout the atmospheric column. Principal Component Analysis (PCA) was used to characterize the several tens of thousands of retrieved SSFR spectra and to determine the number of independent pieces of information that exist in the visible to near-infrared solar irradiance spectra. It was found in both the upwelling and downwelling cases that almost 100% of the spectral information (irradiance retrieved from 1820 wavelength channels) was contained in the first six extracted principal components. The majority of the variability in the Arctic downwelling solar irradiance spectra was explained by a few fundamental components, including infrared absorption, scattering, water vapor, and ozone. PCA of the SSFR upwelling Arctic irradiance spectra successfully separated surface ice and snow reflection from overlying cloud into distinct components.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
Principal Component Analysis for Enhancement of Infrared Spectra Monitoring
NASA Astrophysics Data System (ADS)
Haney, Ricky Lance
The issue of air quality within the aircraft cabin is receiving increasing attention from both pilot and flight attendant unions. This is due to exposure events caused by poor air quality which, in some cases, may have involved toxic oil components entering the cabin through bleed air that flows from outside the aircraft and then through the engines into the aircraft cabin. Significant short- and long-term medical issues for aircraft crew have been attributed to such exposure. The need for air quality monitoring is especially evident given that aircraft currently carry no sensors to monitor the air quality and potentially harmful gas levels (detect-to-warn sensors), much less systems to monitor and purify the air (detect-to-treat sensors) within the aircraft cabin. The specific purpose of this research is to utilize a mathematical technique called principal component analysis (PCA), in conjunction with principal component regression (PCR) and proportionality constant calculations (PCC), to reduce complex, multi-component infrared (IR) spectral data sets to a smaller data set used to determine the concentrations of the individual components. Use of PCA can significantly simplify data analysis as well as improve the ability to determine concentrations of individual target species in gas mixtures where significant band overlap occurs in the IR spectrum region. Application of this analytical numerical technique to IR spectrum analysis is important in improving the performance of commercial sensors that airlines and aircraft manufacturers could potentially use in an aircraft cabin environment for multi-gas component monitoring. The approach of this research is two-fold, consisting of a PCA application to compare simulation and experimental results with the corresponding PCR and PCC to determine quantitatively the component concentrations within a mixture. The experimental data sets consist of both two- and three-component systems that could potentially be present as air
APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES
Stoyanova, R. S.; Ochs, M. F.; Brown, T. R.; Rooney, W. D.; Li, X.; Lee, J. H.; Springer, C. S.
1999-05-22
Standard analysis methods for processing inversion recovery MR images traditionally have used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. The authors interpret the three images as spatial representations of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) content.
Self-aggregation in scaled principal component space
Ding, Chris H.Q.; He, Xiaofeng; Zha, Hongyuan; Simon, Horst D.
2001-10-05
Automatic grouping of voluminous data into meaningful structures is a challenging task frequently encountered in broad areas of science, engineering and information processing. These data clustering tasks are frequently performed in Euclidean space or a subspace chosen from principal component analysis (PCA). Here we describe a space obtained by a nonlinear scaling of PCA in which data objects self-aggregate automatically into clusters. Projection into this space gives sharp distinctions among clusters. Gene expression profiles of cancer tissue subtypes, Web hyperlink structure and Internet newsgroups are analyzed to illustrate interesting properties of the space.
Multivariate concentration determination using principal component regression with residual analysis
Keithley, Richard B.; Heien, Michael L.; Wightman, R. Mark
2009-01-01
Data analysis is an essential tenet of analytical chemistry, extending the possible information obtained from the measurement of chemical phenomena. Chemometric methods have grown considerably in recent years, but their wide use is hindered because some still consider them too complicated. The purpose of this review is to describe a multivariate chemometric method, principal component regression, in a simple manner from the point of view of an analytical chemist, to demonstrate the need for proper quality-control (QC) measures in multivariate analysis and to advocate the use of residuals as a proper QC method. PMID:20160977
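A minimal sketch of principal component regression with the residual-based quality-control check this review advocates. NumPy only; the function names and the Q-statistic convention are illustrative assumptions, not the review's notation.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the scores of
    the top-k principal components of X."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    _, _, vt = np.linalg.svd(X - x_mean, full_matrices=False)
    basis = vt[:k]
    scores = (X - x_mean) @ basis.T
    beta, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
    return x_mean, y_mean, basis, beta

def pcr_predict(X, x_mean, y_mean, basis, beta):
    return (X - x_mean) @ basis.T @ beta + y_mean

def residual_q(X, x_mean, basis):
    """Per-sample Q residual: squared distance from the retained PC
    subspace. Large values flag samples the calibration model cannot
    represent -- the residual-based QC check advocated above."""
    centered = X - x_mean
    recon = centered @ basis.T @ basis
    return np.sum((centered - recon) ** 2, axis=1)
```

A prediction from a sample with a large Q residual should be distrusted even if the predicted concentration looks plausible, which is exactly why residual monitoring serves as quality control.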
Principal Component Analysis of Terrestrial and Venusian Topography
NASA Astrophysics Data System (ADS)
Stoddard, P. R.; Jurdy, D. M.
2015-12-01
We use Principal Component Analysis (PCA) as an objective tool for analyzing, comparing, and contrasting topographic profiles of different or similar features from different locations and planets. To do so, we take average profiles of a set of features and form a cross-correlation matrix, which is then diagonalized to determine its principal components. These components, not merely numbers, represent actual profile shapes that give a quantitative basis for comparing different sets of features. For example, PCA for terrestrial hotspots shows the main component as a generic dome shape. Secondary components show a more sinusoidal shape, related to the lithospheric loading response, and thus give information about the nature of the lithospheric setting of the various hotspots. We examine a range of terrestrial spreading centers: fast, slow, ultra-slow, incipient, and extinct, and compare these to several chasmata on Venus (including Devana, Ganis, Juno, Parga, and Kuanja). For upwelling regions, we consider the oceanic Hawaii, Reunion, and Iceland hotspots and Yellowstone, a prototypical continental hotspot. Venus has approximately one dozen broad topographic and geoid highs called regiones. Our analysis includes Atla, Beta, and W. Eistla regiones. Atla and Beta are widely thought to be the most likely to be currently or recently active. Analysis of terrestrial rifts shows increasing uniformity of shape among rifts with increasing spreading rates. Venus' chasmata rank considerably lower in uniformity than the terrestrial spreading centers. Extrapolating the correlation/spreading-rate relationship suggests that Venus' chasmata, if analogous to terrestrial spreading centers, most resemble the ultra-slow spreading level (less than 12 mm/yr) of the Arctic Gakkel ridge. PCA will provide an objective measurement of this correlation.
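The profile-stacking step described above, forming a cross-correlation matrix of averaged profiles and diagonalizing it so that the leading components are themselves profile shapes, can be sketched as follows (a generic NumPy illustration, not the authors' code):

```python
import numpy as np

def profile_components(profiles):
    """Diagonalize the profile-by-profile cross-correlation matrix of
    a stack of averaged topographic profiles (rows). The eigenvectors
    weight the input profiles into component shapes (e.g. a generic
    dome for hotspots), ordered by the variance they explain."""
    corr = np.corrcoef(profiles)               # profile-by-profile matrix
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    shapes = eigvecs[:, order].T @ profiles    # component profile shapes
    return shapes, eigvals[order]
```

Because the components are profile shapes rather than bare numbers, two feature sets can be compared quantitatively by how much variance their shared leading shape explains.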
Temporal variations in ozone concentrations derived from Principal Component Analysis
NASA Astrophysics Data System (ADS)
Yonemura, S.; Kawashima, S.; Matsueda, H.; Sawa, Y.; Inoue, S.; Tanimoto, H.
2008-03-01
The application of principal components and cluster analysis to vertical ozone concentration profiles in Tsukuba, Japan, has been explored. Average monthly profiles and profiles of the ratio between the standard deviation and the absolute ozone concentration (SDPR) of 1 km data were calculated from the original ozone concentration data. Mean (first) and gradient (second) components explained more than 80% of the variation in both the 0-6 km tropospheric and 11-20 km troposphere-stratosphere (interspheric) layers. The principal components analysis not only reproduced the expected inverse relationship between mean ozone concentration and tropopause height (r² = 0.41), which in the tropospheric layer is stronger in spring and summer, but also yielded new information, as follows. The larger gradient component score in summer for the interspheric layer points to the seasonal variation of troposphere-stratosphere exchange. The minimum SDPR was at about 3 km in the tropospheric layer and the maximum was at about 17 km in the interspheric layer. The tropospheric SDPR mean component score was larger in summer, possibly reflecting the mixing of Pacific maritime air masses with urban air masses. The cluster analysis of the monthly ozone profiles for the 1970s and 2000s revealed different patterns for winter and summer. The month of May was part of the winter pattern in the 1970s but part of the summer pattern during the 2000s. This statistically detected change likely reflects the influence of global warming. Thus, these two statistical analysis techniques can be powerful tools for identifying features of ozone concentration profiles.
Water reuse systems: A review of the principal components
Lucchetti, G.; Gray, G.A.
1988-01-01
Principal components of water reuse systems include ammonia removal, disease control, temperature control, aeration, and particulate filtration. Effective ammonia removal techniques include air stripping, ion exchange, and biofiltration. Selection of a particular technique largely depends on site-specific requirements (e.g., space, existing water quality, and fish densities). Disease control, although often overlooked, is a major problem in reuse systems. Pathogens can be controlled most effectively with ultraviolet radiation, ozone, or chlorine. Simple and inexpensive methods are available to increase oxygen concentration and eliminate gas supersaturation; these include commercial aerators, air injectors, and packed columns. Temperature control is a major advantage of reuse systems, but the equipment required can be expensive, particularly if water temperature must be rigidly controlled and ambient air temperature fluctuates. Filtration can be readily accomplished with a hydrocyclone or sand filter that increases overall system efficiency. Based on the criteria of adaptability, efficiency, and reasonable cost, we recommend components for a small water reuse system.
Ice-cloud particle habit classification using principal components
NASA Astrophysics Data System (ADS)
Lindqvist, H.; Muinonen, K.; Nousiainen, T.; Um, J.; McFarquhar, G. M.; Haapanala, P.; Makkonen, R.; Hakkarainen, H.
2012-08-01
A novel automatic classification method is proposed for identifying the habits of large ice-cloud particles and deriving the shape distribution of particle ensembles. This IC-PCA (Ice-crystal Classification with Principal Component Analysis) tool is based on a principal component analysis of selected physical and statistical features of ice-crystal perimeters. The method is developed and tested using image data obtained with a Cloud Particle Imager, but can be applied to other silhouette data as well. For three randomly selected test cases of 222, 200, and 201 crystals from tropical, midlatitude, and arctic ice clouds, the combined classification accuracy of the IC-PCA is 81.1%. Since previous, semiautomatic classification methods are more time-consuming and include a subjective phase, the automatic and objective IC-PCA offers a notable improvement in retrieving the shapes of the individual crystals. As the habit distributions of ice-cloud particles can be applied to computations of the radiative impact of cirrus, it is also demonstrated how classification uncertainties propagate into the radiative transfer computations by using the arctic test case as an example. Computations of shortwave radiative fluxes show that the flux differences between clouds of manually and automatically classified crystals can be as large as 10 W m⁻² but also that two manual classifications of the same image data result in even larger differences, implying the need for a systematic and repeatable classification method.
Principal components granulometric analysis of tidally dominated depositional environments
Mitchell, S. W.; Long, W. T.; Friedrich, N. E.
1991-02-01
Sediments often are investigated by using mechanical sieve analysis (at 1/4 φ or 1/2 φ intervals) to identify differences in weight-percent distributions between related samples and, thereby, to deduce variations in sediment sources and depositional processes. Similar granulometric data from groups of surface samples from two siliciclastic estuaries and one carbonate tidal creek have been clustered using principal components analysis. Subtle geographic trends in tidally dominated depositional processes and in sediment sources can be inferred from the clusters. In Barnstable Harbor, Cape Cod, Massachusetts, the estuary can be subdivided into five major subenvironments, with tidal current intensities/directions and sediment sources (longshore transport or sediments weathering from the Sandwich Moraine) as controls. In Morro Bay, San Luis Obispo County, California, all major environments (beach, dune, bay, delta, and fluvial) can be easily distinguished, and a wide variety of subenvironments can be recognized. On Pigeon Creek, San Salvador Island, Bahamas, twelve subenvironments can be recognized. Biogenic (Halimeda, Peneroplis, mixed skeletal), chemogenic (peloids, aggregates), and detrital (lithoclasts of eroding Pleistocene limestone) grain types dominate. When combined with tidal current intensities/directions, grain sources produce subenvironments distributed parallel to tidal channels. The investigation of the three modern environments indicates that principal components granulometric analysis is potentially a useful tool for recognizing subtle changes in transport processes and sediment sources preserved in ancient depositional sequences.
CMB constraints on principal components of the inflaton potential
Dvorkin, Cora; Hu, Wayne
2010-08-15
We place functional constraints on the shape of the inflaton potential from the cosmic microwave background through a variant of the generalized slow-roll approximation that allows large amplitude, rapidly changing deviations from scale-free conditions. Employing a principal component decomposition of the source function G' ≈ 3(V'/V)^2 − 2V''/V and keeping only those measured to better than 10% results in 5 nearly independent Gaussian constraints that may be used to test any single-field inflationary model where such deviations are expected. The first component implies <3% variations at the 100 Mpc scale. One component shows a 95% CL preference for deviations around the 300 Mpc scale at the ~10% level, but the global significance is reduced considering the 5 components examined. This deviation also requires a change in the cold dark matter density, which in a flat ΛCDM model is disfavored by current supernova and Hubble constant data and can be tested with future polarization or high multipole temperature data. Its impact resembles a local running of the tilt from multipoles 30-800 but is only marginally consistent with a constant running beyond this range. For this analysis, we have implemented a ~40× faster WMAP7 likelihood method which we have made publicly available.
Principal Semantic Components of Language and the Measurement of Meaning
Samsonovic, Alexei V.; Ascoli, Giorgio A.
2010-01-01
Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that represents simultaneously synonymy and antonymy. Emergent semantics of the map principal components are clearly identifiable: the first three correspond to the meanings of “good/bad” (valence), “calm/excited” (arousal), and “open/closed” (freedom), respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map and to predict connotation from their coordinates. The map geometric characteristics include a limited number (∼4) of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal) components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet), among Western languages (English, French, German, and Spanish), and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step toward a
Huang, Chuan; Graff, Christian G.; Clarkson, Eric W.; Bilgin, Ali; Altbach, Maria I.
2011-01-01
Recently, there has been an increased interest in quantitative MR parameters to improve diagnosis and treatment. Parameter mapping requires multiple images acquired with different timings usually resulting in long acquisition times. While acquisition time can be reduced by acquiring undersampled data, obtaining accurate estimates of parameters from undersampled data is a challenging problem, in particular for structures with high spatial frequency content. In this work, Principal Component Analysis (PCA) is combined with a model-based algorithm to reconstruct maps of selected principal component coefficients from highly undersampled radial MRI data. This novel approach linearizes the cost function of the optimization problem yielding a more accurate and reliable estimation of MR parameter maps. The proposed algorithm - REconstruction of Principal COmponent coefficient Maps (REPCOM) using Compressed Sensing - is demonstrated in phantoms and in vivo and compared to two other algorithms previously developed for undersampled data. PMID:22190358
Principal Components Analysis of Triaxial Vibration Data From Helicopter Transmissions
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Huff, Edward M.
2001-01-01
Research on the nature of the vibration data collected from helicopter transmissions during flight experiments has led to several crucial observations believed to be responsible for the high rates of false alarms and missed detections in aircraft vibration monitoring systems. This work focuses on one such finding, namely, the need to consider additional sources of information about system vibrations. In this light, helicopter transmission vibration data, collected using triaxial accelerometers, were explored in three different directions, analyzed for content, and then combined using Principal Components Analysis (PCA) to analyze changes in directionality. In this paper, the PCA transformation is applied to 176 test conditions/data sets collected from an OH58C helicopter to derive the overall experiment-wide covariance matrix and its principal eigenvectors. The experiment-wide eigenvectors are then projected onto the individual test conditions to evaluate changes and similarities in their directionality based on the various experimental factors. The paper will present the foundations of the proposed approach, addressing the question of whether experiment-wide eigenvectors accurately model the vibration modes in individual test conditions. The results will further determine the value of using directionality and triaxial accelerometers for vibration monitoring and anomaly detection.
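The pooled-covariance construction above — derive experiment-wide eigenvectors, then project each test condition onto them — can be sketched as follows. The triaxial data are synthetic, not the OH58C measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic triaxial vibration records: 4 test conditions, 1000 samples × 3 axes each
conditions = [rng.normal(0, [3.0, 1.0, 0.5], size=(1000, 3)) for _ in range(4)]

# Experiment-wide covariance matrix pooled over all conditions
pooled = np.vstack(conditions)
cov = np.cov(pooled - pooled.mean(axis=0), rowvar=False)
vals, vecs = np.linalg.eigh(cov)
global_axes = vecs[:, np.argsort(vals)[::-1]]        # experiment-wide eigenvectors

# Project each condition onto the global axes to compare directionality
fracs = []
for c in conditions:
    proj = (c - c.mean(axis=0)) @ global_axes
    fracs.append(proj[:, 0].var() / proj.var(axis=0).sum())
print([f"{f:.2f}" for f in fracs])   # fraction of variance along global PC1, per condition
```

Comparing these per-condition variance fractions is one simple way to ask whether the experiment-wide eigenvectors model each individual test condition well.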
Inverting geodetic time series with a principal component analysis-based inversion method
NASA Astrophysics Data System (ADS)
Kositsky, A. P.; Avouac, J.-P.
2010-03-01
The Global Positioning System (GPS) now makes it possible to monitor deformation of the Earth's surface along plate boundaries with unprecedented accuracy. In theory, the spatiotemporal evolution of slip on the plate boundary at depth, associated with either seismic or aseismic slip, can be inferred from these measurements through some inversion procedure based on the theory of dislocations in an elastic half-space. We describe and test a principal component analysis-based inversion method (PCAIM), an inversion strategy that relies on principal component analysis of the surface displacement time series. We prove that the fault slip history can be recovered from the inversion of each principal component. Because PCAIM does not require externally imposed temporal filtering, it can deal with any kind of time variation of fault slip. We test the approach by applying the technique to synthetic geodetic time series to show that a complicated slip history combining coseismic, postseismic, and nonstationary interseismic slip can be retrieved from this approach. PCAIM produces slip models comparable to those obtained from standard inversion techniques with less computational complexity. We also compare an afterslip model derived from the PCAIM inversion of postseismic displacements following the 2005 Mw 8.6 Nias earthquake with another solution obtained from the extended network inversion filter (ENIF). We introduce several extensions of the algorithm to allow statistically rigorous integration of multiple data sources (e.g., both GPS and interferometric synthetic aperture radar time series) over multiple timescales. PCAIM can be generalized to any linear inversion algorithm.
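The decompose-then-invert idea can be sketched in a few lines: SVD the station × time displacement matrix, invert each spatial component for slip, and recombine with the temporal functions. The Green's function matrix and slip history below are synthetic placeholders, not the PCAIM code.

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(6, 3))            # placeholder Green's functions (6 stations × 3 fault patches)
t = np.linspace(0.0, 1.0, 50)
slip_t = np.vstack([t, t**2, 0.3 * np.sin(3 * t)])   # hypothetical slip history (patches × time)
D = G @ slip_t                          # noise-free surface displacement time series

# PCA of the data: each spatial component (left singular vector) is inverted on its own
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))        # effective number of components
slip_components = [np.linalg.lstsq(G, U[:, i], rcond=None)[0] for i in range(k)]

# Recombine component-wise slip with the temporal functions s_i * V_i(t)
slip_est = sum(np.outer(m, s[i] * Vt[i]) for i, m in enumerate(slip_components))
print(np.allclose(slip_est, slip_t, atol=1e-6))   # full slip history recovered → True
```

In the noise-free, full-rank case the component-wise inversions recombine to the exact slip history, which is the sense in which slip can be "recovered from the inversion of each principal component."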
Principal component analysis based methodology to distinguish protein SERS spectra
NASA Astrophysics Data System (ADS)
Das, G.; Gentile, F.; Coluccio, M. L.; Perri, A. M.; Nicastri, A.; Mecarini, F.; Cojoc, G.; Candeloro, P.; Liberale, C.; De Angelis, F.; Di Fabrizio, E.
2011-05-01
Surface-enhanced Raman scattering (SERS) substrates were fabricated using electroplating and e-beam lithography techniques. Nanostructures were obtained comprising regular arrays of gold nanoaggregates with a diameter of 80 nm and a mutual distance between the aggregates (gap) ranging from 10 to 30 nm. The nanopatterned SERS substrate enabled better control and reproducibility in the generation of plasmon polaritons (PPs). SERS measurements were performed for various proteins, namely bovine serum albumin (BSA), myoglobin, ferritin, lysozyme, RNase-B, α-casein, α-lactalbumin and trypsin. Principal component analysis (PCA) was used to organize and classify the proteins on the basis of their secondary structure. Cluster analysis showed that the classification error was about 14%. It was clearly shown that the combined use of SERS measurements and PCA is effective in categorizing the proteins on the basis of secondary structure.
Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.
Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan
2016-02-01
This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approach in the literature, which uses only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market. PMID:25095276
Method of Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu
2005-01-01
Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
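The abstract does not give the DOGEDYN update rule itself, but the gradient-descent-based sequential PCA it improves upon is typified by Oja's classic Hebbian rule, sketched here on synthetic data. The learning-rate schedule and data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(0, [5.0, 1.0, 0.5], size=(4000, 3))   # dominant variance along axis 0

w = rng.normal(size=3)
w /= np.linalg.norm(w)
for n, x in enumerate(X):
    lr = 1.0 / (100 + n)              # decaying learning rate (assumed schedule)
    y = w @ x                         # neuron output
    w += lr * y * (x - y * w)         # Oja's Hebbian update with implicit normalization
    w /= np.linalg.norm(w)            # explicit renormalization for numerical safety

# The learned weight vector converges toward the first principal eigenvector (±e0)
print(abs(w[0]))   # close to 1
```

Sequential methods like this learn one component at a time from streaming samples, which is what makes compact, low-power hardware implementations attractive.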
Functional principal components analysis of workload capacity functions.
Burns, Devin M; Houpt, Joseph W; Townsend, James T; Endres, Michael J
2013-12-01
Workload capacity, an important concept in many areas of psychology, describes processing efficiency across changes in workload. The capacity coefficient is a function across time that provides a useful measure of this construct. Until now, most analyses of the capacity coefficient have focused on the magnitude of this function, and often only in terms of a qualitative comparison (greater than or less than one). This work explains how a functional extension of principal components analysis can capture the time-extended information of these functional data, using a small number of scalar values chosen to emphasize the variance between participants and conditions. This approach provides many possibilities for a more fine-grained study of differences in workload capacity across tasks and individuals. PMID:23475829
Iris recognition based on robust principal component analysis
NASA Astrophysics Data System (ADS)
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
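The low-rank plus sparse decomposition at the heart of robust PCA can be sketched with a simple ADMM-style principal component pursuit on a toy matrix. This is a generic solver with common default parameters (lam, mu), not the paper's implementation.

```python
import numpy as np

def rpca(M, iters=500):
    """Decompose M ≈ L (low-rank) + S (sparse) via a simple ADMM scheme."""
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))          # common default sparsity weight
    mu = n1 * n2 / (4.0 * np.abs(M).sum())    # common default penalty parameter
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    shrink = lambda A, t: np.sign(A) * np.maximum(np.abs(A) - t, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt    # singular value thresholding
        S = shrink(M - L + Y / mu, lam / mu)  # entrywise soft thresholding
        Y += mu * (M - L - S)                 # dual variable update
    return L, S

rng = np.random.default_rng(4)
low_rank = np.outer(rng.normal(size=20), rng.normal(size=20))       # rank-1 ground truth
spikes = np.zeros((20, 20))
spikes[rng.integers(0, 20, 10), rng.integers(0, 20, 10)] = 5.0      # sparse "occlusions"
L, S = rpca(low_rank + spikes)
print(np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank))      # small relative error
```

In the iris setting, the rows of M would be vectorized training images, L the shared low-rank appearance used for features, and S the occlusions and reflections.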
Obtaining a linear combination of the principal components of a matrix on quantum computers
NASA Astrophysics Data System (ADS)
Daskin, Ammar
2016-07-01
Principal component analysis is a multivariate statistical method frequently used in science and engineering to reduce the dimension of a problem or extract the most significant features from a dataset. In this paper, using a notion similar to quantum counting, we show how to apply amplitude amplification together with the phase estimation algorithm to an operator in order to procure the eigenvectors of the operator associated with the eigenvalues defined in the range [a, b], where a and b are real and 0 ≤ a ≤ b ≤ 1. This makes it possible to obtain a combination of the eigenvectors associated with the largest eigenvalues and so can be used to perform principal component analysis on quantum computers.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application in optical imaging, palmprint recognition is interfered by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. The dimension reduction and normalizing are implemented by the blockwise bi-directional two-dimensional principal component analysis for palmprint images to extract feature matrixes, which are assembled into an overcomplete dictionary in sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is gained by comparing the residual between testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images, and can get higher rate of palmprint recognition. PMID:27386281
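The bi-directional, blockwise variant is not reproduced here, but its basic building block — 2DPCA, where an image scatter matrix is computed from the image matrices directly, without vectorization — can be sketched as follows (synthetic stand-in images, not palmprints):

```python
import numpy as np

rng = np.random.default_rng(5)
images = rng.normal(size=(30, 16, 12))          # 30 stand-in "palmprint" images, 16×12
mean_img = images.mean(axis=0)

# 2DPCA: image scatter matrix built from the images directly, no vectorization
G = sum((A - mean_img).T @ (A - mean_img) for A in images) / len(images)   # 12×12
vals, vecs = np.linalg.eigh(G)
X = vecs[:, np.argsort(vals)[::-1][:4]]         # top-4 projection axes (12×4)

# Each image is reduced to a compact feature matrix Y = A X
features = images @ X
print(features.shape)                           # (30, 16, 4)
```

The bi-directional version applies the same construction along rows as well as columns, and the resulting feature matrices are what get assembled into the overcomplete dictionary for sparse classification.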
Zhang, Li-qing; Wu, Xiao-hua; Tang, Xi; Zhu, Xian-liang; Su, Wen-ting
2002-06-01
Principal component regression (PCR) method is used to analyse five components: acetaminophen, p-aminophenol, caffeine, chlorphenamine maleate and guaifenesin. The basic principle and the analytical step of the approach are described in detail. The computer program of LHG is based on VB language. The experimental result shows that the PCR method has no systematical error as compared to classical method. The experimental result shows that the average recovery of each component is all in the range from 96.43% to 107.14%. Each component obtains satisfactory result without any pre-separation. The approach is simple, rapid and suitable for the computer-aid analysis. PMID:12938324
Revisiting AVHRR tropospheric aerosol trends using principal component analysis
NASA Astrophysics Data System (ADS)
Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.
2014-03-01
The advanced very high resolution radiometer (AVHRR) satellite instruments provide a nearly 25 year continuous record of global aerosol properties over the ocean. It offers valuable insights into the long-term change in global aerosol loading. However, the AVHRR data record is heavily influenced by two volcanic eruptions, El Chichon in March 1982 and Mount Pinatubo in June 1991. The gradual decay of volcanic aerosols may last years after the eruption, which potentially masks the estimation of aerosol trends in the lower troposphere, especially those of anthropogenic origin. In this study, we show that a principal component analysis approach effectively captures the bulk of the spatial and temporal variability of volcanic aerosols into a single mode. The spatial pattern and time series of this mode provide a good match to the global distribution and decay of volcanic aerosols. We further reconstruct the data set by removing the volcanic aerosol component and reestimate the global and regional aerosol trends. Globally, the reconstructed data set reveals an increase of aerosol optical depth from 1985 to 1990 and a decreasing trend from 1994 to 2006. Regionally, in the 1980s, positive trends are observed over the North Atlantic and North Arabian Sea, while negative tendencies are present off the West African coast and North Pacific. During the 1994 to 2006 period, the Gulf of Mexico, North Atlantic close to Europe, and North Africa exhibit negative trends, while the coastal regions of East and South Asia, the Sahel region, and South America show positive trends.
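The reconstruction step — capture the decaying volcanic signal in the leading principal component, zero it out, and rebuild the field — can be sketched generically on synthetic time × space data (not AVHRR):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(300)
decay = np.exp(-t / 40.0)                         # stand-in volcanic decay time series
pattern = rng.normal(size=25)                     # its spatial footprint (25 grid boxes)
background = 0.1 * rng.normal(size=(300, 25))     # the signal we want to keep
data = np.outer(decay, pattern) * 3 + background  # time × space anomaly field

# PCA via SVD of the centered anomaly field
mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)

# Zero out the leading mode (the "volcanic" one) and reconstruct the field
s_clean = s.copy()
s_clean[0] = 0.0
reconstructed = (U * s_clean) @ Vt + mean
print(reconstructed.std() < data.std())           # volcanic variance removed → True
```

Trends estimated from the reconstructed field then reflect the residual (e.g., anthropogenic) variability rather than the volcanic decay, which is the logic of the reestimated trends quoted above.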
Demixed principal component analysis of neural population data
Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K
2016-01-01
Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001 PMID:27067378
Derivation of Boundary Manikins: A Principal Component Analysis
NASA Technical Reports Server (NTRS)
Young, Karen; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar
2008-01-01
When designing any human-system interface, it is critical to provide realistic anthropometry to properly represent how a person fits within a given space. This study aimed to identify a minimum number of boundary manikins, or representative models of subjects' anthropometry from a target population, which would realistically represent the population. The boundary manikin anthropometry was derived using Principal Component Analysis (PCA). PCA is a statistical approach to reduce a multi-dimensional dataset using eigenvectors and eigenvalues. The measurements used in the PCA were identified as those critical for suit and cockpit design. The PCA yielded a total of 26 manikins per gender, as well as their anthropometry from the target population. Reduction techniques were implemented to reduce this number further, with a final result of 20 female and 22 male subjects. The anthropometry of the boundary manikins was then used to create 3D digital models (to be discussed in subsequent papers) intended for use by designers to test components of their space suit design, to verify that the requirements specified in the Human Systems Integration Requirements (HSIR) document are met. The end goal is to allow designers to generate suits which accommodate the diverse anthropometry of the user population.
Revisiting AVHRR Tropospheric Aerosol Trends Using Principal Component Analysis
NASA Technical Reports Server (NTRS)
Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.
2014-01-01
The advanced very high resolution radiometer (AVHRR) satellite instruments provide a nearly 25 year continuous record of global aerosol properties over the ocean. It offers valuable insights into the long-term change in global aerosol loading. However, the AVHRR data record is heavily influenced by two volcanic eruptions, El Chichon in March 1982 and Mount Pinatubo in June 1991. The gradual decay of volcanic aerosols may last years after the eruption, which potentially masks the estimation of aerosol trends in the lower troposphere, especially those of anthropogenic origin. In this study, we show that a principal component analysis approach effectively captures the bulk of the spatial and temporal variability of volcanic aerosols into a single mode. The spatial pattern and time series of this mode provide a good match to the global distribution and decay of volcanic aerosols. We further reconstruct the data set by removing the volcanic aerosol component and reestimate the global and regional aerosol trends. Globally, the reconstructed data set reveals an increase of aerosol optical depth from 1985 to 1990 and a decreasing trend from 1994 to 2006. Regionally, in the 1980s, positive trends are observed over the North Atlantic and North Arabian Sea, while negative tendencies are present off the West African coast and North Pacific. During the 1994 to 2006 period, the Gulf of Mexico, North Atlantic close to Europe, and North Africa exhibit negative trends, while the coastal regions of East and South Asia, the Sahel region, and South America show positive trends.
Principal Component Analysis for pattern recognition in volcano seismic spectra
NASA Astrophysics Data System (ADS)
Unglert, Katharina; Jellinek, A. Mark
2016-04-01
Variations in the spectral content of volcano seismicity can relate to changes in volcanic activity. Low-frequency seismic signals often precede or accompany volcanic eruptions. However, they are commonly manually identified in spectra or spectrograms, and their definition in spectral space differs from one volcanic setting to the next. Increasingly long time series of monitoring data at volcano observatories require automated tools to facilitate rapid processing and aid with pattern identification related to impending eruptions. Furthermore, knowledge transfer between volcanic settings is difficult if the methods to identify and analyze the characteristics of seismic signals differ. To address these challenges we have developed a pattern recognition technique based on a combination of Principal Component Analysis and hierarchical clustering applied to volcano seismic spectra. This technique can be used to characterize the dominant spectral components of volcano seismicity without the need for any a priori knowledge of different signal classes. Preliminary results from applying our method to volcanic tremor from a range of volcanoes including Kīlauea, Okmok, Pavlof, and Redoubt suggest that spectral patterns from Kīlauea and Okmok are similar, whereas at Pavlof and Redoubt spectra have their own, distinct patterns.
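A minimal sketch of the PCA-plus-hierarchical-clustering pipeline on synthetic spectra (two invented spectral families, not the authors' tremor data): spectra are projected onto a few principal components and then clustered with no a priori signal classes.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
freqs = np.linspace(0, 10, 100)
lowf = np.exp(-(freqs - 1.5) ** 2) + 0.05 * rng.normal(size=(20, 100))   # low-frequency family
highf = np.exp(-(freqs - 6.0) ** 2) + 0.05 * rng.normal(size=(20, 100))  # high-frequency family
spectra = np.vstack([lowf, highf])

# PCA: project spectra onto the top-2 components before clustering
anom = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
scores = anom @ Vt[:2].T

# Hierarchical (Ward) clustering in PC space, cut into two clusters
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(len(set(labels[:20])), len(set(labels[20:])))   # each family forms one cluster
```

Working in PC space rather than raw spectral space is what makes the clusters, and hence the dominant spectral components, comparable across volcanic settings.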
The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores
ERIC Educational Resources Information Center
Velicer, Wayne F.
1976-01-01
Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)
Jirayucharoensak, Suwicha; Pan-Ngum, Setha; Israsena, Pasin
2014-01-01
Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using a hierarchical feature learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal-component-based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance compared to SVM and naive Bayes classifiers. PMID:25258728
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Duong, Vu A.
2009-01-01
This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction / image compression, based on a "dominant-term selection" unsupervised learning technique that requires an order of magnitude less computation and has a simpler architecture compared to state-of-the-art gradient-descent techniques. This algorithm is inherently amenable to a compact, low-power and high-speed VLSI hardware embodiment. The paper compares the lossless image compression performance of JPL's SPCA algorithm with the state-of-the-art JPEG2000, widely used due to its simplified hardware implementability. JPEG2000 is not an optimal data compression technique because of its fixed transform characteristics, regardless of its data structure. On the other hand, the conventional Principal Component Analysis based transform (PCA-transform) is a data-dependent-structure transform. However, it is not easy to implement PCA in compact VLSI hardware, due to its high computational and architectural complexity. In contrast, JPL's "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. This paper presents a direct comparison of JPL's SPCA versus JPEG2000, incorporating the Huffman and arithmetic coding for completeness of the data compression operation. The simulation results show that JPL's SPCA algorithm is superior as an optimal data-dependent-transform over the state-of-the-art JPEG2000. When implemented in hardware, this technique is projected to be ideally suited to future NASA missions for autonomous on-board image data processing to improve the bandwidth of communication.
Software Management Environment (SME): Components and algorithms
NASA Technical Reports Server (NTRS)
Hendrick, Robert; Kistler, David; Valett, Jon
1994-01-01
This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented experienced-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'
Principal components analysis of Mars in the near-infrared
NASA Astrophysics Data System (ADS)
Klassen, David R.
2009-11-01
Principal components analysis and target transformation are applied to near-infrared image cubes of Mars in a study to disentangle the spectra into a small number of spectral endmembers and characterize the spectral information. The image cubes are ground-based telescopic data from the NASA Infrared Telescope Facility during the 1995 and 1999 near-aphelion oppositions when ice clouds were plentiful [Clancy, R.T., Grossman, A.W., Wolff, M.J., James, P.B., Rudy, D.J., Billawala, Y.N., Sandor, B.J., Lee, S.W., Muhleman, D.O., 1996. Icarus 122, 36-62; Wolff, M.J., Clancy, R.T., Whitney, B.A., Christensen, P.R., Pearl, J.C., 1999b. In: The Fifth International Conference on Mars, July 19-24, 1999, Pasadena, CA, pp. 6173], and the 2003 near-perihelion opposition when ice clouds were generally limited to topographically high regions (volcano cap clouds) but airborne dust was more common [Martin, L.J., Zurek, R.W., 1993. J. Geophys. Res. 98 (E2), 3221-3246]. The heart of the technique is to transform the data into a vector space along the dimensions of greatest spectral variance and then choose endmembers based on these new "trait" dimensions. This is done through a target transformation technique, comparing linear combinations of the principal components to a mineral spectral library. In general Mars can be modeled, on the whole, with only three spectral endmembers which account for almost 99% of the data variance. This is similar to results in the thermal infrared with Mars Global Surveyor Thermal Emission Spectrometer data [Bandfield, J.L., Hamilton, V.E., Christensen, P.R., 2000. Science 287, 1626-1630]. The globally recovered surface endmembers can be used as inputs to radiative transfer modeling in order to measure ice abundance in martian clouds [Klassen, D.R., Bell III, J.F., 2002. Bull. Am. Astron. Soc. 34, 865] and a preliminary test of this technique is also presented.
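The target transformation step — testing whether a library spectrum can be expressed as a linear combination of the mean spectrum and the leading principal components — can be sketched on toy data. The "ice" and "dust" endmembers below are invented, not from any spectral library.

```python
import numpy as np

rng = np.random.default_rng(8)
wav = np.linspace(1.0, 2.5, 80)                       # wavelength grid (microns)
ice = np.exp(-((wav - 1.5) / 0.1) ** 2)               # toy endmember spectra
dust = 0.5 + 0.2 * wav
mixtures = np.array([a * ice + (1 - a) * dust for a in rng.uniform(0, 1, 40)])
mixtures += 0.01 * rng.normal(size=mixtures.shape)    # observation noise

# Principal components of the observed spectra
anom = mixtures - mixtures.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
basis = np.vstack([mixtures.mean(axis=0), Vt[:2]])    # mean spectrum + top-2 PCs

# Target transformation: least-squares fit of a library spectrum in PC space
coef, *_ = np.linalg.lstsq(basis.T, ice, rcond=None)
residual = np.linalg.norm(basis.T @ coef - ice) / np.linalg.norm(ice)
print(residual)   # small: 'ice' lies in the span of the mean + leading PCs
```

A small fit residual is the evidence that a candidate library spectrum is a plausible endmember of the image cube, which is how a handful of PCs can be matched against a large mineral library.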
Technology Transfer Automated Retrieval System (TEKTRAN)
Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (P...
Quince (Cydonia oblonga miller) fruit characterization using principal component analysis.
Silva, Branca M; Andrade, Paula B; Martins, Rui C; Valentão, Patrícia; Ferreres, Federico; Seabra, Rosa M; Ferreira, Margarida A
2005-01-12
This paper presents a large amount of data on the composition of quince fruit with regard to phenolic compounds, organic acids, and free amino acids. Subsequently, principal component analysis (PCA) is carried out to characterize this fruit. The main purposes of this study were (i) the clarification of the interactions among three factors-quince fruit part, geographical origin of the fruits, and harvesting year-and the phenolic, organic acid, and free amino acid profiles; (ii) the classification of the possible differences; and (iii) the possible correlation among the contents of phenolics, organic acids, and free amino acids in quince fruit. With these aims, quince pulp and peel from nine geographical origins of Portugal, harvested in three consecutive years, for a total of 48 samples, were studied. PCA was performed to assess the relationship among the different components of quince fruit phenolics, organic acids, and free amino acids. Phenolics determination was the most interesting. The difference between pulp and peel phenolic profiles was more apparent during PCA. Two PCs accounted for 81.29% of the total variability, PC1 (74.14%) and PC2 (7.15%). PC1 described the difference between the contents of caffeoylquinic acids (3-O-, 4-O-, and 5-O-caffeoylquinic acids and 3,5-O-dicaffeoylquinic acid) and flavonoids (quercetin 3-galactoside, rutin, kaempferol glycoside, kaempferol 3-glucoside, kaempferol 3-rutinoside, quercetin glycosides acylated with p-coumaric acid, and kaempferol glycosides acylated with p-coumaric acid). PC2 related the content of 4-O-caffeoylquinic acid with the contents of 5-O-caffeoylquinic and 3,5-O-dicaffeoylquinic acids. PCA of phenolic compounds enables a clear distinction between the two parts of the fruit. The data presented herein may serve as a database for the detection of adulteration in quince derivatives. PMID:15631517
PRINCIPAL COMPONENT ANALYSIS OF SLOAN DIGITAL SKY SURVEY STELLAR SPECTRA
McGurk, Rosalie C.; Kimball, Amy E.; Ivezic, Zeljko
2010-03-15
We apply Principal Component Analysis (PCA) to ~100,000 stellar spectra obtained by the Sloan Digital Sky Survey (SDSS). In order to avoid strong nonlinear variation of spectra with effective temperature, the sample is binned into 0.02 mag wide intervals of the g - r color (-0.20 < g - r < 0.90, roughly corresponding to MK spectral types A3-K3), and PCA is applied independently for each bin. In each color bin, the first four eigenspectra are sufficient to describe the observed spectra within the measurement noise. We discuss correlations of eigencoefficients with metallicity and gravity estimated by the Sloan Extension for Galactic Understanding and Exploration Stellar Parameters Pipeline. The resulting high signal-to-noise mean spectra and the other three eigenspectra are made publicly available. These data can be used to generate high-quality spectra for an arbitrary combination of effective temperature, metallicity, and gravity within the parameter space probed by the SDSS. The SDSS stellar spectroscopic database and the PCA results presented here offer a convenient method to classify new spectra, to search for unusual spectra, to train various spectral classification methods, and to synthesize accurate colors in arbitrary optical bandpasses.
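The core claim, that a handful of eigenspectra reconstruct the observed spectra to within the noise, can be illustrated with a small numpy sketch on synthetic spectra (all data here are invented; the SDSS pipeline is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n_spec, n_pix = 200, 300
x = np.linspace(0, 1, n_pix)

# Toy spectra in one "color bin": a smooth continuum plus a few variable lines
basis = np.stack([np.ones(n_pix), x,
                  np.exp(-((x - 0.5) ** 2) / 0.002),
                  np.exp(-((x - 0.2) ** 2) / 0.001)])
coeffs = rng.standard_normal((n_spec, 4))
noise_level = 0.01
spectra = coeffs @ basis + noise_level * rng.standard_normal((n_spec, n_pix))

# PCA via SVD of the mean-centered spectra
mean_spec = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
eigenspectra = Vt[:4]                       # first four eigenspectra
scores = (spectra - mean_spec) @ eigenspectra.T

# Reconstruction with four eigenspectra is accurate to the noise level
recon = mean_spec + scores @ eigenspectra
rms = np.sqrt(np.mean((recon - spectra) ** 2))
```

With four underlying degrees of freedom in the toy data, the four-eigenspectrum reconstruction residual is at the injected noise level, mirroring the "within the measurement noise" statement in the abstract.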
Principal Component Analysis Studies of Turbulence in Optically Thick Gas
NASA Astrophysics Data System (ADS)
Correia, C.; Lazarian, A.; Burkhart, B.; Pogosyan, D.; De Medeiros, J. R.
2016-02-01
In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position-position-velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal -3 spectrum in accordance with the predictions of the Lazarian & Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.
Spatially Weighted Principal Component Regression for High-dimensional Prediction
Shen, Dan; Zhu, Hongtu
2015-01-01
We consider the problem of using high dimensional data residing on graphs to predict a low-dimensional outcome variable, such as disease status. Examples of data include time series and genetic data measured on linear graphs and imaging data measured on triangulated graphs (or lattices), among many others. Many of these data have two key features: spatial smoothness and an intrinsically low dimensional structure. We propose a simple solution based on a general statistical framework, called spatially weighted principal component regression (SWPCR). In SWPCR, we introduce two sets of weights: importance score weights for the selection of individual features at each node and spatial weights for the incorporation of the neighboring pattern on the graph. We integrate the importance score weights with the spatial weights in order to recover the low dimensional structure of high dimensional data. We demonstrate the utility of our methods through extensive simulations and a real data analysis based on Alzheimer's disease neuroimaging initiative data. PMID:26213452
Principal component analysis for LISA: The time delay interferometry connection
Romano, J.D.; Woan, G.
2006-05-15
Data from the Laser Interferometer Space Antenna (LISA) is expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.
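The idea that some eigenvectors of the noise covariance matrix are insensitive to laser frequency noise can be shown in a toy numpy example (a hypothetical three-channel model, not the actual LISA noise model):

```python
import numpy as np

# Toy 3-channel model: one laser's frequency noise enters every channel
# through a fixed mixing vector v, on top of independent secondary noise.
v = np.array([1.0, -1.0, 2.0])
sigma_laser, sigma_other = 100.0, 1.0
cov = sigma_other**2 * np.eye(3) + sigma_laser**2 * np.outer(v, v)

evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# The two smallest-eigenvalue eigenvectors are orthogonal to v, so data
# projected onto them carry no laser noise at all; these play the role
# of the TDI combinations in the LISA analysis.
quiet = evecs[:, :2]
leakage = np.abs(quiet.T @ v)
```

The eigenvalues of the "quiet" directions equal the secondary-noise variance and do not depend on sigma_laser, which is why a likelihood built on these combinations loses no information while removing the dominant noise.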
FPGA-based real-time blind source separation with principal component analysis
NASA Astrophysics Data System (ADS)
Wilson, Matthew; Meyer-Baese, Uwe
2015-05-01
Principal component analysis (PCA) is a popular technique in reducing the dimension of a large data set so that more informed conclusions can be made about the relationship between the values in the data set. Blind source separation (BSS) is one of the many applications of PCA, where it is used to separate linearly mixed signals into their source signals. This project attempts to implement a BSS system in hardware. Due to unique characteristics of hardware implementation, the Generalized Hebbian Algorithm (GHA), a learning network model, is used. The FPGA used to compile and test the system is the Altera Cyclone III EP3C120F780I7.
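For reference, the Generalized Hebbian Algorithm mentioned above updates a weight matrix sample by sample using Sanger's rule, dW = lr * (y x^T - LT[y y^T] W), where LT keeps the lower triangle. A software sketch on invented 2-D data (the FPGA implementation itself is fixed-point hardware and is not shown):

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D data whose leading principal axis is along (1, 1)
X = rng.standard_normal((5000, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])
X = X @ np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)

def gha(X, n_components=2, lr=1e-3, epochs=5):
    """Generalized Hebbian Algorithm (Sanger's rule), sample by sample."""
    W = rng.standard_normal((n_components, X.shape[1])) * 0.1
    for _ in range(epochs):
        for x in X:
            y = W @ x
            # Hebbian term minus lower-triangular decorrelation term
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

W = gha(X)
pc1 = np.array([1.0, 1.0]) / np.sqrt(2)   # true first principal direction
alignment = abs(W[0] @ pc1) / np.linalg.norm(W[0])
```

The first learned row converges to a unit vector along the leading eigenvector of the data covariance; the per-sample update with no matrix inversions is what makes the rule attractive for hardware.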
Visualization of learning in multilayer perceptron networks using principal component analysis.
Gallagher, M; Downs, T
2003-01-01
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as backpropagation and can also be used to provide insight into the learning process and the nature of the error surface. PMID:18238154
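The visualization idea, recording the network's weight vector at each training step and projecting the trajectory onto its first two principal components, can be sketched as follows (a toy linear model stands in for the MLP; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny regression "network": gradient descent, snapshotting the weight
# vector at every step, as one would for an MLP's concatenated weights.
X = rng.standard_normal((100, 10))
w_true = rng.standard_normal(10)
y = X @ w_true

w = np.zeros(10)
snapshots = []
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(X)
    w -= 0.05 * grad
    snapshots.append(w.copy())
traj = np.array(snapshots)

# PCA of the weight trajectory: project onto the top two components
centered = traj - traj.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
path2d = centered @ Vt[:2].T            # 2-D learning trajectory to plot
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

Because learning trajectories are dominated by a few slow directions, the first two components typically capture most of the variance, which is what makes the 2-D plot of path2d informative.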
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and an increase in their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-Validation (R-RMSECV), a robust R² value, and the Robust Component Selection (RCS) statistic.
Hong, Xiaojian; Hong, Xuezhi; He, Sailing
2015-09-01
A low-complexity optical phase noise suppression approach based on recursive principal components elimination, R-PCE, is proposed and theoretically derived for CO-OFDM systems. Through frequency domain principal components estimation and elimination, signal distortion caused by optical phase noise is mitigated by R-PCE. Since matrix inversion and domain transformation are completely avoided, compared with the case of the orthogonal basis expansion algorithm (L = 3) that offers a similar laser linewidth tolerance, the computational complexities of multiple principal components estimation are drastically reduced in the R-PCE by factors of about 7 and 5 for q = 3 and 4, respectively. The feasibility of optical phase noise suppression with the R-PCE and its decision-aided version (DA-R-PCE) in the QPSK/16QAM CO-OFDM system is demonstrated by Monte-Carlo simulations, which verify that R-PCE with only a small number of principal components (q = 3) provides a significantly larger laser linewidth tolerance than conventional algorithms, including the common phase error compensation algorithm and the linear interpolation algorithm. Numerical results show that the optimal performance of R-PCE and DA-R-PCE can be achieved with a moderate q, which is beneficial for low-complexity hardware implementation. PMID:26368499
Inverse spatial principal component analysis for geophysical survey data interpolation
NASA Astrophysics Data System (ADS)
Li, Qingmou; Dehler, Sonya A.
2015-04-01
The starting point for data processing, visualization, and overlay with other data sources in geological applications often involves building a regular grid by interpolation of geophysical measurements. Typically, the sampling interval along survey lines is much higher than the spacing between survey lines because the geophysical recording system is able to operate with a high sampling rate, while the costs and slower speeds associated with operational platforms limit line spacing. However, currently available interpolating methods often smooth data observed with higher sampling rate along a survey line to accommodate the lower spacing across lines, and much of the higher resolution information is not captured in the interpolation process. To address this problem, a method termed inverse spatial principal component analysis (isPCA) is developed. In the isPCA method, a whole profile observation as well as its line position is handled as an entity and a survey collection of line entities is analyzed for interpolation. To test its performance, the developed isPCA method is used to process a simulated airborne magnetic survey from an existing magnetic grid offshore the Atlantic coast of Canada. The interpolation results using the isPCA method and other methods are compared with the original survey grid. It is demonstrated that the isPCA method outperforms the Inverse Distance Weighting (IDW), Kriging (Geostatistical), and MINimum Curvature (MINC) interpolation methods in retaining detailed anomaly structures and restoring original values. In a second test, a high resolution magnetic survey offshore Cape Breton, Nova Scotia, Canada, was processed and the results are compared with other geological information. This example demonstrates the effective performance of the isPCA method in basin structure identification.
RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis.
Dang, Chinh; Radha, Hayder
2015-11-01
Key frame extraction algorithms consider the problem of selecting a subset of the most informative frames from a video to summarize its content. Several applications, such as video summarization, search, indexing, and prints from video, can benefit from extracted key frames of the video under consideration. Most approaches in this class of algorithms work directly with the input video data set, without considering the underlying low-rank structure of the data set. Other algorithms exploit the low-rank component only, ignoring the other key information in the video. In this paper, a novel key frame extraction framework based on robust principal component analysis (RPCA) is proposed. Furthermore, we target the challenging application of extracting key frames from unstructured consumer videos. The proposed framework is motivated by the observation that the RPCA decomposes the input data into: 1) a low-rank component that reveals the systematic information across the elements of the data set and 2) a set of sparse components, each of which contains distinct information about each element in the same data set. The two information types are combined into a single l1-norm-based non-convex optimization problem to extract the desired number of key frames. Moreover, we develop a novel iterative algorithm to solve this optimization problem. The proposed RPCA-based framework does not require shot(s) detection, segmentation, or semantic understanding of the underlying video. Finally, experiments are performed on a variety of consumer and other types of videos. A comparison of the results obtained by our method with the ground truth and with related state-of-the-art algorithms clearly illustrates the viability of the proposed RPCA-based framework. PMID:26087486
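As a hedged illustration of the low-rank/sparse split that RPCA performs (this is a simple GoDec-style alternating scheme on invented data, not the authors' l1-based optimization or their key-frame selection step):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "video": a rank-2 background plus sparse, large "foreground" spikes
n_frames, n_pix = 60, 80
L_true = rng.standard_normal((n_frames, 2)) @ rng.standard_normal((2, n_pix))
S_true = np.zeros((n_frames, n_pix))
mask = rng.random(S_true.shape) < 0.05
S_true[mask] = 8.0 * rng.choice([-1.0, 1.0], size=mask.sum())
M = L_true + S_true

def rpca(M, rank, spike_frac, n_iter=30):
    """Alternate a best rank-r fit with hard-thresholding of the residual."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # low-rank step
        R = M - L
        thresh = np.quantile(np.abs(R), 1.0 - spike_frac)
        S = np.where(np.abs(R) >= thresh, R, 0.0)     # sparse step
    return L, S

L, S = rpca(M, rank=2, spike_frac=0.05)
rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
```

The recovered low-rank part carries the systematic (background) information and the sparse part the per-frame distinct information, which is the decomposition the key-frame framework builds on.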
Bai, Libing; Gao, Bin; Tian, Shulin; Cheng, Yuhua; Chen, Yifan; Tian, Gui Yun; Woo, W L
2013-10-01
Eddy Current Pulsed Thermography (ECPT), an emerging Non-Destructive Testing and Evaluation technique, has been applied for a wide range of materials. The lateral heat diffusion leads to decreasing of temperature contrast between defect and defect-free area. To enhance the flaw contrast, different statistical methods, such as Principal Component Analysis and Independent Component Analysis, have been proposed for thermography image sequences processing in recent years. However, there is lack of direct and detailed independent comparisons in both algorithm implementations. The aim of this article is to compare the two methods and to determine the optimized technique for flaw contrast enhancement in ECPT data. Verification experiments are conducted on artificial and thermal fatigue nature crack detection. PMID:24182145
NASA Astrophysics Data System (ADS)
García-Allende, P. B.; Conde, O. M.; Mirapeix, J.; Cubillas, A. M.; López-Higuera, J. M.
2007-07-01
A data processing method for hyperspectral images is presented. Each image contains the whole diffuse reflectance spectrum of the analyzed material for all the spatial positions along a specific line of vision. This data processing method is composed of two blocks: a data compression unit and a classification unit. Data compression is performed by means of Principal Component Analysis (PCA), and the spectral interpretation algorithm for classification is the Spectral Angle Mapper (SAM). This classification strategy applying PCA and SAM has been successfully tested on the on-line characterization of raw material in the tobacco industry. In this application the desired raw material (tobacco leaves) should be discriminated from other unwanted spurious materials, such as plastic, cardboard, leather, candy paper, etc. Hyperspectral images are recorded by a spectroscopic sensor consisting of a monochromatic camera and a passive Prism-Grating-Prism device. Performance results are compared with a spectral interpretation algorithm based on Artificial Neural Networks (ANN).
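The SAM classification step is straightforward to sketch: each pixel spectrum is assigned to the reference spectrum with the smallest spectral angle, a measure invariant to overall illumination scaling. The reference spectra below are invented placeholders, not measured tobacco data:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle between two spectra; invariant to multiplicative scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reference spectra for two material classes
refs = {"tobacco": np.array([0.2, 0.5, 0.9, 0.4]),
        "plastic": np.array([0.8, 0.7, 0.1, 0.1])}

def classify(pixel):
    """Assign the pixel to the reference with the smallest spectral angle."""
    return min(refs, key=lambda name: spectral_angle(pixel, refs[name]))

# A brighter (scaled) tobacco-like pixel still maps to "tobacco"
label = classify(3.0 * np.array([0.25, 0.5, 0.85, 0.45]))
```

In the paper's pipeline the angles would be computed in the PCA-compressed space rather than on raw spectra, but the decision rule is the same.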
NASA Astrophysics Data System (ADS)
Comber, Alexis J.; Harris, Paul; Tsutsumida, Narumasa
2016-09-01
This study demonstrates the use of a geographically weighted principal components analysis (GWPCA) of remote sensing imagery to improve land cover classification accuracy. A principal components analysis (PCA) is commonly applied in remote sensing but generates global, spatially-invariant results. GWPCA is a local adaptation of PCA that locally transforms the image data, and in doing so, can describe spatial change in the structure of the multi-band imagery, thus directly reflecting that many landscape processes are spatially heterogenic. In this research the GWPCA localised loadings of MODIS data are used as textural inputs, along with GWPCA localised ranked scores and the image bands themselves to three supervised classification algorithms. Using a reference data set for land cover to the west of Jakarta, Indonesia the classification procedure was assessed via training and validation data splits of 80/20, repeated 100 times. For each classification algorithm, the inclusion of the GWPCA loadings data was found to significantly improve classification accuracy. Further, but more moderate improvements in accuracy were found by additionally including GWPCA ranked scores as textural inputs, data that provide information on spatial anomalies in the imagery. The critical importance of considering both spatial structure and spatial anomalies of the imagery in the classification is discussed, together with the transferability of the new method to other studies. Research topics for method refinement are also suggested.
Hurricane properties by principal component analysis of Doppler radar data
NASA Astrophysics Data System (ADS)
Harasti, Paul Robert
A novel approach based on Principal Component Analysis (PCA) of Doppler radar data establishes hurricane properties such as the positions of the circulation centre and wind maxima. The method was developed in conjunction with a new Doppler radar wind model for both mature and weak hurricanes. The tangential wind (Vt) is modeled according to Vt ζ^x = constant, where ζ is the radius and x is an exponent. The maximum Vt occurs at the Radius of Maximum Wind (RMW). For the mature (weak) hurricane case, x = 1 (x < 1) within the RMW, and x = 0.5 (x = 0) beyond the RMW. The radial wind is modeled in a similar fashion in the radial direction with up to two transition radii, but it is also varied linearly in the vertical direction. This is the first Doppler radar wind model to account for the vertical variations in the radial wind. The new method employs an S2-mode PCA on the Doppler velocity data taken from a single PPI scan and arranged sequentially in a matrix according to their azimuth and range coordinates. The first two eigenvectors of both the range and azimuth eigenspaces represent over 95% of the total variance in the modeled data; one eigenvector from each pair is analyzed separately to estimate particular hurricane properties. These include the bearing and range to the hurricane's circulation centre, the RMW, and the transition radii of the radial wind. Model results suggest that greater accuracy is achievable and fewer restrictions apply in comparison to other methods. The PCA method was tested on the Doppler velocity data of Hurricane Erin (1995) and Typhoon Alex (1987). In both cases, the similarity of the eigenvectors to their theoretical counterparts was striking even in the presence of significant missing data. Results from Hurricane Erin were in agreement with concurrent aircraft observations of the wind centre corrected for the storm motion. Such information was not available for Typhoon Alex; however, the results agreed with those from other methods.
A principal component analysis approach to correcting the knee flexion axis during gait.
Jensen, Elisabeth; Lugade, Vipul; Crenshaw, Jeremy; Miller, Emily; Kaufman, Kenton
2016-06-14
Accurate and precise knee flexion axis identification is critical for prescribing and assessing tibial and femoral derotation osteotomies, but is highly prone to marker misplacement-induced error. The purpose of this study was to develop an efficient algorithm for post-hoc correction of the knee flexion axis and test its efficacy relative to other established algorithms. Gait data were collected on twelve healthy subjects using standard marker placement as well as intentionally misplaced lateral knee markers. The efficacy of the algorithm was assessed by quantifying the reduction in knee angle errors. Crosstalk error was quantified from the coefficient of determination (r²) between knee flexion and adduction angles. Mean rotation offset error (αo) was quantified from the knee and hip rotation kinematics across the gait cycle. The principal component analysis (PCA)-based algorithm significantly reduced r² (p<0.001) and caused αo,knee to converge toward 11.9±8.0° of external rotation, demonstrating improved certainty of the knee kinematics. The within-subject standard deviation of αo,hip between marker placements was reduced from 13.5±1.5° to 0.7±0.2° (p<0.001), demonstrating improved precision of the knee kinematics. The PCA-based algorithm performed at levels comparable to a knee abduction-adduction minimization algorithm (Baker et al., 1999) and better than a null space algorithm (Schwartz and Rozumalski, 2005) for this healthy subject population. PMID:27079622
Component evaluation testing and analysis algorithms.
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the series analyzed in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
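The key numerical ingredient, least-squares extraction of a sine component, is linear in amplitude and phase once a trial period is fixed, so a period scan reduces to repeated linear fits. A minimal sketch on an invented series (not the Scilab implementation or the actual sunspot data):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 200.0)

# Toy "sunspot-like" series: an 11-unit cycle plus noise
true_period = 11.0
series = 3.0 * np.sin(2 * np.pi * t / true_period + 0.7) \
         + 0.3 * rng.standard_normal(t.size)

def fit_sine(t, y, period):
    """Least-squares amplitude of a sine of given period (linear in sin/cos)."""
    A = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    (a, b), *_ = np.linalg.lstsq(A, y - y.mean(), rcond=None)
    return np.hypot(a, b)

# Scan trial periods and keep the one with the largest fitted amplitude;
# the paper iterates this kind of fit on each principal component.
periods = np.arange(5.0, 30.0, 0.1)
amps = np.array([fit_sine(t, series, p) for p in periods])
best_period = periods[np.argmax(amps)]
```

After the strongest sine is identified and subtracted, the same scan can be repeated on the residual, yielding the ordered list of sine functions described in the abstract.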
Principal Component Analysis of Long-Lag,Wide-Pulse Gamma-Ray Burst Data
NASA Astrophysics Data System (ADS)
Peng, Zhao-Yang; Liu, Wen-Shuai
2014-09-01
We have carried out a Principal Component Analysis (PCA) of the temporal and spectral variables of 24 long-lag, wide-pulse gamma-ray bursts (GRBs) presented by Norris et al. (2005). Taking all eight temporal and spectral parameters into account, our analysis shows that four principal components are enough to describe the variation of the temporal and spectral data of long-lag bursts. In addition, the first two principal components are dominated by the temporal variables while the third and fourth principal components are dominated by the spectral parameters.
NASA Astrophysics Data System (ADS)
Hu, Chen; Wang, Jian-Min; Ho, Luis C.; Ferland, Gary J.; Baldwin, Jack A.; Wang, Ye
2012-12-01
We report on a spectral principal component analysis (SPCA) of a sample of 816 quasars, selected to have small Fe II velocity shifts with spectral coverage in the rest wavelength range 3500-5500 Å. The sample is explicitly designed to mitigate spurious effects on SPCA induced by Fe II velocity shifts. We improve the algorithm of SPCA in the literature and introduce a new quantity, the fractional-contribution spectrum, that effectively identifies the emission features encoded in each eigenspectrum. The first eigenspectrum clearly records the power-law continuum and very broad Balmer emission lines. Narrow emission lines dominate the second eigenspectrum. The third eigenspectrum represents the Fe II emission and a component of the Balmer lines with kinematically similar intermediate-velocity widths. Correlations between the weights of the eigenspectra and parametric measurements of line strength and continuum slope confirm the above interpretation for the eigenspectra. Monte Carlo simulations demonstrate the validity of our method to recognize cross talk in SPCA and firmly rule out a single-component model for broad Hβ. We also present the results of SPCA for four other samples that contain quasars in bins of larger Fe II velocity shift; similar eigenspectra are obtained. We propose that the Hβ-emitting region has two kinematically distinct components: one with very large velocities whose strength correlates with the continuum shape and another with more modest, intermediate velocities that is closely coupled to the gas that gives rise to Fe II emission.
Radon transform, bispectra, and principal component analysis for RTS invariant image retrieval
NASA Astrophysics Data System (ADS)
Shao, Yuan; Celenk, Mehmet
1999-08-01
An image retrieval method based on a shape similarity measure is presented for multimedia and imaging database systems. In the proposed algorithm, the spatial and spectral properties of images are combined using the Radon transform, bispectra, and principal component analysis. For each model image in the database, the original 2D image data are reduced to a set of 1D projections via the Radon transform, and then a feature vector is calculated from the bispectra of the resultant 1D functions. Principal component analysis is applied to further reduce the dimension of the feature vector so that it can be stored along with the original image in the database at a small cost in memory. The derived feature vector is considered the index or key of the corresponding image, which uniquely identifies the image independent of rotation, translation, and scaling. For image retrieval, the feature vector is computed for a query image and matched against the feature vectors of all the model images in the database using the Tanimoto similarity measure. The most closely matching images are returned as the search results. The proposed technique has been tested on a large image database. The experimental results show that the retrieval accuracy is very high even for query images with low signal-to-noise ratio.
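The final matching step uses the Tanimoto similarity between feature vectors. A minimal sketch with invented feature vectors (the Radon/bispectra feature extraction is omitted):

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity for real-valued feature vectors."""
    dot = np.dot(a, b)
    return dot / (np.dot(a, a) + np.dot(b, b) - dot)

# Hypothetical PCA-reduced feature vectors of three model images
database = {"img1": np.array([0.9, 0.1, 0.3]),
            "img2": np.array([0.2, 0.8, 0.5]),
            "img3": np.array([0.4, 0.4, 0.4])}
query = np.array([0.85, 0.15, 0.35])

# Rank model images by similarity to the query; best matches first
ranked = sorted(database, key=lambda k: tanimoto(query, database[k]),
                reverse=True)
```

The measure equals 1 for identical vectors and decreases as vectors diverge, so sorting by it returns the closest model images first.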
Registration of dynamic dopamine D2 receptor images using principal component analysis.
Acton, P D; Pilowsky, L S; Suckling, J; Brammer, M J; Ell, P J
1997-11-01
This paper describes a novel technique for registering a dynamic sequence of single-photon emission tomography (SPET) dopamine D2 receptor images, using principal component analysis (PCA). Conventional methods for registering images, such as count difference and correlation coefficient algorithms, fail to take into account the dynamic nature of the data, resulting in large systematic errors when registering time-varying images. However, by using principal component analysis to extract the temporal structure of the image sequence, misregistration can be quantified by examining the distribution of eigenvalues. The registration procedures were tested using a computer-generated dynamic phantom derived from a high-resolution magnetic resonance image of a realistic brain phantom. Each method was also applied to clinical SPET images of dopamine D2 receptors, using the ligands iodine-123 iodobenzamide and iodine-123 epidepride, to investigate the influence of misregistration on kinetic modelling parameters and the binding potential. The PCA technique gave highly significant (P<0.001) improvements in image registration, leading to alignment errors in x and y of about 25% of those of the alternative methods, with reductions in autocorrelations over time. It could also be applied to align image sequences which the other methods failed completely to register, particularly 123I-epidepride scans. The PCA method produced data of much greater quality for subsequent kinetic modelling, with an improvement of nearly 50% in the χ² of the fit to the compartmental model, and provided superior quality registration of particularly difficult dynamic sequences. PMID:9371874
Federolf, Peter A
2016-02-01
Human upright posture is maintained by postural movements, which can be quantified by "principal movements" (PMs) obtained through a principal component analysis (PCA) of kinematic marker data. The current study expands the concept of "principal movements" in analogy to Newton's mechanics by defining "principal position" (PP), "principal velocity" (PV), and "principal acceleration" (PA) and demonstrates that a linear combination of PPs and PAs determines the center of pressure (COP) variance in upright standing. Twenty-one subjects equipped with 27 markers distributed over all body segments stood on a force plate while their postural movements were recorded using a standard motion tracking system. A PCA calculated on normalized and weighted posture vectors yielded the PPs and their time derivatives, the PVs and PAs. COP variance explained by the PPs and PAs was obtained through a regression analysis. The first 15 PMs quantified 99.3% of the postural variance and explained 99.60% ± 0.22% (mean ± SD) of the anterior-posterior and 98.82 ± 0.74% of the lateral COP variance in the 21 subjects. Calculation of the PMs thus provides a data-driven definition of variables that simultaneously quantify the state of the postural system (PPs and PVs) and the activity of the neuro-muscular controller (PAs). Since the definition of PPs and PAs is consistent with Newton's mechanics, these variables facilitate studying how mechanical variables, such as the COP motion, are governed by the postural control system. PMID:26768228
A high-performance computing toolset for relatedness and principal component analysis of SNP data.
Zheng, Xiuwen; Levine, David; Shen, Jess; Gogarten, Stephanie M; Laurie, Cathy; Weir, Bruce S
2012-12-15
Genome-wide association studies are widely used to investigate the genetic basis of diseases and traits, but they pose many computational challenges. We developed gdsfmt and SNPRelate (R packages for multi-core symmetric multiprocessing computer architectures) to accelerate two key computations on SNP data: principal component analysis (PCA) and relatedness analysis using identity-by-descent measures. The kernels of our algorithms are written in C/C++ and highly optimized. Benchmarks show the uniprocessor implementations of PCA and identity-by-descent are ∼8-50 times faster than the implementations provided in the popular EIGENSTRAT (v3.0) and PLINK (v1.07) programs, respectively, and can be sped up to 30-300-fold by using eight cores. SNPRelate can analyse tens of thousands of samples with millions of SNPs. For example, our package was used to perform PCA on 55 324 subjects from the 'Gene-Environment Association Studies' consortium studies. PMID:23060615
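The PCA computation that SNPRelate accelerates can be illustrated in plain numpy on a toy genotype matrix. The EIGENSTRAT-style normalization below (center by twice the allele frequency, scale by sqrt(p(1-p))) is standard; the two sub-populations and their allele-frequency model are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy genotypes: 60 samples x 500 SNPs coded 0/1/2 copies of an allele,
# drawn from two hypothetical sub-populations with perturbed frequencies
n, m = 60, 500
f1 = rng.uniform(0.1, 0.5, m)
f2 = np.clip(f1 + rng.normal(0.0, 0.15, m), 0.01, 0.99)
G = np.vstack([rng.binomial(2, f1, (n // 2, m)),
               rng.binomial(2, f2, (n // 2, m))]).astype(float)

# drop monomorphic SNPs, then apply EIGENSTRAT-style normalization
keep = (G.sum(axis=0) > 0) & (G.sum(axis=0) < 2 * n)
G = G[:, keep]
p = G.mean(axis=0) / 2
Z = (G - 2 * p) / np.sqrt(p * (1 - p))

# sample coordinates on the principal components via SVD
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pc1 = U[:, 0] * s[0]

labels = np.r_[np.zeros(n // 2), np.ones(n // 2)]
print(abs(np.corrcoef(pc1, labels)[0, 1]))
```

The first principal component separates the two simulated sub-populations, which is the use case (ancestry structure) that motivates running PCA on tens of thousands of samples.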
Ghosh, Antara; Barman, Soma
2016-06-01
Gene systems are extremely complex, heterogeneous, and noisy in nature, and many statistical tools used to extract relevant features from genes provide fuzzy, ambiguous information. High-dimensional gene expression databases available in the public domain usually contain thousands of genes, so efficient prediction methods are needed for their accurate identification. Euclidean distance measurement and principal component analysis are applied to such databases to identify the genes; in both methods, the prediction algorithm is based on a homology search approach, and digital signal processing techniques are combined with statistical methods to analyse the genes. A two-level decision logic classifies each gene as healthy or cancerous; this binary logic minimizes the prediction error and improves prediction accuracy. The superiority of the method is judged by the receiver operating characteristic curve. PMID:26877227
Bozorgzadeh, Bardia; Covey, Daniel P; Garris, Paul A; Mohseni, Pedram
2015-01-01
This paper reports on a field-programmable gate array (FPGA) implementation of a digital signal processing (DSP) unit for real-time processing of neurochemical data obtained by fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode (CFM). The DSP unit comprises a decimation filter and two embedded processors that process the FSCV data obtained by an oversampling recording front-end and differentiate the target analyte from interferents in real time with a chemometrics algorithm using principal component regression (PCR). Interfaced with an integrated FSCV-sensing front-end, the DSP unit successfully resolves the dopamine response from those of pH change and background-current drift, two common dopamine interferents, in flow injection analysis involving bolus injection of mixed solutions, as well as in biological tests involving electrically evoked, transient dopamine release in the forebrain of an anesthetized rat. PMID:26737451
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. Input data for the algorithm are learning sets of facial images rated by one person. The proposed approach makes it possible to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings is 0.89, which indicates that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Formaggio, A. R.; Dossantos, J. R.; Dias, L. A. V.
1984-01-01
An automatic pre-processing technique called Principal Components (PRINCO) was evaluated for analyzing digitized LANDSAT data on land use and vegetation cover of the Brazilian cerrados. The chosen pilot area, 223/67 of MSS/LANDSAT 3, was classified on a GE Image-100 System through a maximum-likelihood algorithm (MAXVER), and the same procedure was applied to the PRINCO-treated image. PRINCO consists of a linear transformation of the original bands that eliminates the information redundancy of the LANDSAT channels; after PRINCO, only two channels were used, reducing the computational effort. The grey levels of the original channels and the PRINCO channels for the five identified classes (grassland, "cerrado", burned areas, anthropic areas, and gallery forest) were obtained through the MAXVER algorithm, which also provided the average performance for both cases. To evaluate the results, the Jeffreys-Matusita distance (JM-distance) between classes was computed. The classification matrix obtained through MAXVER after PRINCO pre-processing showed approximately the same average performance in class separability.
Identifying apple surface defects using principal components analysis and artificial neural networks
Technology Transfer Automated Retrieval System (TEKTRAN)
Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...
Hypothesis Generation in Latent Growth Curve Modeling Using Principal Components
ERIC Educational Resources Information Center
Davison, Mark L.
2008-01-01
While confirmatory latent growth curve analyses provide procedures for testing hypotheses about latent growth curves underlying data, one must first derive hypotheses to be tested. It is argued that such hypotheses should be generated from a combination of theory and exploratory data analyses. An exploratory components analysis is described and…
SU-E-CAMPUS-T-06: Radiochromic Film Analysis Based On Principal Components
Wendt, R
2014-06-15
Purpose: An algorithm to convert the color image of scanned EBT2 radiochromic film [Ashland, Covington KY] into a dose map was developed based upon a principal component analysis. The sensitive layer of the EBT2 film is colored so that the background streaks arising from variations in thickness and scanning imperfections may be distinguished by color from the dose in the exposed film. Methods: Doses of 0, 0.94, 1.9, 3.8, 7.8, 16, 32 and 64 Gy were delivered to radiochromic films by contact with a calibrated Sr-90/Y-90 source. They were digitized by a transparency scanner. Optical density images were calculated and analyzed by the method of principal components. The eigenimages of the 0.94 Gy film contained predominantly noise, predominantly background streaking, and background streaking plus the source, respectively, in order from the smallest to the largest eigenvalue. Weighting the second and third eigenimages by −0.574 and 0.819, respectively, and summing them plus the constant 0.012 yielded a processed optical density image with negligible background streaking. This same weighted sum was transformed to the red, green and blue space of the scanned images and applied to all of the doses. The curve of processed density in the middle of the source versus applied dose was fit by a two-phase association curve. A film was sandwiched between two polystyrene blocks and exposed edge-on to a different Y-90 source. This measurement was modeled with the GATE simulation toolkit [Version 6.2, OpenGATE Collaboration], and the on-axis depth-dose curves were compared. Results: The transformation defined using the principal component analysis of the 0.94 Gy film minimized streaking in the backgrounds of all of the films. The depth-dose curves from the film measurement and simulation are indistinguishable. Conclusion: This algorithm accurately converts EBT2 film images to dose images while reducing noise and minimizing background streaking. Supported by a sponsored research
Brown, Niklas; Bichler, Sebastian; Fiedler, Meike; Alt, Wilfried
2016-06-01
Detection of neuro-muscular fatigue in strength training is difficult due to missing criterion measures and the complexity of fatigue; thus, a variety of methods are used to determine it. The aim of this study was to use a principal component analysis (PCA) on a multifactorial data set of kinematic measurements to determine fatigue. Twenty participants (experienced in strength training, 60% male) executed 3 sets of 3 exercises at 50% (12 repetitions), 75% (12 repetitions) and 100% of the 12-repetition maximum (12RM). Data were collected with a 3D accelerometer and analysed by a newly developed algorithm that evaluates parameters for each repetition. A PCA with six variables was carried out on the results, and a fatigue factor was computed based on the loadings on the first component. A one-way ANOVA with Bonferroni post hoc analysis was calculated to test for differences between the intensity levels. All six input variables had high loadings on the first component. The ANOVA showed a significant difference between intensities (p < 0.001); post hoc analysis revealed a difference between 100% and the lower intensities (p < 0.05) and no difference between 50 and 75%-12RM. Based on these results, it is possible to distinguish between fatigued and non-fatigued sets of strength training. PMID:27111008
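A toy version of the fatigue-factor computation: six standardized per-repetition variables, a PCA, and a score built from the loadings on the first component. The variable set, drift directions, and noise level are hypothetical, not the study's accelerometer-derived parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical per-repetition variables for 12 repetitions of one set,
# e.g. mean velocity, peak velocity, duration, range of motion, ...
n_rep = 12
drift = np.linspace(0, 1, n_rep)              # underlying fatigue trend
signs = np.array([-1, -1, 1, -1, 1, 1])       # direction each variable drifts
data = drift[:, None] * signs[None, :] + 0.2 * rng.standard_normal((n_rep, 6))

# standardize the variables, then PCA
Z = (data - data.mean(axis=0)) / data.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
loadings = Vt[0]                              # variable loadings on PC1

# fatigue factor: each repetition's score on the first component
fatigue = Z @ loadings
print(np.round(loadings, 2))
```

When all variables share a common trend, they load heavily on PC1, and the PC1 score tracks the trend across repetitions, which is the basis for comparing intensity levels.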
Identification of the isomers using principal component analysis (PCA) method
NASA Astrophysics Data System (ADS)
Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur
2016-03-01
In this work, we have carried out a detailed statistical analysis of experimental mass spectra of xylene isomers. Principal Component Analysis (PCA) was used to identify the isomers, which cannot be distinguished by conventional statistical interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as the energy source for the ionisation processes. The collected data were analysed and interpreted using PCA as a multivariate analysis of these spectra, demonstrating the strength of the method for distinguishing isomers that cannot be identified through conventional mass analysis of the dissociative ionisation of these molecules. The PCA results as a function of the laser pulse energy and the background pressure in the spectrometer are presented in this work.
Statistical shape modeling of human cochlea: alignment and principal component analysis
NASA Astrophysics Data System (ADS)
Poznyakovskiy, Anton A.; Zahnert, Thomas; Fischer, Björn; Lasurashvili, Nikoloz; Kalaidzidis, Yannis; Mürbe, Dirk
2013-02-01
The modeling of the cochlear labyrinth in living subjects is hampered by the insufficient resolution of available clinical imaging methods, which usually provide resolutions no finer than 125 μm. This is too crude to record the position of the basilar membrane and, as a result, to separate even the scala tympani from the other scalae. This problem can be avoided by means of atlas-based segmentation. Specimens can endure higher radiation loads and consequently provide better-resolved images; the resulting surface can then be used as the seed for atlas-based segmentation. To serve this purpose, we have developed a statistical shape model (SSM) of the human scala tympani based on segmentations obtained from 10 μCT image stacks. After segmentation, we aligned the resulting surfaces using Procrustes alignment. The algorithm was slightly modified to accommodate individual models whose nodes do not necessarily correspond to salient features and vary in number between models; correspondence was established by mutual proximity between nodes. Rather than the standard Euclidean norm, we applied an alternative logarithmic norm to improve outlier treatment, and the minimization was done using the BFGS method. We also split the surface nodes along an octree to reduce computational cost. Subsequently, we performed principal component analysis of the training set with the Jacobi eigenvalue algorithm. We expect the resulting method not only to provide a better understanding of interindividual variations in cochlear anatomy, but also to be a step toward individual models for pre-operative diagnostics prior to cochlear implant insertion.
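The alignment-then-PCA pipeline can be sketched with classical orthogonal Procrustes on synthetic shapes. The helix "specimens", noise level, and rotations below are invented, and the paper's proximity-based correspondence and logarithmic norm are not reproduced; this is the textbook version of the two steps.

```python
import numpy as np

rng = np.random.default_rng(5)

def procrustes_align(A, B):
    """Rigidly align point set B (n x 3) onto A by orthogonal Procrustes."""
    Ac, Bc = A - A.mean(axis=0), B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(Bc.T @ Ac)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # restrict to proper rotations
        U[:, -1] *= -1
        R = U @ Vt
    return Bc @ R + A.mean(axis=0)

# toy "specimens": a base helix plus small individual variation,
# then a random rotation and translation per specimen
t = np.linspace(0, 4 * np.pi, 50)
base = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
shapes = []
for _ in range(8):
    s = base + 0.02 * rng.standard_normal(base.shape)
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:        # make q a proper rotation
        q[:, 0] *= -1
    shapes.append(s @ q + rng.standard_normal(3))
aligned = np.array([procrustes_align(base, s) for s in shapes])

# statistical shape model: PCA of the aligned, flattened training shapes;
# the singular values give the variances of the model's shape modes
X = aligned.reshape(8, -1)
modes = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
print(np.max(np.abs(aligned - base)))   # small residual after alignment
```

After alignment removes pose, the remaining variation is pure shape, which is what the SSM's principal components then summarize.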
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented, and a simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could more correctly be named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic activity and the averaged value of the postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided: thanks to the adopted network structure, implementation of the basic Hebbian scheme does not lead to unrealistic growth of the synaptic strengths. PMID:18238065
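The abstract's rule relies on the network's feedback structure for stability, which is hard to reproduce in a few lines. As a stand-in, Oja's rule (the classic stabilized Hebbian principal component extractor) shows the qualitative behavior on synthetic data: a single neuron's weight vector converging to the principal eigenvector of the input covariance.

```python
import numpy as np

rng = np.random.default_rng(6)

# input stream: zero-mean vectors with one dominant covariance direction
d = 5
principal = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
principal /= np.linalg.norm(principal)

def sample():
    return 3.0 * rng.standard_normal() * principal + 0.3 * rng.standard_normal(d)

# Oja's rule: the Hebbian term y*x plus a stabilizing decay -y^2*w,
# standing in here for the paper's feedback-stabilized scheme
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
eta = 0.01
for _ in range(5000):
    x = sample()
    y = w @ x                      # postsynaptic activity
    w += eta * y * (x - y * w)     # weight update
print(abs(w @ principal))          # alignment with the true principal axis
```

The weight vector stays near unit norm and aligns with the dominant direction, the same end behavior the paper obtains without an explicit decay term.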
Principal Component and Linkage Analysis of Cardiovascular Risk Traits in the Norfolk Isolate
Cox, Hannah C.; Bellis, Claire; Lea, Rod A.; Quinlan, Sharon; Hughes, Roger; Dyer, Thomas; Charlesworth, Jac; Blangero, John; Griffiths, Lyn R.
2009-01-01
Objective(s): An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. Methods: This investigation focussed on 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits, and multipoint variance component methods were used to assess genome-wide linkage to the derived factors using SOLAR. A total of 285 of the 377 related individuals were informative for linkage analysis. Results: A total of 4 principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h2 = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosome 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) segregated with principal components 1 (h2 = 0.33) and 4 (h2 = 0.42), respectively. Conclusion(s): This study investigated a number of CVD risk traits in a unique isolated population. Findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c and total triglyceride levels. PMID:19339786
Principal components analysis based control of a multi-dof underactuated prosthetic hand
2010-01-01
Background: Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled as the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degrees-of-freedom (DoF) hand to achieve hand dexterity implies selectively modulating many different EMG signals so that each joint moves independently, which could require significant cognitive effort from the user. Methods: A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms most used in Activities of Daily Living (ADLs). The principal components set was derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and was subsequently used for control. Results: Trials have shown that two independent input signals can successfully control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions: This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis. PMID:20416036
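The two-input control idea can be sketched as follows: record postures, extract two principal components, and map a 2-D command back to all 16 joints. The synergy data here are simulated, not CyberHand recordings, and the joint count is the only detail taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical recorded postures: 50 grasps x 16 joint angles generated
# from two latent "synergies" (simulated stand-in for the sensory data)
synergies = rng.standard_normal((2, 16))
activations = rng.uniform(-1, 1, (50, 2))
postures = activations @ synergies + 0.05 * rng.standard_normal((50, 16))

mean_posture = postures.mean(axis=0)
U, s, Vt = np.linalg.svd(postures - mean_posture, full_matrices=False)
basis = Vt[:2]                  # the first two PCs span most grasp variation

def posture_from_2d(u, v):
    """Map a 2-D control input to a full 16-DoF joint configuration."""
    return mean_posture + u * basis[0] + v * basis[1]

# two input signals suffice to reproduce a recorded grasp closely
target = postures[0]
coords = basis @ (target - mean_posture)
rec = posture_from_2d(coords[0], coords[1])
print(np.abs(rec - target).max())
```

Because the grasp repertoire lies near a two-dimensional subspace, a two-channel EMG-style command is enough to select any posture in it, which is the design rationale described above.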
Yekani Khoei, Elmira; Hassannejad, Reza; Mozaffari Tazehkand, Behzad
2015-01-01
Introduction: Body sensor networks are a key technology for remotely monitoring physiological information, enabling physicians to predict and diagnose different conditions effectively. These networks consist of small sensors with sensing capability but limited computation and energy. Methods: In the present research, a new compression method based on principal component analysis and the wavelet transform is proposed. The method first finds the principal components of the data to increase the coherence, and thereby the similarity between the data and the achievable compression rate. Then, exploiting the wavelet transform, the data are decomposed into different scales. In the restoration process only selected parts are restored, and parts of the data that contain noise are omitted. By omitting the noise, the quality of the transmitted data increases and good compression can be obtained. Results: Pilates exercises were executed by twelve patients with various dysfunctions. The results showed compression ratios of 0.7210, 0.8898, 0.6548, 0.6765, 0.6009, 0.7435, 0.7651, 0.7623, 0.7736, 0.8596, 0.8856 and 0.7102 for the proposed method, versus 0.8256, 0.9315, 0.9340, 0.9509, 0.8998, 0.9556, 0.9732, 0.9580, 0.8046, 0.9448, 0.9573 and 0.9440 for the previous method (Tseng algorithm). Conclusion: Comparison of compression rates and prediction errors with the available results shows the accuracy of the proposed method. PMID:25901292
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Y.
2015-12-01
The results of numerical simulations applying principal component analysis to absorption spectra of the breath of patients with pulmonary diseases are presented. Various methods of experimental data preprocessing are analyzed.
Relationship between gilt behavior and meat quality using principal component analysis.
Ros-Freixedes, R; Sadler, L J; Onteru, S K; Smith, R M; Young, J M; Johnson, A K; Lonergan, S M; Huff-Lonergan, E; Dekkers, J C M; Rothschild, M F
2014-01-01
Pig on-farm behavior has important repercussions on pig welfare and performance, but generally its relationship with meat quality is not well understood. We used principal component analysis to determine the relationship between meat quality traits, feeding patterns, scale activity, and number of conflict-avoidance interactions. The first principal component indicated that gilts with greater daily feed intake stayed longer in the feeder and their meat had increased intramuscular fat (IMF), was lighter in color, and, in the second principal component, had better juiciness, tenderness, chewiness, and flavor. Meat from gilts with lower scale activity scores appeared to have more IMF but greater drip losses (DL). The third principal component suggested that dominant gilts could gain priority access to the feeder, eating more and growing fatter. In conclusion, except for the slight associations with IMF and DL, gilt scale activity and conflict-avoidance behaviors were not good indicators of final meat quality attributes. PMID:23921217
Bellemans, A.; Munafò, A.; Magin, T. E.; Degrez, G.; Parente, A.
2015-06-15
This article considers the development of reduced chemistry models for argon plasmas using Principal Component Analysis (PCA) based methods. Starting from an electronic specific Collisional-Radiative model, a reduction of the variable set (i.e., mass fractions and temperatures) is proposed by projecting the full set onto a reduced basis made up of its principal components, so that the flow governing equations are solved only for the principal components. The proposed approach originates from the combustion community, where Manifold Generated Principal Component Analysis (MG-PCA) has been developed as a successful reduction technique. Applications consider ionizing shock waves in argon. The results obtained show that the use of the MG-PCA technique enables a substantial reduction of the computational time.
Dimension reduction of non-equilibrium plasma kinetic models using principal component analysis
NASA Astrophysics Data System (ADS)
Peerenboom, Kim; Parente, Alessandro; Kozák, Tomáš; Bogaerts, Annemie; Degrez, Gérard
2015-04-01
The chemical complexity of non-equilibrium plasmas poses a challenge for plasma modeling because of the computational load. This paper presents a dimension reduction method for such chemically complex plasmas based on principal component analysis (PCA). PCA is used to identify a low-dimensional manifold in chemical state space that is described by a small number of parameters: the principal components. Reduction is obtained since continuity equations only need to be solved for these principal components and not for all the species. Application of the presented method to a CO2 plasma model including state-to-state vibrational kinetics of CO2 and CO demonstrates the potential of the PCA method for dimension reduction. A manifold described by only two principal components is able to predict the CO2 to CO conversion at varying ionization degrees very accurately.
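A minimal illustration of the reduction described above: trajectories of many species that lie near a two-dimensional manifold are represented, and would be transported, by just two principal components. The kinetics below are invented toy dynamics, not the CO2 state-to-state model.

```python
import numpy as np

rng = np.random.default_rng(8)

# invented kinetics: densities of 30 "species" driven by 2 slow modes
t = np.linspace(0, 5, 200)
modes = np.column_stack([np.exp(-t), np.exp(-t) * np.cos(3 * t)])
mixing = rng.standard_normal((2, 30))
states = modes @ mixing + 1e-3 * rng.standard_normal((200, 30))

# PCA identifies a low-dimensional manifold in chemical state space
X = states - states.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
print(explained[:3])

# transport only 2 principal components instead of 30 species equations
Z = X @ Vt[:2].T                      # reduced variables
recon = Z @ Vt[:2] + states.mean(axis=0)
print(np.abs(recon - states).max())   # reconstruction error stays small
```

Two components capture essentially all of the variance, so continuity equations for the two reduced variables would reproduce the full composition to within the noise, which is the source of the computational saving.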
Burst and Principal Components Analyses of MEA Data Separates Chemicals by Class
Microelectrode arrays (MEAs) detect drug and chemical induced changes in action potential "spikes" in neuronal networks and can be used to screen chemicals for neurotoxicity. Analytical "fingerprinting," using Principal Components Analysis (PCA) on spike trains recorded from prim...
NASA Technical Reports Server (NTRS)
Williams, D. L.; Borden, F. Y.
1977-01-01
Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.
NASA Astrophysics Data System (ADS)
Wang, Ning; Li, Qiong; El-Latif, Ahmed A. Abd; Peng, Jialiang; Niu, Xiamu
2013-04-01
In recent years, verification based on thermal face images has been extensively studied because of its invariance to illumination and immunity to forgery. However, most existing methods have not given full consideration to verification performance and to the singular within-class scatter matrix problem. We propose a novel thermal face verification algorithm named two-directional two-dimensional modified Fisher principal component analysis. First, two-dimensional principal component analysis (2-DPCA) is utilized to extract the optimal projective vector in the row direction. Then, 2-D modified Fisher linear discriminant analysis is implemented in the column direction to overcome the singular within-class scatter matrix problem of the 2-DPCA space. Comparative experiments on the natural visible and infrared facial expression thermal face subdatabase demonstrate that the proposed approach outperforms state-of-the-art methods in terms of verification performance.
Principal component analysis of satellite passive microwave data over sea ice
NASA Astrophysics Data System (ADS)
Rothrock, D. A.; Thomas, Donald R.; Thorndike, Alan S.
1988-03-01
The 10 channels of scanning multichannel microwave radiometer data for the Arctic are examined by correlation, multiple regression, and principal component analyses. Data from April, August, and December 1979 are analyzed separately. Correlations are greater than 0.8 for all pairs of channels except some of those involving the 37-GHz channels. Multiple regression shows a high degree of redundancy in the data; three channels can explain between 94.0 and 99.6% of the total variance. A principal component analysis of the covariance matrix shows that the first two eigenvalues contain 99.7% of the variance. Only the first two principal components contain variance due to the mixture of surface types. Three component mixtures (water, first-year ice, and multiyear ice) can be resolved in two dimensions. The presence of other ice types, such as second-year ice or wet ice, makes determination of ice age ambiguous in some geographic regions. Winds and surface temperature variations cause variations in the first three principal components. The confounding of these variables with mixture of surface types is a major source of error in resolving the mixture. The variance in principal components 3 through 10 is small and entirely due to variability in the pure type signatures. Determination of winds and surface temperature, as well as other variables, from this information is limited by instrument noise and presently unknown large-scale variability in the emissivity of sea ice.
ERIC Educational Resources Information Center
Lautenschlager, Gary J.
The parallel analysis method for determining the number of components to retain in a principal components analysis has received a recent resurgence of support and interest. However, researchers and practitioners desiring to use this criterion have been hampered by the required Monte Carlo analyses needed to develop the criteria. Two recent…
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule? The model rule contains nine major components, as follows: (a) Compliance schedule. (b) Waste...
40 CFR 60.2998 - What are the principal components of the model rule?
Code of Federal Regulations, 2011 CFR
2011-07-01
... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule? The model rule contains nine major components, as follows: (a) Compliance schedule. (b)...
40 CFR 60.2570 - What are the principal components of the model rule?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Construction On or Before November 30, 1999 Use of Model Rule § 60.2570 What are the principal components of the model rule? The model rule contains the eleven major components listed in paragraphs (a)...
Thireou, Trias; Strauss, Ludwig G; Dimitrakopoulou-Strauss, Antonia; Kontaxakis, George; Pavlopoulos, Sotiris; Santos, Andres
2003-01-01
We evaluated the performance of principal component analysis (PCA) applied to dynamic F-18-FDG-PET studies of patients with recurrent colorectal cancer. Principal component images (PCIs) of 17 iteratively reconstructed data sets were visually and quantitatively evaluated, and the F-18-FDG compartment model parameters were estimated using polynomial regression. All structures were present in PCI1, while PCI2 was correlated with the vascular component and PCI3 with the tumor. The vessel density in the tumor was estimated with a correlation coefficient of 0.834. PCA supports the visual interpretation of dynamic F-18-FDG-PET studies, facilitates the application of compartment modeling, and is a promising quantification technique. PMID:12573889
Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising
Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian
2015-01-01
Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and modeled as patches. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. In experiments on clinical images, the proposed AT-PCA method suppresses noise, enhances edges, and improves image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566
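The core patch-group PCA step (without the adaptive search windows, tensor structure, or LMMSE shrinkage of AT-PCA) can be sketched as: gather similar patches, project onto the group's leading principal components, and discard the rest. The patch pattern and noise level are toy values.

```python
import numpy as np

rng = np.random.default_rng(9)

def pca_denoise_group(patches, keep=1):
    """Denoise a group of similar patches (n x p) by keeping only the
    leading principal components of the group."""
    mean = patches.mean(axis=0)
    X = patches - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:keep].T
    return Z @ Vt[:keep] + mean

# toy group of similar 8x8 patches: one clean pattern plus noise
clean = np.outer(np.hanning(8), np.hanning(8)).ravel()
group = clean[None, :] + 0.1 * rng.standard_normal((40, 64))
denoised = pca_denoise_group(group, keep=1)

err_noisy = np.abs(group - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
print(err_noisy, err_denoised)
```

Because the noise spreads across all components while the shared structure concentrates in the mean and the first few, truncating the expansion lowers the error, the same principle AT-PCA applies per search window.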
Detection of single unit activity from the rat vagus using cluster analysis of principal components.
Horn, Charles C; Friedman, Mark I
2003-01-30
In vivo recordings from subdiaphragmatic vagal afferent nerves generally lack the resolution to distinguish single unit activity. Several methods for data acquisition and analysis were combined to produce a high degree of reliability in recording electrophysiological signals from gastrointestinal and hepatic afferent fibers in the rat. Recordings with low noise were achieved by paralysis of the respiratory muscles and by pinning the nerve to a recording platform. Single unit activity was isolated using principal component (PC) analysis and cluster cutting of data in multi-dimensional space (1-3 PCs). Cluster assignments were determined by a semi-automated approach using the k-means algorithm. The accuracy of single unit classification was assessed by checking inter-spike intervals (ISIs) to determine the length of the refractory period, and by cross-correlation analysis to assess whether single units were mistakenly split into more than one cluster. These analyses produced up to four isolated single units from each nerve filament (a bundle of nerve fibers), and typically it was possible to further increase yield by recording from several nerve filaments simultaneously using an array of electrodes. PMID:12573473
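The PC-projection-plus-cluster-cutting step can be sketched with off-the-shelf tools. This is a generic illustration using scikit-learn, not the authors' semi-automated pipeline; the function name and synthetic waveforms in the usage below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(waveforms, n_units, n_pcs=3, seed=0):
    """Sketch of PCA-based spike sorting: project each detected spike
    waveform onto the first few principal components, then cluster the
    projections with k-means (the paper's semi-automated approach was
    similar in spirit, with manual checks of ISIs and cross-correlations)."""
    scores = PCA(n_components=n_pcs).fit_transform(waveforms)
    labels = KMeans(n_clusters=n_units, n_init=10,
                    random_state=seed).fit_predict(scores)
    return scores, labels
```

In practice one would follow this with the refractory-period (ISI) and cross-correlation checks the abstract describes before accepting the clusters as single units.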
Principal component analysis in the wavelet domain: new features for underwater object recognition
NASA Astrophysics Data System (ADS)
Okimoto, Gordon S.; Lemonds, David W.
1999-08-01
Principal component analysis (PCA) in the wavelet domain provides powerful features for underwater object recognition applications. The multiresolution analysis of the Morlet wavelet transform (MWT) is used to pre-process echo returns from targets ensonified by biologically motivated broadband signals. PCA is then used to compress and denoise the resulting time-scale signal representation for presentation to a hierarchical neural network for object classification. Wavelet/PCA features combined with multi-aspect data fusion and neural networks have resulted in impressive underwater object recognition performance using backscatter data generated by simulated dolphin echolocation clicks and bat-like linear frequency modulated upsweeps. For example, wavelet/PCA features extracted from LFM echo returns have resulted in correct classification rates of 98.6 percent over a six-target suite, which includes two mine simulators and four clutter objects. For the same data, ROC analysis of the two-class mine-like versus non-mine-like problem resulted in a probability of detection of 0.981 and a probability of false alarm of 0.032 at the 'optimal' operating point. The wavelet/PCA feature extraction algorithm is currently being implemented in VLSI for use in small, unmanned underwater vehicles designed for mine-hunting operations in shallow water environments.
NASA Astrophysics Data System (ADS)
Dai, Yimian; Wu, Yiquan; Song, Yu
2016-07-01
When facing extremely complex infrared backgrounds, due to the defect of the l1-norm-based sparsity measure, the state-of-the-art infrared patch-image (IPI) model faces a dilemma: either the dim targets are over-shrunk in the separation or the strong cloud edges remain in the target image. In order to suppress the strong edges while preserving the dim targets, a weighted infrared patch-image (WIPI) model is proposed, incorporating structural prior information into the process of infrared small target and background separation. Instead of adopting a global weight, we allocate an adaptive weight to each column of the target patch-image according to its patch structure. The proposed WIPI model is then converted to a column-wise weighted robust principal component analysis (CWRPCA) problem. In addition, a target unlikelihood coefficient is designed based on the steering kernel, serving as the adaptive weight for each column. Finally, in order to solve the CWRPCA problem, a solution algorithm is developed based on the Alternating Direction Method (ADM). Detailed experimental results demonstrate that the proposed method offers a significant improvement over nine other classical or state-of-the-art methods in terms of subjective visual quality, quantitative evaluation indexes and convergence rate.
Integrative and regularized principal component analysis of multiple sources of data.
Liu, Binghui; Shen, Xiaotong; Pan, Wei
2016-06-15
Integration of data of disparate types has become increasingly important to enhancing the power for new discoveries by combining complementary strengths of multiple types of data. One application is to uncover tumor subtypes in human cancer research in which multiple types of genomic data are integrated, including gene expression, DNA copy number, and DNA methylation data. In spite of their successes, existing approaches based on joint latent variable models require stringent distributional assumptions and may suffer from unbalanced scales (or units) of different types of data and non-scalability of the corresponding algorithms. In this paper, we propose an alternative based on integrative and regularized principal component analysis, which is distribution-free, computationally efficient, and robust against unbalanced scales. The new method performs dimension reduction simultaneously on multiple types of data, seeking data-adaptive sparsity and scaling. As a result, in addition to feature selection for each type of data, integrative clustering is achieved. Numerically, the proposed method compares favorably against its competitors in terms of accuracy (in identifying hidden clusters), computational efficiency, and robustness against unbalanced scales. In particular, compared with a popular method, the new method was competitive in identifying tumor subtypes associated with distinct patient survival patterns when applied to a combined analysis of DNA copy number, mRNA expression, and DNA methylation data in a glioblastoma multiforme study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26756854
Multi-point accelerometric detection and principal component analysis of heart sounds.
De Panfilis, S; Moroni, C; Peccianti, M; Chiru, O M; Vashkevich, V; Parisi, G; Cassone, R
2013-03-01
Heart sounds are a fundamental physiological variable that provides a unique insight into cardiac semiotics. However, a deterministic and unambiguous association between noises and cardiac dynamics is far from being accomplished, owing to the many different overlapping events that contribute to the acoustic emission. Current computer-based capacities in signal detection and processing allow one to move from standard cardiac auscultation, even in its improved forms such as electronic stethoscopes or hi-tech phonocardiography, to the extraction of previously unexplored information on cardiac activity. In this report, we present new equipment for the detection of heart sounds, based on a set of accelerometric sensors placed in contact with the chest skin over the precordial area, which simultaneously measure the vibrations induced on the chest surface by the heart's mechanical activity. By utilizing advanced algorithms for data treatment, such as wavelet decomposition and principal component analysis, we are able to condense the spatially extended acoustic information and to provide a synthetic representation of heart activity. We applied our approach to 30 adults, mixed by gender, age and healthiness, and correlated our results with standard echocardiographic examinations. We obtained a 93% concordance rate with echocardiography in distinguishing healthy from unhealthy hearts, including minor abnormalities such as mitral valve prolapse. PMID:23400007
Robust principal component analysis-based four-dimensional computed tomography
NASA Astrophysics Data System (ADS)
Gao, Hao; Cai, Jian-Feng; Shen, Zuowei; Zhao, Hongkai
2011-06-01
The purpose of this paper for four-dimensional (4D) computed tomography (CT) is threefold. (1) A new spatiotemporal model is presented from the matrix perspective with the row dimension in space and the column dimension in time, namely the robust PCA (principal component analysis)-based 4D CT model. That is, instead of viewing the 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we perceive it as a mixture of low-rank matrix and sparse matrix to explore the maximum temporal coherence of the spatial structure among phases. Here the low-rank matrix corresponds to the 'background' or reference state, which is stationary over time or similar in structure; the sparse matrix stands for the 'motion' or time-varying component, e.g., heart motion in cardiac imaging, which is often either approximately sparse itself or can be sparsified in the proper basis. Besides 4D CT, this robust PCA-based 4D CT model should be applicable in other imaging problems for motion reduction or/and change detection with the least amount of data, such as multi-energy CT, cardiac MRI, and hyperspectral imaging. (2) A dynamic strategy for data acquisition, i.e. a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy with far fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases while reducing redundant measurements of the common background structure. (3) An accurate, efficient, yet simple-to-implement algorithm based on the split Bregman method is developed for solving the model problem with sparse representation in tight frames. PMID:21540490
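The low-rank-plus-sparse decomposition at the heart of the model can be sketched with a generic principal component pursuit solver. This is a minimal augmented-Lagrangian iteration on a fully sampled matrix, not the paper's split Bregman algorithm with tight-frame sparsification, and it ignores the CT projection operator entirely; parameter choices follow the common defaults for this kind of solver.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca(D, lam=None, n_iter=100):
    """Minimal robust PCA (principal component pursuit) sketch: split D into
    a low-rank part L ('background') plus a sparse part S ('motion') via an
    augmented-Lagrangian iteration. The paper's 4D CT solver is considerably
    more elaborate."""
    lam = lam if lam is not None else 1.0 / np.sqrt(max(D.shape))
    mu = D.size / (4.0 * np.abs(D).sum())
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)                        # low-rank update
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)   # sparse update
        Y += mu * (D - L - S)                                    # dual ascent
        mu = min(mu * 1.2, 1e7)                                  # tighten gradually
    return L, S
```

In the 4D CT setting, rows would index voxels and columns would index phases, so L captures the stationary anatomy and S the phase-varying motion.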
2012-01-01
Background In spite of the advances made in the design of dexterous anthropomorphic hand prostheses, these sophisticated devices still lack adequate control interfaces that would allow amputees to operate them in an intuitive and close-to-natural way. In this study, an anthropomorphic five-fingered robotic hand, actuated by six motors, was used as a prosthetic hand emulator to assess the feasibility of a control approach based on Principal Components Analysis (PCA), specifically conceived to address this problem. Since it was demonstrated elsewhere that the first two principal components (PCs) can describe the whole hand configuration space sufficiently well, the controller employed here inverted the PCA algorithm and allowed a multi-DoF hand to be driven by combining a two-differential-channel EMG input with these two PCs. Hence, the novelty of this approach lay in applying PCA to the challenging problem of best mapping the EMG inputs onto the degrees of freedom (DoFs) of the prosthesis. Methods A clinically viable two-DoF myoelectric controller, exploiting two differential channels, was developed, and twelve able-bodied participants, divided into two groups, volunteered to control the hand in simple grasp trials using forearm myoelectric signals. Task completion rates and times were measured. The first objective (assessed through one group of subjects) was to understand the effectiveness of the approach, i.e., whether it is possible to drive the hand in real time, with reasonable performance, in different grasps, also taking advantage of the direct visual feedback of the moving hand. The second objective (assessed through a different group) was to investigate its intuitiveness, and therefore to assess statistical differences in performance throughout three consecutive days. Results Subjects performed several grasp, transport and release trials with differently shaped objects, operating the hand with the myoelectric PCA-based controller
Image-based pupil plane characterization via principal component analysis for EUVL tools
NASA Astrophysics Data System (ADS)
Levinson, Zac; Burbine, Andrew; Verduijn, Erik; Wood, Obert; Mangat, Pawitter; Goldberg, Kenneth A.; Benk, Markus P.; Wojdyla, Antoine; Smith, Bruce W.
2016-03-01
We present an approach to image-based pupil plane amplitude and phase characterization using models built with principal component analysis (PCA). PCA is a statistical technique to identify the directions of highest variation (principal components) in a high-dimensional dataset. A polynomial model is constructed between the principal components of through-focus intensity for the chosen binary mask targets and pupil amplitude or phase variation. This method separates model building and pupil characterization into two distinct steps, thus enabling rapid pupil characterization following data collection. The pupil plane variation of a zone-plate lens from the Semiconductor High-NA Actinic Reticle Review Project (SHARP) at Lawrence Berkeley National Laboratory will be examined using this method. Results will be compared to pupil plane characterization using a previously proposed methodology where inverse solutions are obtained through an iterative process involving least-squares regression.
Savage, J.C.
1988-01-01
Geodetic measurements of deformation at Long Valley caldera provide two examples of the application of principal component analysis. A 40-line trilateration network surrounding the caldera was surveyed in midsummer 1983, 1984, 1985, 1986, and 1987. Principal component analysis indicates that the observed deformation can be represented by a single coherent source. The time dependence for that source displays a rapid rate of deformation in 1983-1984 followed by a less rapid but uniform rate in the 1984-1987 interval. The spatial factor seems consistent with expansion of a magma chamber beneath the caldera plus some shallow right-lateral slip on a vertical fault in the south moat of the caldera. An independent principal component analysis of the 1982, 1983, 1984, 1985, 1986, and 1987 leveling across the caldera requires two self-coherent sources to explain the deformation. -from Author
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of this work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from the PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability. PMID:23729852
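The modelling step this abstract describes (overall quality as a least-squares function of principal components) is ordinary principal component regression. A minimal numpy sketch with synthetic data; the function name and variables are illustrative, not from the paper.

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression sketch: project the centered attribute
    data onto its leading principal components, then fit the response
    (e.g. overall quality) on the component scores by least squares."""
    Xc = X - X.mean(axis=0)
    # Principal axes from the SVD of the centered data matrix.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    A = np.column_stack([np.ones(len(y)), scores])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ beta
    ss_res = ((y - pred) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return beta, 1.0 - ss_res / ss_tot   # coefficients and R^2
```

Regressing on a few components rather than all raw attributes is what keeps the fit stable when the sensory attributes are strongly correlated, as they typically are in QDA panels.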
Gu, Fei; Wu, Hao
2016-09-01
The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end. PMID:27364333
Choi, Young Hae; Kim, Hye Kyong; Hazekamp, Arno; Erkelens, Cornelis; Lefeber, Alfons W M; Verpoorte, Robert
2004-06-01
The metabolomic analysis of 12 Cannabis sativa cultivars was carried out by ¹H NMR spectroscopy and multivariate analysis techniques. Principal component analysis (PCA) of the ¹H NMR spectra showed a clear discrimination between the samples by principal component 1 (PC1) and principal component 3 (PC3) in the cannabinoid fraction. The loading plot of the PC values obtained from all ¹H NMR signals shows that Δ9-tetrahydrocannabinolic acid (THCA) and cannabidiolic acid (CBDA) are important metabolites for differentiating the cultivars from each other. The discrimination of the cultivars could also be obtained from a water extract containing carbohydrates and amino acids. The levels of sucrose, glucose, asparagine, and glutamic acid are found to be the major discriminating metabolites of these cultivars. This method allows an efficient differentiation between cannabis cultivars without any prepurification steps. PMID:15217272
Bengtsson, M; Wallström, S; Sjöholm, M; Grönlund, R; Anderson, B; Larsson, A; Karlsson, S; Kröll, S; Svanberg, S
2005-08-01
A method combining laser-induced fluorescence and principal component analysis to detect and discriminate between algal and fungal growth on insulator materials has been studied. Eight fungal cultures and four insulator materials have been analyzed. Multivariate classifications were utilized to characterize the insulator material, and fungal growth could readily be distinguished from a clean surface. The results of the principal component analyses make it possible to distinguish between algae infected, fungi infected, and clean silicone rubber materials. The experiments were performed in the laboratory using a fiber-optic fluorosensor that consisted of a nitrogen laser and an optical multi-channel analyzer system. PMID:16105213
NASA Astrophysics Data System (ADS)
Cao, Xiuming; Song, Jinjie; Zhang, Caipo
This work applies principal component analysis and the Choquet integral to construct a model for diagnosing Parkinson's disease. The proper value of the Sugeno measure is vital to a diagnostic model, and this paper provides a method of using principal component analysis to obtain the Sugeno measure. In this diagnostic model, there are two key elements. One is the goodness of fit, i.e., the degree of evidential support for each attribute. The other is the importance of the attribute itself. Instances of Parkinson's disease illustrate that the method is effective.
Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao
2016-03-15
In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low spatial frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components, while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-sized spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-sized aberration terms, the precise phase aberration map is obtained and can thus be compensated by multiplying with its conjugate. Because the phase aberration is estimated from limited but more relevant raw data, the compensation precision is improved and the computation time is significantly reduced. Experimental results demonstrate that the proposed technique achieves both high compensation accuracy and robustness compared with other developed compensation methods. PMID:26977692
Sengupta, S.K.; Boyle, J.S.
1993-05-01
Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution.
ERIC Educational Resources Information Center
Ackermann, Margot Elise; Morrow, Jennifer Ann
2008-01-01
The present study describes the development and initial validation of the Coping with the College Environment Scale (CWCES). Participants included 433 college students who took an online survey. Principal Components Analysis (PCA) revealed six coping strategies: planning and self-management, seeking support from institutional resources, escaping…
Gabor feature-based apple quality inspection using kernel principal component analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
Automated inspection of apple quality involves computer recognition of good apples and blemished apples based on geometric or statistical features derived from apple images. This paper introduces a Gabor feature-based kernel principal component analysis (PCA) method; by combining Gabor wavelet rep...
ERIC Educational Resources Information Center
Brusco, Michael J.; Singh, Renu; Steinley, Douglas
2009-01-01
The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…
NASA Astrophysics Data System (ADS)
Khodasevich, Mikhail A.; Trofimova, Darya V.; Nezalzova, Elena I.
2011-02-01
Principal component analysis of UV-VIS-NIR transmission spectra of matured wine distillates (aged 1-40 years) produced by three Moldavian manufacturers allows one to characterize with sufficient certainty eleven chemical parameters of the considered alcoholic beverages: contents of acetaldehyde, ethyl acetate, furfural, vanillin, syringic aldehyde and acid, etc.
NASA Astrophysics Data System (ADS)
Khodasevich, Mikhail A.; Trofimova, Darya V.; Nezalzova, Elena I.
2010-09-01
Principal component analysis of UV-VIS-NIR transmission spectra of matured wine distillates (aged 1-40 years) produced by three Moldavian manufacturers allows one to characterize with sufficient certainty eleven chemical parameters of the considered alcoholic beverages: contents of acetaldehyde, ethyl acetate, furfural, vanillin, syringic aldehyde and acid, etc.
ERIC Educational Resources Information Center
Hendrix, Dean
2010-01-01
This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…
PRINCIPAL COMPONENT REGRESSION OF NEAR-INFRARED REFLECTANCE SPECTRA FOR BEEF TENDERNESS PREDICTION
Technology Transfer Automated Retrieval System (TEKTRAN)
Tenderness is the most important factor affecting consumer perception of eating quality of meat. In this paper, the development of the principal component regression (PCR) models to relate near-infrared (NIR) reflectance spectra of raw meat to Warner-Bratzler (WB) shear force measurement of cooked m...
40 CFR 60.2570 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (k) of this section. (a) Increments of progress toward compliance. (b) Waste management plan. (c... the model rule? 60.2570 Section 60.2570 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Construction On or Before November 30, 1999 Use of Model Rule § 60.2570 What are the principal components...
Adaptive Spatial Filtering with Principal Component Analysis for Biomedical Photoacoustic Imaging
NASA Astrophysics Data System (ADS)
Nagaoka, Ryo; Yamazaki, Rena; Saijo, Yoshifumi
The photoacoustic (PA) signal is very sensitive to noise generated by peripheral equipment such as the power supply, stepping motor or semiconductor laser. A band-pass filter is not effective because the frequency bandwidth of the PA signal also covers the noise frequencies. The objective of the present study is to reduce this noise by using an adaptive spatial filter with principal component analysis (PCA).
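A generic sketch of the idea (not the authors' adaptive algorithm): interference that is common to all detector channels concentrates in a leading principal component of the multichannel data, so zeroing that component suppresses the noise, at the cost of also removing any signal correlated with it. All names are illustrative.

```python
import numpy as np

def pca_spatial_filter(signals, n_remove=1):
    """Sketch of PCA-based spatial filtering: treat each detector element's
    time trace as one row, find the principal components across channels via
    the SVD, and subtract the leading component(s), which capture noise
    common to all channels (e.g. power-supply interference)."""
    means = signals.mean(axis=1, keepdims=True)
    X = signals - means
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[:n_remove] = 0.0                      # zero out the leading components
    return U @ np.diag(s) @ Vt + means
```

This only helps when the interference dominates and is coherent across the array; an adaptive scheme like the one in the abstract would additionally decide how many components to remove and when.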
40 CFR 60.1580 - What are the principal components of the model rule?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of the model rule? 60.1580 Section 60.1580 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small...
Hip fracture risk estimation based on principal component analysis of QCT atlas: a preliminary study
NASA Astrophysics Data System (ADS)
Li, Wenjun; Kornak, John; Harris, Tamara; Lu, Ying; Cheng, Xiaoguang; Lang, Thomas
2009-02-01
We aim to capture and apply 3-dimensional bone fragility features for fracture risk estimation. Using inter-subject image registration, we constructed a hip QCT atlas comprising 37 patients with hip fractures and 38 age-matched controls. In the hip atlas space, we performed principal component analysis to identify the principal components (eigen images) that showed association with hip fracture. To develop and test a hip fracture risk model based on the principal components, we randomly divided the 75 QCT scans into two groups, one serving as the training set and the other as the test set. We applied this model to estimate a fracture risk index for each test subject, and used the fracture risk indices to discriminate the fracture patients and controls. To evaluate the fracture discrimination efficacy, we performed ROC analysis and calculated the AUC (area under curve). When using the first group as the training group and the second as the test group, the AUC was 0.880, compared to conventional fracture risk estimation methods based on bone densitometry, which had AUC values ranging between 0.782 and 0.871. When using the second group as the training group, the AUC was 0.839, compared to densitometric methods with AUC values ranging between 0.767 and 0.807. Our results demonstrate that principal components derived from hip QCT atlas are associated with hip fracture. Use of such features may provide new quantitative measures of interest to osteoporosis.
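The train/test protocol in the abstract can be sketched end-to-end: learn principal components on the training scans, form a linear risk index on the component scores, and measure fracture discrimination by the AUC. The Fisher discriminant used here is an assumption (the paper's risk model may differ), and the data in the usage below are synthetic.

```python
import numpy as np

def pca_risk_auc(X_train, y_train, X_test, y_test, n_pcs=5):
    """Sketch of the evaluation protocol: PCA on training scans, a linear
    risk index (Fisher discriminant) on component scores, and fracture
    discrimination on held-out subjects measured by the ROC AUC."""
    mean = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    pcs = Vt[:n_pcs].T
    tr, te = (X_train - mean) @ pcs, (X_test - mean) @ pcs
    # Fisher discriminant direction from the training scores.
    m1 = tr[y_train == 1].mean(axis=0)
    m0 = tr[y_train == 0].mean(axis=0)
    Sw = np.cov(tr[y_train == 1], rowvar=False) + np.cov(tr[y_train == 0], rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    risk = te @ w                                   # fracture risk index
    # AUC via the Mann-Whitney rank statistic.
    ranks = np.empty(len(risk))
    ranks[np.argsort(risk)] = np.arange(1, len(risk) + 1)
    n1 = int((y_test == 1).sum()); n0 = len(y_test) - n1
    return (ranks[y_test == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
```

Swapping the training and test groups, as the authors did, simply means calling the function twice with the roles reversed.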
Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis
NASA Astrophysics Data System (ADS)
Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.
2013-06-01
Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450- 950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.
40 CFR 60.1580 - What are the principal components of the model rule?
Code of Federal Regulations, 2011 CFR
2011-07-01
... the model rule? 60.1580 Section 60.1580 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 1999 Use of Model Rule § 60.1580 What are the principal components of the model rule? The model rule.... (d) Monitoring and stack testing. (e) Recordkeeping and reporting. Model Rule—Increments of Progress...
40 CFR 60.5080 - What are the principal components of the model rule?
Code of Federal Regulations, 2011 CFR
2011-07-01
... the model rule? 60.5080 Section 60.5080 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... and Compliance Times for Existing Sewage Sludge Incineration Units Use of Model Rule § 60.5080 What are the principal components of the model rule? The model rule contains the nine major...
Principal Component Analysis: Resources for an Essential Application of Linear Algebra
ERIC Educational Resources Information Center
Pankavich, Stephen; Swanson, Rebecca
2015-01-01
Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
ERIC Educational Resources Information Center
Hunley-Jenkins, Keisha Janine
2012-01-01
This qualitative study explores large, urban, mid-western principal perspectives about cyberbullying and the policy components and practices that they have found effective and ineffective at reducing its occurrence and/or negative effect on their schools' learning environments. More specifically, the researcher was interested in learning more…
ERIC Educational Resources Information Center
Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.
2007-01-01
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…
The Use of Exploratory Factor Analysis and Principal Components Analysis in Communication Research.
ERIC Educational Resources Information Center
Park, Hee Sun; Dailey, Rene; Lemus, Daisy
2002-01-01
Discusses the distinct purposes of principal components analysis (PCA) and exploratory factor analysis (EFA), using two data sets as examples. Reviews the use of each technique in three major communication journals: "Communication Monographs," "Human Communication Research," and "Communication Research." Finds that the use of EFA and PCA indicates…
ERIC Educational Resources Information Center
Chou, Yeh-Tai; Wang, Wen-Chung
2010-01-01
Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that an eigenvalue greater than 1.5 for the first eigenvalue signifies a violation of unidimensionality when there…
A Case of Extreme Simplicity of the Core Matrix in Three-Mode Principal Components Analysis.
ERIC Educational Resources Information Center
Murakami, Takashi; ten Berge, Jos M. F.; Kiers, Henk A. L.
1998-01-01
In three-mode principal components analysis, the P x Q x R core matrix "G" can be transformed to simple structure before it is interpreted. This paper shows that, when P=QR-1, G can be transformed to have nearly all the elements equal to values specified a priori. A closed-form solution for this transformation is offered. (SLD)
Detection of abnormal cardiac activity using principal component analysis--a theoretical study.
Greisas, Ariel; Zafrir, Zohar; Zlochiver, Sharon
2015-01-01
Electrogram-guided ablation has recently been developed to allow better detection and localization of the abnormal atrial activity that may be the source of arrhythmogenicity. Nevertheless, no clear benefit of electrogram-guided ablation over empirical ablation has been established thus far, and there is a clear need to improve the localization of cardiac arrhythmogenic targets for ablation. In this paper, we propose a new approach for the detection and localization of irregular cardiac activity during ablation procedures that is based on dimension reduction algorithms and principal component analysis (PCA). Using an 8×8 electrode array, our method produces manifolds that allow easy visualization and detection of possible arrhythmogenic ablation targets characterized by irregular conduction. We employ mathematical modeling and computer simulations to demonstrate the feasibility of the new approach for two well-established arrhythmogenic sources of irregular conduction: spiral waves and patchy fibrosis. Our results show that the PCA method can differentiate between focal ectopic activity and spiral wave activity, as these two types of activity produce substantially different manifold shapes. Moreover, the technique allows the detection of spiral wave cores and their general meandering and drifting pattern. Fibrotic patches larger than 2 mm² could also be visualized using the PCA method, both for quiescent atrial tissue and for tissue exhibiting spiral wave activity. We envision that this method, contingent on further numerical and experimental validation studies in more complex, realistic geometrical configurations and with clinical data, can improve existing atrial ablation mapping capabilities, thus increasing success rates and optimizing arrhythmia management. PMID:25073163
Principal Component Analyses of Topographic Profiles of Terrestrial and Venusian Uplifts
NASA Astrophysics Data System (ADS)
Stoddard, P. R.; Jurdy, D. M.
2013-12-01
Topographic data can provide valuable insight into some of the fundamental surface features and processes of our nearest terrestrial neighbor, Venus. Many of its topographic features remain enigmatic, even after two decades of post-Magellan study. Past examination of topographic profiles showed some similarities between certain classes of features (hotspots and regiones; rifts and chasmata), but further quantitative analysis could better define correlations between terrestrial and venusian features. We undertake such a quantitative comparison of topographic features on Venus and Earth through Principal Component Analysis (PCA). In Principal Component Analysis, the correlation coefficients for pairs of features are determined from their normalized average topographic profiles. These correlation coefficients are then arranged in a covariance matrix, which is diagonalized to find the eigenvalues and principal components, which can be displayed graphically as profiles. The principal components assess the degree of similarity and variability of the shapes of the average profiles. PCA thus offers an independent and objective mode of comparison. In a preliminary comparison of uplifts on the two planets, PCA was applied to four terrestrial hotspots and three venusian regiones. The trace of the covariance matrix summed to 700, and the first three principal components (values 472, 160, 51) together accounted for over 97% of the shape of the profiles. Thus, the topographic profiles of the 7 uplifts on Venus and Earth can be very nearly described with just 3 independent components. The shapes of the major principal components give some insight into the uplift process on each planet. The first (i.e., largest) component, corresponding to a simple uplift, is positive for all 7 features. The second component, however, differs by planet: negative for the 4 terrestrial hotspots, but positive for the venusian regiones. Indeed, the shape of this component's profile (central low flanked by two
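The covariance-and-eigenvalue procedure described in this abstract can be sketched in a few lines of numpy. The profiles below are synthetic Gaussian-bump uplifts, not the Venus/Earth data; the sketch only illustrates how the fraction of variance captured by the leading components is obtained from normalized profiles.

```python
import numpy as np

def profile_pca(profiles):
    """PCA of a set of 1-D topographic profiles (rows = profiles).

    Each profile is normalized (zero mean, unit variance) before the
    covariance matrix is formed, mirroring the normalized average
    profiles described in the abstract.
    """
    X = np.asarray(profiles, dtype=float)
    # Normalize each profile independently.
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    C = np.cov(X)                      # covariance between profiles
    vals, vecs = np.linalg.eigh(C)     # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]     # largest eigenvalue first
    vals, vecs = vals[order], vecs[:, order]
    explained = vals / vals.sum()      # fraction of variance per component
    return vals, vecs, explained

# Synthetic uplift-like profiles: a shared bump plus small variations.
x = np.linspace(-1, 1, 200)
base = np.exp(-5 * x**2)
rng = np.random.default_rng(0)
profiles = [base + 0.05 * rng.standard_normal(x.size) for _ in range(7)]
vals, vecs, explained = profile_pca(profiles)
print(explained[:3].sum())  # bulk of the variance in the first components
```

With seven similar profiles, the first component captures the shared uplift shape, echoing the dominance of the first eigenvalue reported in the abstract.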
NASA Astrophysics Data System (ADS)
Seo, Jihye; An, Yuri; Lee, Jungsul; Choi, Chulhee
2015-03-01
Indocyanine green (ICG), a near-infrared fluorophore, has been used in the visualization of vascular structure and non-invasive diagnosis of vascular disease. Although many imaging techniques have been developed, there are still limitations in the diagnosis of vascular diseases. We have recently developed a minimally invasive diagnostic system based on ICG fluorescence imaging for sensitive detection of vascular insufficiency. In this study, we used principal component analysis (PCA) to examine the ICG spatiotemporal profile and to obtain pathophysiological information from ICG dynamics. Here we demonstrate that principal components of ICG dynamics in both feet showed significant differences between normal controls and diabetic patients with vascular complications. We extracted the PCA time courses of the first three components and found a distinct pattern in diabetic patients. We propose that PCA of ICG dynamics yields better classification performance compared to fluorescence intensity analysis. We anticipate that specific features of spatiotemporal ICG dynamics can be useful in the diagnosis of various vascular diseases.
Principal component analysis based carrier removal approach for Fourier transform profilometry
NASA Astrophysics Data System (ADS)
Feng, Shijie; Chen, Qian; Zuo, Chao
2015-05-01
To handle the issue of the nonlinear carrier phase due to the divergent illumination commonly adopted in fringe projection measurement, we propose a principal component analysis (PCA) based carrier removal method for Fourier transform profilometry. Using PCA, the method decomposes the nonlinear carrier phase map into several principal components, and the phase of the carrier can be extracted from the first dominant component. Compared with traditional methods, it is effective and requires less human intervention, since no data points need to be collected from a reference plane in advance. Further, the influence of lens distortion is considered, so the carrier can be determined more accurately. Our experiment shows the validity of the proposed approach.
Gautam, R.S.; Singh, D.; Mittal, A.
2007-07-01
This paper proposes an algorithm for hotspot (sub-surface fire) detection in NOAA/AVHRR images of the Jharia region of India, employing Principal Component Analysis (PCA) and a fusion technique. The proposed technique is very simple to implement and more adaptive than thresholding, multi-thresholding, and contextual algorithms. The algorithm takes into account the information of AVHRR channels 1, 2, 3, and 4 and the vegetation indices NDVI and MSAVI. It consists of three steps: (1) detection and removal of cloud and water pixels from the preprocessed AVHRR image and screening out the noise of channel 3, (2) application of PCA to the multi-channel information along with the vegetation index information of the NOAA/AVHRR image to obtain principal components, and (3) fusion of the information in principal components 1 and 2 to classify image pixels as hotspots. Image processing techniques are applied to fuse the information in the first two principal-component images, and no absolute threshold is used to decide whether a particular pixel belongs to the hotspot class; hence, the proposed method is adaptive in nature and works successfully for most AVHRR images, with an average detection accuracy of 87.27% and a false alarm rate of 0.201% when compared with ground truth points in the Jharia region of India.
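A minimal sketch of steps (2) and (3): PCA is applied across co-registered image channels and the first two component images are fused. The random channels, the injected anomaly, and the `hypot` fusion rule are illustrative assumptions, not the authors' exact AVHRR processing.

```python
import numpy as np

def image_pca(channels):
    """PCA across co-registered image channels.

    `channels` is a (n_channels, rows, cols) stack (e.g. AVHRR bands
    plus vegetation indices). Returns the principal-component images,
    strongest component first.
    """
    stack = np.asarray(channels, dtype=float)
    n, r, c = stack.shape
    X = stack.reshape(n, -1)                  # pixels as observations
    X = X - X.mean(axis=1, keepdims=True)     # center each channel
    cov = np.cov(X)                           # n x n channel covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    pcs = (vecs[:, order].T @ X).reshape(n, r, c)
    return pcs

rng = np.random.default_rng(1)
channels = rng.standard_normal((6, 32, 32))
channels[:, 10:12, 10:12] += 4.0              # a correlated "hotspot"
pcs = image_pca(channels)
fused = np.hypot(pcs[0], pcs[1])              # simple fusion of PC1 and PC2
print(np.unravel_index(fused.argmax(), fused.shape))
```

Because the hotspot is correlated across all channels, it dominates the first component and the fused image peaks at its location, with no absolute threshold required.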
Efficient algorithm to compute mutually connected components in interdependent networks.
Hwang, S; Choi, S; Lee, Deokjae; Kahng, B
2015-02-01
Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
Principal Component Analysis of Spectroscopic Imaging Data in Scanning Probe Microscopy
Jesse, Stephen; Kalinin, Sergei V
2009-01-01
The approach to data analysis in the band excitation family of scanning probe microscopies based on principal component analysis (PCA) is explored. PCA utilizes the similarity between spectra within the image to select the relevant response components. For small signal variations within the image, the PCA components coincide with the results of deconvolution using a simple harmonic oscillator model. For strong signal variations, PCA provides an effective approach to rapidly process, de-noise, and compress the data. The extension of PCA to correlation function analysis is demonstrated. The prospects of PCA as a universal tool for data analysis and representation in multidimensional SPMs are discussed.
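The de-noising/compression use of PCA mentioned above amounts to reconstructing each spectrum from the leading components only. A minimal numpy sketch on synthetic spectra (one true mode plus noise; not the band-excitation SPM data):

```python
import numpy as np

def pca_denoise(spectra, n_keep):
    """De-noise/compress a stack of spectra by reconstructing from the
    leading principal components only."""
    X = np.asarray(spectra, float)
    mean = X.mean(axis=0)
    # PCA via SVD of the mean-centered data matrix.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean + U[:, :n_keep] * s[:n_keep] @ Vt[:n_keep]

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 128)
clean = np.outer(rng.uniform(0.5, 1.5, 64), np.sin(6 * t))  # one true mode
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
rec = pca_denoise(noisy, n_keep=1)
err_noisy = np.abs(noisy - clean).mean()
err_rec = np.abs(rec - clean).mean()
print(err_rec < err_noisy)  # reconstruction is closer to the clean data
```

Keeping only the dominant component discards most of the noise while compressing the 64 spectra to one loading vector plus 64 scores.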
Na, Man Gyun; Oh, Seungrohk
2002-11-15
A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a given sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to reduce the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and also simplify the selection of its input signals. Using the residuals between the estimated and measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on eigenvalue and eigenvector analysis, principal component analysis has the purpose of identifying the subspace of the main components from a set of parameters that is sufficient to characterize the whole set. Interpreting the data for analysis as a cloud of points, we find through geometrical transformations the directions in which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (obtained by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps us select from a set of items only the most relevant ones, which cover the variation of the whole set of data. For instance, in the presented sample we started from a questionnaire with 28 items and, by applying principal component analysis, we identified 7 principal components, or main items, a fact that simplifies the further statistical analysis significantly. PMID:21495371
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.
1984-01-01
A field of measured anomalies of some physical variable, relative to their time averages, is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.
NASA Astrophysics Data System (ADS)
Das, Atanu; Mukhopadhyay, Chaitali
2007-10-01
We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide—ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, principal component analysis method was applied in order to give a new insight to protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.
NASA Astrophysics Data System (ADS)
Yan, Li; Liu, Li
2010-07-01
A new method for accurate measurement of the content of textile mixtures by use of Fourier transform near infrared spectroscopy is put forward. The near infrared spectra of 56 samples with different cotton and polyester contents were obtained, of which 41 samples, 10 samples, and 5 samples were used for the calibration set, validation set, and prediction set, respectively. Principal component analysis (PCA) was utilized for spectral data compression, and a principal component regression (PCR) model was developed. The MAE is within 2.9% and the RMSE is less than 3.6% for the validation samples, which is suitable for the prediction of unknown samples. The PCR model was applied to predict unknown samples. Experimental results show that this Fourier transform near infrared spectroscopy approach can be used for quantitative analysis of textile fibers.
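The PCA-plus-PCR pipeline can be sketched directly with numpy. The two-component synthetic "spectra" below stand in for the cotton/polyester measurements and are purely illustrative, as is the 41-sample calibration split.

```python
import numpy as np

def pcr_fit(spectra, y, n_components):
    """Principal component regression: compress spectra with PCA,
    then regress the property of interest on the component scores."""
    X = np.asarray(spectra, float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD of the centered data matrix.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T            # loadings
    T = Xc @ P                         # scores
    # Least-squares fit of y on the scores (plus intercept).
    A = np.column_stack([T, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return mean, P, coef

def pcr_predict(spectrum, mean, P, coef):
    t = (np.asarray(spectrum, float) - mean) @ P
    return t @ coef[:-1] + coef[-1]

# Synthetic two-component "spectra": cotton/polyester-like mixtures.
rng = np.random.default_rng(2)
wl = np.linspace(0, 1, 100)
cotton = np.exp(-((wl - 0.3) / 0.1) ** 2)
polyester = np.exp(-((wl - 0.7) / 0.1) ** 2)
frac = rng.uniform(0, 1, 41)                       # cotton fraction
X = np.outer(frac, cotton) + np.outer(1 - frac, polyester)
X += 0.001 * rng.standard_normal(X.shape)
mean, P, coef = pcr_fit(X, frac, n_components=2)
pred = pcr_predict(X[0], mean, P, coef)
print(abs(pred - frac[0]))                         # small prediction error
```

Compressing the spectra to a few scores before regression is what keeps PCR stable when the wavelengths are highly collinear, as NIR bands typically are.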
NASA Astrophysics Data System (ADS)
Guo, Hong-wei; Su, Bu-xin; Zhang, Jian-liang; Zhu, Meng-yi; Chang, Jian
2013-03-01
An updated approach to refining the core indicators of pulverized coal used for blast furnace injection, based on principal component analysis, is proposed in view of the disadvantages of the existing performance indicator system. The presented method takes into account all the performance indicators of pulverized coal injection, including calorific value, ignition point, combustibility, reactivity, flowability, and grindability. Four core indicators of pulverized coal injection are selected and studied using principal component analysis, namely comprehensive combustibility, comprehensive reactivity, comprehensive flowability, and comprehensive grindability. The newly established core index system not only narrows down the current evaluation indices but also avoids the previous overlap among indicators through a mutually independent index design. Furthermore, a comprehensive property indicator is introduced on the basis of the four core indicators, so that the injection properties of pulverized coal can be evaluated overall.
Magnetic anomaly detection (MAD) of ferromagnetic pipelines using principal component analysis (PCA)
NASA Astrophysics Data System (ADS)
Sheinker, Arie; Moldwin, Mark B.
2016-04-01
The magnetic anomaly detection (MAD) method is used for detection of visually obscured ferromagnetic objects. The method exploits the magnetic field originating from the ferromagnetic object, which constitutes an anomaly in the ambient Earth's magnetic field. Traditionally, MAD is used to detect objects with a magnetic field of dipole structure, where far from the object it can be considered a point source. In the present work, we expand MAD to the case of a non-dipole source, i.e., a ferromagnetic pipeline. We use principal component analysis (PCA) to calculate the principal components, which are then employed to construct an effective detector. Experiments conducted in our lab with real-world data validate the above analysis. The simplicity, low computational complexity, and high detection rate make the proposed detector attractive for real-time, low power applications.
NASA Astrophysics Data System (ADS)
Kunnil, Joseph; Sarasanandarajah, Sivananthan; Chacko, Easaw; Reinisch, Lou
2006-05-01
The fluorescence spectra of Bacillus spores are measured at excitation wavelengths of 280, 310, 340, 370, and 400 nm. When cluster analysis is used with the principal-component analysis, the Bacillus globigii spores can be distinguished from the other species of Bacillus spores (B. cereus, B. popilliae, and B. thuringiensis). To test how robust the identification process is with the fluorescence spectra, the B. globigii is obtained from three separate preparations in different laboratories. Furthermore the fluorescence is measured before and after washing and redrying the B. globigii spores. Using the cluster analysis of the first two or three principal components of the fluorescence spectra, one is able to distinguish B. globigii spores from the other species, independent of preparing or washing the spores.
NASA Astrophysics Data System (ADS)
Milan, S. E.; Carter, J. A.; Korth, H.; Anderson, B. J.
2015-12-01
Principal component analysis is performed on Birkeland or field-aligned current (FAC) measurements from the Active Magnetosphere and Planetary Electrodynamics Response Experiment. Principal component analysis (PCA) identifies the patterns in the FACs that respond coherently to different aspects of geomagnetic activity. The regions 1 and 2 current system is shown to be the most reproducible feature of the currents, followed by cusp currents associated with magnetic tension forces on newly reconnected field lines. The cusp currents are strongly modulated by season, indicating that their strength is regulated by the ionospheric conductance at the foot of the field lines. PCA does not identify a pattern that is clearly characteristic of a substorm current wedge. Rather, a superposed epoch analysis of the currents associated with substorms demonstrates that there is not a single mode of response, but a complicated and subtle mixture of different patterns.
[Infrared spectroscopy analysis of SF6 using multiscale weighted principal component analysis].
Peng, Xi; Wang, Xian-Pei; Huang, Yun-Guang
2012-06-01
Infrared spectroscopy analysis of SF6 and its derivative is an important method for operating state assessment and fault diagnosis of the gas insulated switchgear (GIS). Traditional methods are complicated and inefficient, and the results can vary with different subjects. In the present work, the feature extraction methods in machine learning are recommended to solve such diagnosis problem, and a multiscale weighted principal component analysis method is proposed. The proposed method combines the advantage of standard principal component analysis and multiscale decomposition to maximize the feature information in different scales, and modifies the importance of the eigenvectors in classification. The classification performance of the proposed method was demonstrated to be 3 to 4 times better than that of the standard PCA for the infrared spectra of SF6 and its derivative provided by Guangxi Research Institute of Electric Power. PMID:22870634
NASA Astrophysics Data System (ADS)
Devi, Seema; Panigrahi, Prasanta K.; Pradhan, Asima
2014-12-01
Intrinsic fluorescence spectra of the human normal, cervical intraepithelial neoplasia 1 (CIN1), CIN2, and cervical cancer tissue have been extracted by effectively combining the measured polarized fluorescence and polarized elastic scattering spectra. The efficacy of principal component analysis (PCA) to disentangle the collective behavior from smaller correlated clusters in a dimensionally reduced space in conjunction with the intrinsic fluorescence is examined. This combination unambiguously reveals the biochemical changes occurring with the progression of the disease. The differing activities of the dominant fluorophores, collagen, nicotinamide adenine dinucleotide, flavins, and porphyrin of different grades of precancers are clearly identified through a careful examination of the sectorial behavior of the dominant eigenvectors of PCA. To further classify the different grades, the Mahalanobis distance has been calculated using the scores of selected principal components.
Assessment of models for pedestrian dynamics with functional principal component analysis
NASA Astrophysics Data System (ADS)
Chraibi, Mohcine; Ensslen, Tim; Gottschalk, Hanno; Saadi, Mohamed; Seyfried, Armin
2016-06-01
Many agent-based simulation approaches have been proposed for pedestrian flow. Since such models are applied, e.g., in evacuation studies, their quality and reliability is of vital interest. Pedestrian trajectories are functional data, and thus functional principal component analysis is a natural tool to assess the quality of pedestrian flow models beyond average properties. In this article we conduct functional Principal Component Analysis (PCA) for the trajectories of pedestrians passing through a bottleneck. In this way it is possible to assess the quality of the models not only on the basis of average values but also by considering their fluctuations. We benchmark two agent-based models of pedestrian flow against the experimental data using both average and stochastic features from PCA. Functional PCA proves to be an efficient tool to detect deviations between simulation and experiment and to assess the quality of pedestrian models.
Duforet-Frebourg, Nicolas; Luu, Keurcien; Laval, Guillaume; Bazin, Eric; Blum, Michael G.B.
2016-01-01
To characterize natural selection, various analytical methods for detecting candidate genomic regions have been developed. We propose to perform genome-wide scans for natural selection using principal component analysis (PCA). We show that the common FST index of genetic differentiation between populations can be viewed as the proportion of variance explained by the principal components. Considering the correlations between genetic variants and each principal component provides a conceptual framework to detect genetic variants involved in local adaptation without any prior definition of populations. To validate the PCA-based approach, we consider the 1000 Genomes data (phase 1), comprising 850 individuals from Africa, Asia, and Europe. The number of genetic variants is of the order of 36 million, obtained with a low-coverage sequencing depth (3×). The correlations between genetic variation and each principal component provide well-known targets of positive selection (EDAR, SLC24A5, SLC45A2, DARC), as well as new candidate genes (APPBPP2, TP1A1, RTTN, KCNMA, MYO5C) and noncoding RNAs. In addition to identifying genes involved in biological adaptation, we identify two biological pathways involved in polygenic adaptation that are related to the innate immune system (beta defensins) and to lipid metabolism (fatty acid omega oxidation). An additional analysis of European data shows that a genome scan based on PCA retrieves classical examples of local adaptation even when there are no well-defined populations. PCA-based statistics, implemented in the PCAdapt R package and the PCAdapt fast open-source software, retrieve well-known signals of human adaptation, which is encouraging for future whole-genome sequencing projects, especially when defining populations is difficult. PMID:26715629
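The core statistic, the squared correlation between each variant and each principal component, can be sketched as follows. The simulated genotype matrix with two hidden populations is illustrative, and this sketch is not the PCAdapt implementation.

```python
import numpy as np

def pca_scan(genotypes, n_pc=2):
    """Correlate each genetic variant with the leading principal
    components, as in a PCA-based genome scan. Variants with outlying
    squared correlations are candidates for local adaptation.
    Genotype matrix: individuals x variants, allele counts 0/1/2."""
    G = np.asarray(genotypes, float)
    Gc = G - G.mean(axis=0)                  # center each variant
    U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
    scores = U[:, :n_pc] * s[:n_pc]          # individual PC scores
    # Squared correlation of each variant with each component.
    num = Gc.T @ scores
    corr2 = (num / (np.linalg.norm(Gc, axis=0)[:, None]
                    * np.linalg.norm(scores, axis=0))) ** 2
    return corr2

rng = np.random.default_rng(4)
pop = np.repeat([0, 1], 50)                  # two hidden populations
G = rng.binomial(2, 0.5, (100, 200)).astype(float)
# First 20 variants are strongly differentiated between the populations.
p = np.where(pop[:, None] == 0, 0.05, 0.95)
G[:, :20] = rng.binomial(2, p, size=(100, 20))
corr2 = pca_scan(G)
print(corr2[:20, 0].mean(), corr2[20:, 0].max())
```

PC1 picks up the population structure without it ever being declared, and the differentiated variants stand out through their correlation with it, which is the point of the population-free framework described above.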
NASA Astrophysics Data System (ADS)
Yan, D.; Cecil, T.; Gades, L.; Jacobsen, C.; Madden, T.; Miceli, A.
2016-01-01
We present a method using principal component analysis (PCA) to process x-ray pulses with severe shape variation where traditional optimal filter methods fail. We demonstrate that PCA is able to noise-filter and extract energy information from x-ray pulses despite their different shapes. We apply this method to a dataset from an x-ray thermal kinetic inductance detector which has severe pulse shape variation arising from position-dependent absorption.
APPLICATION OF PRINCIPAL COMPONENT ANALYSIS AND BAYESIAN DECOMPOSITION TO RELAXOGRAPHIC IMAGING
Ochs, M. F.; Stoyanova, R. S.; Brown, T. R.; Rooney, W. D.; Li, X.; Lee, J. H.; Springer, C. S.
1999-05-22
Recent developments in high field imaging have made possible the acquisition of high quality, low noise relaxographic data in reasonable imaging times. The datasets comprise a huge amount of information (>>1 million points), which makes rigorous analysis daunting. Here, the authors present results demonstrating that Principal Component Analysis (PCA) and Bayesian Decomposition (BD) provide powerful methods for relaxographic analysis of T1 recovery curves and editing of tissue type in the resulting images.
NASA Astrophysics Data System (ADS)
Jakovels, Dainis; Lihacova, Ilze; Kuzmina, Ilona; Spigulis, Janis
2013-11-01
Non-invasive and fast primary diagnostics of pigmented skin lesions is required due to the frequent incidence of skin cancer - melanoma. The diagnostic potential of principal component analysis (PCA) for distant skin melanoma recognition is discussed. Processing of the measured clinical multi-spectral images (31 melanomas and 94 nonmalignant pigmented lesions) in the wavelength range of 450-950 nm by means of PCA resulted in 87% sensitivity and 78% specificity for separation between malignant melanomas and pigmented nevi.
NASA Astrophysics Data System (ADS)
Kesikoğlu, M. H.; Atasever, Ü. H.; Özkan, C.
2013-10-01
Change detection refers to the process of identifying changes occurring in nature, or in the state of any object, from observations made at different times, and of quantifying temporal effects by using multitemporal data sets. Many change detection techniques are reported in the literature; they can be grouped under two main headings, supervised and unsupervised change detection. In this study, the aim is to identify land cover changes in a specific area of Kayseri using unsupervised change detection techniques applied to remotely sensed Landsat satellite images from different years. First, image-to-image registration is performed so that the images are referenced to each other. After image enhancement, the image differencing method is applied to produce a gray-scale difference image, which is partitioned into 3 × 3 non-overlapping blocks. Applying principal component analysis to these blocks yields an eigenvector space, from which the principal components are obtained. Finally, the feature vector space consisting of the principal components is partitioned into two clusters, areas with and without change, using Fuzzy C-Means clustering.
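The pipeline just described (difference image, 3 × 3 blocks, PCA feature vectors, two-cluster partition) can be sketched compactly. The images below are synthetic, and plain k-means stands in for Fuzzy C-Means to keep the example short; the block size and number of components follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic co-registered "before"/"after" images (stand-ins for the Landsat scenes).
img1 = rng.random((60, 60))
img2 = img1.copy()
img2[20:40, 20:40] += 0.8                      # simulated land-cover change
diff = np.abs(img1 - img2)                     # gray-scale difference image

# Partition the difference image into 3x3 non-overlapping blocks (400 blocks of 9 pixels).
blocks = diff.reshape(20, 3, 20, 3).swapaxes(1, 2).reshape(-1, 9)

# PCA: eigenvectors of the block covariance matrix give the feature space.
mean = blocks.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov((blocks - mean).T))
features = (blocks - mean) @ vecs[:, ::-1][:, :3]   # project onto top 3 PCs

# Two-cluster k-means (a simple stand-in for Fuzzy C-Means): changed vs unchanged.
centers = features[[0, int(np.argmax(np.linalg.norm(features, axis=1)))]]
for _ in range(20):
    labels = np.argmin(((features[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([features[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])
```

Fuzzy C-Means differs only in assigning each block a soft membership weight rather than a hard label; the PCA feature extraction step is unchanged.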
NASA Astrophysics Data System (ADS)
Duarte, M. S.; Pontes, M. J. C.; Ramos, C. S.
2016-01-01
The differentiation of chemical profiles from Piper arboreum tissues using near infrared (NIR) spectrometry and principal component analysis (PCA) was addressed. The NIR analyses were performed with a small quantity of dried and ground tissues. Differences in the chemical composition of leaf, stem, and root tissues were observed. The results obtained were compared to those produced by gas chromatography-mass spectrometry (GC-MS) as the reference method, confirming the NIR results.
NASA Technical Reports Server (NTRS)
Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen
2005-01-01
Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared Sounder (CrIS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and the Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, super fast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the principal component (PC) scores of these quantities. This leads to significant savings in computational time. The parameterization of the PCRTM model is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for super fast one-dimensional physical retrievals and for Numerical Weather Prediction (NWP) large volume radiance data assimilation applications. The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
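The core idea, predicting a handful of PC scores rather than hundreds of channel radiances, can be illustrated with a toy compression/reconstruction loop. The spectra, grid, and component count below are synthetic assumptions, not the actual PCRTM parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "channel radiance" training spectra spanned by a few smooth shapes.
wn = np.linspace(650, 2750, 500)                      # wavenumber grid (cm^-1)
shapes = np.stack([np.sin(wn / t) for t in (90.0, 200.0, 450.0)])
train = rng.random((300, 3)) @ shapes + 0.001 * rng.standard_normal((300, 500))

# Empirical orthogonal functions (PC basis) derived from the training set.
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
eofs = Vt[:6]                                         # keep 6 PCs

# A fast model would predict these few scores instead of 500 channel radiances;
# here we simply project a new spectrum and reconstruct it from its scores.
new = np.array([0.2, 0.7, 0.4]) @ shapes
pc_scores = (new - mean) @ eofs.T                     # 6 numbers instead of 500
recon = mean + pc_scores @ eofs
rms = float(np.sqrt(np.mean((recon - new) ** 2)))
```

The speedup in PCRTM comes from training fast predictors for the few scores; the projection/reconstruction algebra is the part shown here.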
NASA Astrophysics Data System (ADS)
Dovbeshko, G. I.; Repnytska, O. P.; Pererva, T.; Miruta, A.; Kosenkov, D.
2004-07-01
Conformational analysis of mutated DNA bacteriophages (PLys-23, P23-2, P47; the numbers were assigned by T. Pererva) induced by MS2 virus incorporated in E. coli AB 259 Hfr 3000 has been performed. Surface enhanced infrared absorption (SEIRA) spectroscopy and principal component analysis were applied to this problem. The nucleic acids isolated from the mutated phages had the form of double-stranded DNA with different modifications. The nucleic acid from phage P47 underwent structural rearrangement to the greatest degree. The shape and position of the fine structure of the asymmetric phosphate band at 1071 cm-1, as well as the OH stretching vibration at 3370-3390 cm-1, indicated the appearance of additional OH groups. Z-form features were found in the base vibration region (1694 cm-1) and the sugar region (932 cm-1). A modification of the structure of the P47 phage DNA by Z-fragments has been proposed. The P23-2 and PLys-23 phages also showed numerous minor structural changes. On the basis of the SEIRA spectra, we determined the characteristic parameters of the nucleic acid marker bands used for construction of the principal components. The contribution of different spectral parameters of the nucleic acids to the principal components was estimated.
Generalized multilevel function-on-scalar regression and principal component analysis.
Goldsmith, Jeff; Zipunnikov, Vadim; Schrack, Jennifer
2015-06-01
This manuscript considers regression models for generalized, multilevel functional responses: functions are generalized in that they follow an exponential family distribution and multilevel in that they are clustered within groups or subjects. This data structure is increasingly common across scientific domains and is exemplified by our motivating example, in which binary curves indicating physical activity or inactivity are observed for nearly 600 subjects over 5 days. We use a generalized linear model to incorporate scalar covariates into the mean structure, and decompose subject-specific and subject-day-specific deviations using multilevel functional principal components analysis. Thus, functional fixed effects are estimated while accounting for within-function and within-subject correlations, and major directions of variability within and between subjects are identified. Fixed effect coefficient functions and principal component basis functions are estimated using penalized splines; model parameters are estimated in a Bayesian framework using Stan, a programming language that implements a Hamiltonian Monte Carlo sampler. Simulations designed to mimic the application have good estimation and inferential properties with reasonable computation times for moderate datasets, in both cross-sectional and multilevel scenarios; code is publicly available. In the application we identify effects of age and BMI on the time-specific change in probability of being active over a 24-hour period; in addition, the principal components analysis identifies the patterns of activity that distinguish subjects and days within subjects. PMID:25620473
Principal Component Analysis Characterizes Shared Pathogenetics from Genome-Wide Association Studies
Chang, Diana; Keinan, Alon
2014-01-01
Genome-wide association studies (GWASs) have recently revealed many genetic associations that are shared between different diseases. We propose a method, disPCA, for genome-wide characterization of shared and distinct risk factors between and within disease classes. It flips the conventional GWAS paradigm by analyzing the diseases themselves, across GWAS datasets, to explore their “shared pathogenetics”. The method applies principal component analysis (PCA) to gene-level significance scores across all genes and across GWASs, thereby revealing shared pathogenetics between diseases in an unsupervised fashion. Importantly, it adjusts for potential sources of heterogeneity present between GWAS which can confound investigation of shared disease etiology. We applied disPCA to 31 GWASs, including autoimmune diseases, cancers, psychiatric disorders, and neurological disorders. The leading principal components separate these disease classes, as well as inflammatory bowel diseases from other autoimmune diseases. Generally, distinct diseases from the same class tend to be less separated, which is in line with their increased shared etiology. Enrichment analysis of genes contributing to leading principal components revealed pathways that are implicated in the immune system, while also pointing to pathways that have yet to be explored before in this context. Our results point to the potential of disPCA in going beyond epidemiological findings of the co-occurrence of distinct diseases, to highlighting novel genes and pathways that unsupervised learning suggest to be key players in the variability across diseases. PMID:25211452
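The disPCA step of applying PCA to a matrix of gene-level significance scores (GWAS datasets × genes) can be made concrete on toy data. All sizes and signal strengths below are invented for illustration, and the heterogeneity adjustment described in the abstract is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy gene-level scores (e.g. -log10 p): 10 GWAS datasets x 1000 genes, with two
# disease classes sharing elevated scores on disjoint gene sets.
scores = rng.exponential(0.5, size=(10, 1000))
scores[:5, :50] += 3.0                    # class A signal (genes 0-49)
scores[5:, 50:100] += 3.0                 # class B signal (genes 50-99)

# Centre each gene across studies, then PCA over the disease dimension.
X = scores - scores.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * S[0]                      # leading PC score per disease

# The leading component should separate the two classes (its sign is arbitrary).
separated = np.sign(pc1[:5].mean()) != np.sign(pc1[5:].mean())
```

The gene loadings in `Vt[0]` play the role used for enrichment analysis in the study: genes with large loadings drive the between-class separation.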
NASA Astrophysics Data System (ADS)
Christensen, E. R.; Bzdusek, P. A.
2003-04-01
Anaerobic PCB dechlorination in aquatic sediments is a naturally occurring process that reduces dioxin-like PCB toxicity. The PCB biphenyl structure is kept intact but the number of substituted chlorine atoms is reduced, primarily from the para and meta positions. Flanked para and meta chlorine dechlorination, as in process H/H', appears to be more common in situ than flanked and unflanked para and meta dechlorination, as in process Q. Aroclors that are susceptible to these reactions include 1242, 1248, 1254, and 1260. These dechlorination reactions have recently been modeled by a least squares method for Ashtabula River, Ohio, and Fox River, Wisconsin sediments. Prior to modeling the dechlorination reactions for an ecosystem, it is desirable to generate overall PCB source functions. One method to determine source functions is to use the loading matrices of a factor analytical model. We have developed such models based both on a principal component approach, including nonnegative oblique rotations, and on positive matrix factorization (PMF). While the principal component method first requires an eigenvalue analysis of a covariance matrix, the PMF method is based on a direct least squares analysis that considers the loading and score matrices simultaneously. Loading matrices obtained from the PMF method are somewhat sensitive to the initial guess of the source functions. Preliminary work indicates that a hybrid approach, applying principal components first and then PMF, may offer an optimum solution. The relationship of PMF to conventional chemical mass balance modeling, with or without prior knowledge of source functions, is also discussed.
Multivariate analysis of intracranial pressure (ICP) signal using principal component analysis.
Al-Zubi, N; Momani, L; Al-Kharabsheh, A; Al-Nuaimy, W
2009-01-01
The diagnosis and treatment of hydrocephalus and other neurological disorders often involve the acquisition and analysis of large amounts of intracranial pressure (ICP) signal data. Although the analysis and subsequent interpretation of these data are an essential part of the clinical management of the disorders, they are typically done manually by a trained clinician, and the difficulty in interpreting some of the features of this complex time series can sometimes lead to issues of subjectivity and reliability. This paper presents a method for the quantitative analysis of these data using a multivariate approach based on principal component analysis, with the aim of optimising symptom diagnosis, patient characterisation, and treatment simulation and personalisation. In this method, 10 features are extracted from the ICP signal, and principal components that represent these features are defined and analysed. Results from ICP traces of 40 patients show that the chosen features carry relevant information about the ICP signal and can be represented with a few components of the PCA (approximately 91% of the total variance of the data is represented by the first four components), and that these components can help in characterising subgroups in the patient population that would otherwise not have been apparent. The introduction of supplementary (non-ICP) variables has offered insight into additional groupings and relationships, which may prove to be a fruitful avenue for exploration. PMID:19964826
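The kind of summary quoted above (a few components capturing ~91% of the variance of the extracted features) comes from the eigenvalues of the feature correlation matrix. A generic sketch on synthetic, correlated features (the patient count and feature count echo the abstract, but the data are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy feature matrix: 40 patients x 10 ICP-derived features (e.g. mean ICP,
# pulse amplitude, slow-wave power, ...), built from a few latent factors so
# the columns are correlated and a few PCs capture most of the variance.
latent = rng.standard_normal((40, 3))
mixing = rng.standard_normal((3, 10))
X = latent @ mixing + 0.3 * rng.standard_normal((40, 10))

# Standardize the features, then read explained-variance ratios off the eigenvalues.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Z.T))[::-1]     # descending order
ratio = eigvals / eigvals.sum()
cum4 = float(ratio[:4].sum())       # variance captured by the first four PCs
```

The patient subgroups in the study would then be sought in the score space `Z @ eigvecs[:, :4]`; this sketch only reproduces the variance bookkeeping.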
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-01-01
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms. PMID:26262622
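The "error bound guarantee" step, deciding how many principal components the cluster head must transmit, can be sketched by keeping components until the discarded eigenvalue mass falls within a variance budget. Sensor counts, noise levels, and the 5% budget below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy cluster data: 8 spatially correlated sensors x 200 time steps.
base = np.cumsum(rng.standard_normal(200))            # shared environmental signal
X = base[None, :] * rng.uniform(0.5, 1.5, (8, 1)) + 0.1 * rng.standard_normal((8, 200))

# Eigen-decompose the inter-sensor covariance, largest eigenvalues first.
Xc = X - X.mean(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc))
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# Keep the smallest k whose discarded eigenvalue mass is within a 5% variance budget.
budget = 0.05 * eigvals.sum()
discarded = eigvals.sum() - np.cumsum(eigvals)        # mass left out after keeping j+1 PCs
k = int(np.argmax(discarded <= budget)) + 1

# The cluster head now transmits k score series instead of 8 raw series.
pc_series = eigvecs[:, :k].T @ Xc
recon = eigvecs[:, :k] @ pc_series + X.mean(axis=1, keepdims=True)
```

Because the sensors here share one underlying signal, a single component typically suffices, which is exactly the compression the scheme exploits.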
NASA Astrophysics Data System (ADS)
Kumar, Shailendra; Sharma, Rajiv Kumar
2015-12-01
Over the last four decades, numerous cell formation algorithms have been developed and tested, yet this research remains of interest to this day. Appropriate formation of manufacturing cells is the first step in designing a cellular manufacturing system. In cellular manufacturing, consideration of manufacturing flexibility and production-related data is vital for cell formation, but accounting for such realistic data makes the cell formation problem complex and tedious, and has led to the invention and implementation of highly advanced and complex cell formation methods. In this paper, an effort has been made to develop a simple and easy-to-understand/implement manufacturing cell formation heuristic that considers a number of production and manufacturing flexibility-related parameters. The heuristic minimizes inter-cellular movement cost/time. Further, the proposed heuristic is modified by the application of principal component analysis and Taguchi's method. A numerical example illustrates the approach. A refinement in the results is observed with the adoption of principal component analysis and Taguchi's method.
NASA Astrophysics Data System (ADS)
Li, Xiaozhou; Yang, Tianyue; Li, Siqi; Wang, Deli; Song, Youtao; Zhang, Su
2016-03-01
This paper attempts to investigate the feasibility of using Raman spectroscopy for the diagnosis of colon cancer. Serum taken from 75 healthy volunteers, 65 colon cancer patients and 60 post-operation colon cancer patients was measured in this experiment. In the Raman spectra of all three groups, the Raman peaks at 750, 1083, 1165, 1321, 1629 and 1779 cm-1 assigned to nucleic acids, amino acids and chromophores were consistently observed. All of these six Raman peaks were observed to have statistically significant differences between groups. For quantitative analysis, the multivariate statistical techniques of principal component analysis (PCA) and k nearest neighbour analysis (KNN) were utilized to develop diagnostic algorithms for classification. In PCA, several peaks in the principal component (PC) loadings spectra were identified as the major contributors to the PC scores. Some of the peaks in the PC loadings spectra were also reported as characteristic peaks for colon tissues, which implies correlation between peaks in PC loadings spectra and those in the original Raman spectra. KNN was also performed on the obtained PCs, and a diagnostic accuracy of 91.0% and a specificity of 92.6% were achieved.
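The PCA-plus-KNN classification scheme can be sketched generically: project spectra onto a few PC scores, then classify each sample by majority vote among its nearest neighbours in that low-dimensional space. The synthetic "spectra" and leave-one-out evaluation below are illustrative assumptions, not the study's data or exact protocol:

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy "spectra": two groups differing in the height of one band.
n, d = 40, 200
spectra = rng.standard_normal((n, d)) * 0.1
y = np.repeat([0, 1], n // 2)
spectra[y == 1, 90:100] += 1.0             # group-specific band intensity

# PCA by SVD of the centred spectra; keep the first three PC scores.
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = (spectra - mean) @ Vt[:3].T

def knn_loo(X, labels, k=3):
    """Leave-one-out k-nearest-neighbour accuracy on feature matrix X."""
    correct = 0
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                    # exclude the sample itself
        votes = labels[np.argsort(dist)[:k]]
        correct += int(np.bincount(votes).argmax() == labels[i])
    return correct / len(X)

accuracy = knn_loo(pcs, y)
```

With a strong group-specific band, the first PC carries the separation and KNN classifies nearly perfectly; real serum spectra are far noisier, hence the ~91% figure reported.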
Two Strategies to Speed up Connected Component Labeling Algorithms
Wu, Kesheng; Otoo, Ekow; Suzuki, Kenji
2005-11-13
This paper presents two new strategies to speed up connected component labeling algorithms. The first strategy employs a decision tree to minimize the work performed in the scanning phase of connected component labeling algorithms. The second strategy uses a simplified union-find data structure to represent the equivalence information among the labels. For 8-connected components in a two-dimensional (2D) image, the first strategy reduces the number of neighboring pixels visited from 4 to 7/3 on average. In various tests, using a decision tree decreases the scanning time by a factor of about 2. The second strategy uses a compact representation of the union-find data structure. This strategy significantly speeds up the labeling algorithms. We prove analytically that a labeling algorithm with our simplified union-find structure has the same optimal theoretical time complexity as do the best labeling algorithms. By extensive experimental measurements, we confirm the expected performance characteristics of the new labeling algorithms and demonstrate that they are faster than other optimal labeling algorithms.
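The union-find structure that records label equivalences during the two scanning passes is the easiest part to show in code. Below is a generic two-pass 8-connected labeler with path compression, written as an illustrative sketch, not the authors' optimized implementation (their decision-tree neighbor scan is omitted):

```python
import numpy as np

def label_components(img):
    """Two-pass 8-connected labeling with a union-find over provisional labels."""
    parent = [0]                            # parent[0] reserved for background
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression (halving)
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    h, w = img.shape
    lab = np.zeros((h, w), dtype=int)
    # Pass 1: assign provisional labels, record equivalences between neighbours.
    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            neigh = [lab[y + dy, x + dx]
                     for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                     if 0 <= y + dy and 0 <= x + dx < w and lab[y + dy, x + dx]]
            if neigh:
                m = min(neigh)
                lab[y, x] = m
                for nb in neigh:
                    union(m, nb)
            else:
                parent.append(len(parent))  # fresh provisional label
                lab[y, x] = len(parent) - 1
    # Pass 2: replace provisional labels by their root, then compact to 1..K.
    roots = {0: 0}
    for y in range(h):
        for x in range(w):
            if lab[y, x]:
                r = find(lab[y, x])
                roots.setdefault(r, len(roots))
                lab[y, x] = roots[r]
    return lab, len(roots) - 1

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 1]], dtype=bool)
labels2d, count = label_components(img)     # this image has 3 components
```

The paper's compact union-find replaces the pointer-chasing dictionary/list bookkeeping above with a flat array indexed by label, which is what makes the approach hardware- and cache-friendly.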
Mourka, A.; Mazilu, M.; Wright, E. M.; Dholakia, K.
2013-01-01
The modal characterization of various families of beams is a topic of current interest. We recently reported a new method for the simultaneous determination of both the azimuthal and radial mode indices for light fields possessing orbital angular momentum. The method is based upon probing the far-field diffraction pattern from a random aperture and using the recorded data as a 'training set'. We then transform the observed data into uncorrelated variables using the principal component analysis (PCA) algorithm. Here, we show the generic nature of this approach for the simultaneous determination of the modal parameters of Hermite-Gaussian and Bessel beams. This reinforces the widespread applicability of this method for applications including information processing, spectroscopy and manipulation. Additionally, preliminary results demonstrate reliable decomposition of superpositions of Laguerre-Gaussians, yielding the intensities and relative phases of each constituent mode. Thus, this approach represents a powerful method for characterizing the optical multi-dimensional Hilbert space. PMID:23478330
NASA Astrophysics Data System (ADS)
de Siqueira e Oliveira, Fernanda S.; Giana, Hector E.; Silveira, Landulfo, Jr.
2012-03-01
A method based on Raman spectroscopy has been proposed for the identification of different microorganisms involved in bacterial urinary tract infections. Spectra were collected from different bacterial colonies (Gram-negative: E. coli, K. pneumoniae, P. mirabilis, P. aeruginosa, E. cloacae; and Gram-positive: S. aureus and Enterococcus sp.), grown in culture medium (agar), using a Raman spectrometer with a fiber Raman probe (830 nm). Colonies were scraped from the agar surface and placed on aluminum foil for Raman measurements. After pre-processing, spectra were submitted to a principal component analysis and Mahalanobis distance (PCA/MD) discrimination algorithm. The mean Raman spectra of the different bacterial species show similar bands, with S. aureus well characterized by strong bands related to carotenoids. PCA/MD could discriminate Gram-positive bacteria with sensitivity and specificity of 100%, and Gram-negative bacteria with good sensitivity and high specificity.
NASA Astrophysics Data System (ADS)
de Siqueira e Oliveira, Fernanda SantAna; Giana, Hector Enrique; Silveira, Landulfo
2012-10-01
A method, based on Raman spectroscopy, for identification of different microorganisms involved in bacterial urinary tract infections has been proposed. Spectra were collected from different bacterial colonies (Gram-negative: Escherichia coli, Klebsiella pneumoniae, Proteus mirabilis, Pseudomonas aeruginosa and Enterobacter cloacae, and Gram-positive: Staphylococcus aureus and Enterococcus spp.), grown on culture medium (agar), using a Raman spectrometer with a fiber Raman probe (830 nm). Colonies were scraped from the agar surface and placed on an aluminum foil for Raman measurements. After preprocessing, spectra were submitted to a principal component analysis and Mahalanobis distance (PCA/MD) discrimination algorithm. We found that the mean Raman spectra of different bacterial species show similar bands, and S. aureus was well characterized by strong bands related to carotenoids. PCA/MD could discriminate Gram-positive bacteria with sensitivity and specificity of 100% and Gram-negative bacteria with sensitivity ranging from 58 to 88% and specificity ranging from 87% to 99%.
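The Mahalanobis-distance stage of the PCA/MD scheme, assigning a new spectrum to the nearest group in PC-score space while accounting for each group's scatter, can be sketched on synthetic scores. The group means, covariances, and test point below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy PC scores (2 retained PCs per spectrum) for two bacterial groups;
# the means and covariances are illustrative assumptions.
gram_pos = rng.multivariate_normal([2.0, 0.0], [[0.3, 0.1], [0.1, 0.2]], size=30)
gram_neg = rng.multivariate_normal([-1.0, 1.0], [[0.4, 0.0], [0.0, 0.3]], size=30)

def mahalanobis(x, group):
    """Distance of point x from a group, scaled by the group's covariance."""
    d = x - group.mean(axis=0)
    return float(np.sqrt(d @ np.linalg.inv(np.cov(group.T)) @ d))

# Classify a new spectrum's PC scores by the nearest group in Mahalanobis distance.
new = np.array([1.8, 0.2])
pred = ("Gram-positive" if mahalanobis(new, gram_pos) < mahalanobis(new, gram_neg)
        else "Gram-negative")
```

Unlike Euclidean distance, the Mahalanobis form down-weights directions in which a group's scores are naturally spread, which matters when the retained PCs have very different variances.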
NASA Technical Reports Server (NTRS)
Feldman, Sandra C.
1987-01-01
Methods of applying principal component (PC) analysis to high resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique used as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computationally intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.
NASA Astrophysics Data System (ADS)
Mehrjoo, Saeed; Bashiri, Mahdi
2013-05-01
Production planning and control (PPC) systems have to deal with rising complexity and dynamics. The complexity of planning tasks is due to multiple variables and dynamic factors arising from the uncertainties surrounding PPC. Although the literature on exact scheduling algorithms, simulation approaches, and heuristic methods in production planning is extensive, these approaches can prove inefficient in the face of daily fluctuations in real factories. Decision support systems can provide productive tools for production planners, offering feasible and prompt decisions for effective and robust production planning. In this paper, we propose a robust decision support tool for detailed production planning based on statistical multivariate methods, including principal component analysis and logistic regression. The proposed approach has been applied to a real case in the Iranian automotive industry. In the presence of multisource uncertainties, the results of applying the proposed method to the selected case show that the accuracy of daily production planning increases in comparison with the existing method.
Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy.
Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee
2016-04-30
Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features. PMID:27071414
PCA of PCA: principal component analysis of partial covering absorption in NGC 1365
NASA Astrophysics Data System (ADS)
Parker, M. L.; Walton, D. J.; Fabian, A. C.; Risaliti, G.
2014-06-01
We analyse 400 ks of XMM-Newton data on the active galactic nucleus NGC 1365 using principal component analysis (PCA) to identify model-independent spectral components. We find two significant components and demonstrate that they are qualitatively different from those found in MCG-6-30-15 using the same method. As the variability in NGC 1365 is known to be due to changes in the parameters of a partial covering neutral absorber, this shows that the same mechanism cannot be the driver of variability in MCG-6-30-15. By examining intervals where the spectrum shows relatively low absorption we separate the effects of intrinsic source variability, including signatures of relativistic reflection, from variations in the intervening absorption. We simulate the principal components produced by different physical variations, and show that PCA provides a clear distinction between absorption and reflection as the drivers of variability in AGN spectra. The simulations are shown to reproduce the PCA spectra of both NGC 1365 and MCG-6-30-15, and further demonstrate that the dominant cause of spectral variability in these two sources requires a qualitatively different mechanism.
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
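Of the PCA stages parallelized on the FPGA, the Jacobi eigen-decomposition is the most self-contained. A plain software version of cyclic Jacobi rotations is sketched below as illustrative reference code, not the hardware design:

```python
import numpy as np

def jacobi_eig(A, tol=1e-10, max_sweeps=50):
    """Diagonalize a symmetric matrix by cyclic Jacobi rotations."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)                               # accumulates the eigenvectors
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(A ** 2) - np.sum(np.diag(A) ** 2))
        if off < tol:                           # off-diagonal mass is negligible
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle that zeroes the (p, q) entry.
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J                 # similarity transform
                V = V @ J
    return np.diag(A).copy(), V

M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.0]])
w, V = jacobi_eig(M)        # eigenvalues and orthonormal eigenvectors
```

Jacobi's method suits hardware well because each (p, q) rotation touches only two rows and columns, allowing the rotations within a sweep to be pipelined or parallelized.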
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Spurr, R. J. D.; Shia, R. L.; Yung, Y. L.
2014-12-01
Radiative transfer (RT) computations are an essential component of energy budget calculations in climate models. However, full treatment of RT processes is computationally expensive, prompting the use of 2-stream approximations in operational climate models. This simplification introduces errors of the order of 10% in the top of the atmosphere (TOA) fluxes [Randles et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT simulations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those (few) optical states corresponding to the most important principal components, and correction factors are applied to approximate the radiation fields. Here, we extend the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Comparisons between the new model, called the Universal Principal Component Analysis model for Radiative Transfer (UPCART), 2-stream models (such as those used in climate applications) and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the TOA for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and solar and viewing geometries. We demonstrate that very accurate radiative forcing estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to an exact line-by-line RT model. The model is comparable in speed to 2-stream models, potentially rendering UPCART useful for operational General Circulation Models (GCMs). The operational speed and accuracy of UPCART can be further improved.
Principal components analysis of reward prediction errors in a reinforcement learning task.
Sambrook, Thomas D; Goslin, Jeremy
2016-01-01
Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as the feedback-related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive RPEs and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive RPEs and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlying components. This revealed a single RPE-encoding component responsive to the size of positive RPEs, peaking at ~330 ms and occupying the delta frequency band. Other components responsive to unsigned prediction error size were shown, but no component sensitive to negative RPE size was found. PMID:26196667
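The separation problem the abstract describes — a signed RPE component and an unsigned "salience" component overlapping in time — is exactly what temporal PCA across trials can disentangle. Below is a toy sketch under stated assumptions: the epoch length, component latencies, and amplitude model are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.6, 120)            # hypothetical 600 ms epoch

# Two overlapping waveforms: a signed "RPE" component near 330 ms and a
# broader unsigned "salience" component near 280 ms (assumed shapes).
rpe_shape = np.exp(-0.5 * ((t - 0.33) / 0.03) ** 2)
sal_shape = np.exp(-0.5 * ((t - 0.28) / 0.06) ** 2)

# Trial-by-trial amplitudes: the signed RPE drives one component, its
# absolute value (salience) drives the other.
rpe_size = rng.uniform(-1, 1, size=300)
epochs = (np.outer(rpe_size, rpe_shape)
          + np.outer(np.abs(rpe_size), sal_shape)
          + 0.05 * rng.normal(size=(300, len(t))))

# Temporal PCA across trials separates the overlying components.
epochs_c = epochs - epochs.mean(axis=0)
u, s, vt = np.linalg.svd(epochs_c, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)       # variance fraction per component

# Component scores (u * s) can then be regressed on signed vs. unsigned
# RPE size to test which component encodes which quantity.
scores = u * s
```

In the real study the PCA was applied to multichannel ERP data; the one-channel version here only shows the mechanics of the decomposition.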
Dien, Joseph; Spencer, Kevin M; Donchin, Emanuel
2003-10-01
Recent research indicates that novel stimuli elicit at least two distinct components, the Novelty P3 and the P300. The P300 is thought to be elicited when a context updating mechanism is activated by a wide class of deviant events. The functional significance of the Novelty P3 is uncertain. Identification of the generator sources of the two components could provide additional information about their functional significance. Previous localization efforts have yielded conflicting results. The present report demonstrates that the use of principal components analysis (PCA) results in better convergence with knowledge about functional neuroanatomy than did previous localization efforts. The results are also more convincing than those obtained by two alternative methods, MUSIC-RAP and the Minimum Norm. Source modeling on 129-channel data with BESA and BrainVoyager suggests the P300 has sources in the temporal-parietal junction whereas the Novelty P3 has sources in the anterior cingulate. PMID:14561451
Stashenko, Elena E; Martínez, Jairo R; Ruíz, Carlos A; Arias, Ginna; Durán, Camilo; Salgar, William; Cala, Mónica
2010-01-01
Chromatographic (GC/flame ionization detection, GC/MS) and statistical analyses were applied to the study of essential oils and extracts obtained from flowers, leaves, and stems of Lippia origanoides plants, growing wild in different Colombian regions. Retention indices, mass spectra, and standard substances were used in the identification of 139 substances detected in these essential oils and extracts. Principal component analysis allowed L. origanoides classification into three chemotypes, characterized according to their essential oil major components. Alpha- and beta-phellandrenes, p-cymene, and limonene distinguished chemotype A; carvacrol and thymol were the distinctive major components of chemotypes B and C, respectively. Pinocembrin (5,7-dihydroxyflavanone) was found in L. origanoides chemotype A supercritical fluid (CO2) extract at a concentration of 0.83 ± 0.03 mg/g of dry plant material, which makes this plant an interesting source of an important bioactive flavanone with diverse potential applications in cosmetic, food, and pharmaceutical products. PMID:19950347
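Chemotype classification of this kind amounts to running PCA on a samples-by-compounds composition matrix and observing that samples cluster by their dominant marker compounds. A minimal sketch, with an invented composition matrix standing in for the GC/MS data (the four marker compounds and abundance values are assumptions, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical composition matrix: rows are essential-oil samples, columns
# are relative abundances of four marker compounds
# [p-cymene, limonene, carvacrol, thymol].
chem_a = np.tile([0.40, 0.30, 0.05, 0.05], (10, 1))   # chemotype A
chem_b = np.tile([0.05, 0.05, 0.55, 0.05], (10, 1))   # chemotype B
chem_c = np.tile([0.05, 0.05, 0.05, 0.50], (10, 1))   # chemotype C
x = np.vstack([chem_a, chem_b, chem_c]) + 0.02 * rng.normal(size=(30, 4))

# PCA: project mean-centered compositions onto the first two components.
xc = x - x.mean(axis=0)
u, s, vt = np.linalg.svd(xc, full_matrices=False)
scores = xc @ vt[:2].T     # (30, 2) sample coordinates in the PC plane

# Samples of the same chemotype cluster together in score space.
centroids = np.array([scores[i:i + 10].mean(axis=0) for i in (0, 10, 20)])
```

Plotting `scores` would show three well-separated clusters, one per chemotype, which is the visual basis for the classification reported in the abstract.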
Principal component analysis of global maps of the total electron content
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Bochkarev, V. V.
2014-03-01
In this paper we present results of an analysis of the spatial distribution of the total electron content (TEC), performed with principal component analysis (PCA) using global TEC maps provided by the Jet Propulsion Laboratory (JPL, NASA, USA) for the period from 2004 to 2010. We show that the components obtained from the TEC decomposition depend essentially on the representation of the initial data and the method of their preliminary processing. We propose a technique for data centering that takes into account the influence of diurnal and seasonal factors. We establish a correlation between the amplitudes of the first components of the TEC decomposition (connected with the equatorial anomaly) and the solar activity index F10.7, as well as the flux of high-energy solar wind particles.
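The centering step the authors emphasize — removing the mean diurnal cycle before PCA so that the components reflect variability beyond it — can be sketched as follows. The synthetic "TEC" array, grid size, and time span here are stand-ins, not the JPL maps.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical TEC record: hourly maps over 30 days, each map flattened
# to 72 grid points (a stand-in for the global JPL maps).
hours = np.arange(30 * 24)
diurnal = np.sin(2 * np.pi * (hours % 24) / 24)            # diurnal factor
tec = (10 + 5 * np.outer(diurnal, rng.uniform(0.5, 1.5, 72))
       + 0.3 * rng.normal(size=(len(hours), 72)))

# Centering that removes the mean diurnal cycle: for each grid point,
# subtract the average map observed at the same hour of day.
hod = hours % 24
tec_centered = tec.copy()
for h in range(24):
    tec_centered[hod == h] -= tec[hod == h].mean(axis=0)

# PCA of the centered data isolates variability beyond the diurnal cycle;
# u[:, 0] * s[0] is the amplitude time series of the first component.
u, s, vt = np.linalg.svd(tec_centered, full_matrices=False)
amplitudes = u[:, 0] * s[0]
```

Seasonal centering works the same way with day-of-year bins instead of hour-of-day bins; it is this preprocessing choice that determines what the leading components represent.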
A methodology to estimate probability of occurrence of floods using principal component analysis
NASA Astrophysics Data System (ADS)
Castro Heredia, L. M.; Gironás, J. A.
2014-12-01
Flood events and debris flows are characterized by a very rapid response of basins to precipitation, often resulting in loss of life and property damage. Complex topography with steep slopes and narrow valleys increases the likelihood of these events. An early warning system (EWS) is a tool that allows anticipating a hazardous event, which in turn provides time for an early response that reduces negative impacts. These EWSs can rely on very powerful and computationally demanding models to predict flow discharges and inundation zones, which require data that are typically unavailable. Instead, simpler EWSs based on a statistical analysis of observed hydro-meteorological data could be a good alternative. In this work we propose a methodology for estimating the probability of exceedance of maximum flow discharges using principal components analysis (PCA). In the method we first perform a spatio-temporal cross-correlation analysis between extreme flow data and daily meteorological records for the 15 days prior to the day of the flood event. We then use PCA to create synthetic variables that are representative of the meteorological variables associated with the flood event (i.e., cumulative rainfall and minimum temperature). Finally, we develop a model to explain the probability of exceedance using the principal components. The methodology was applied to a basin in the foothill area of Santiago, Chile, for which all the extreme events between 1970 and 2013 were analyzed. Results show that elevation, rather than distance or location within the contributing basin, is what mainly explains the statistical correlation between meteorological records and flood events. Two principal components were found that explain more than 90% of the total variance of the accumulated rainfalls and minimum temperatures. One component was formed with cumulative rainfall from 3 to 15 days prior to the event, whereas the other one was formed with the minimum temperatures for the last 2 days preceding the event.
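The pipeline described above — reduce correlated antecedent rainfall and temperature records to a few principal components, then model exceedance probability on those components — can be sketched with a plain logistic model. Everything here is synthetic and hypothetical (the predictor windows, the event labels, and the use of logistic regression are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical predictors for 200 events: cumulative rainfall over windows
# of 3..15 days and minimum temperatures for the 2 preceding days
# (15 strongly correlated meteorological variables in total).
rain = np.cumsum(rng.gamma(2.0, 3.0, size=(200, 13)), axis=1)
tmin = rng.normal(5.0, 3.0, size=(200, 2))
x = np.hstack([rain, tmin])

# PCA to two synthetic variables (rainfall- and temperature-dominated).
xc = (x - x.mean(axis=0)) / x.std(axis=0)
u, s, vt = np.linalg.svd(xc, full_matrices=False)
pcs = xc @ vt[:2].T

# Synthetic "exceedance" labels, then a logistic model on the components
# fitted by plain gradient ascent on the log-likelihood.
y = (pcs[:, 0] + rng.normal(size=200) > 1.0).astype(float)
z = np.hstack([pcs, np.ones((200, 1))])   # add intercept column
w = np.zeros(3)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-z @ w))
    w += 0.05 * z.T @ (y - p) / len(y)    # gradient of the log-likelihood

p_exceed = 1.0 / (1.0 + np.exp(-z @ w))   # probability of exceedance
```

Working in the 2-component space rather than the original 15 variables avoids the multicollinearity of overlapping rainfall windows, which is the main reason for the PCA step.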
Dong, Fengxia; Mitchell, Paul D; Colquhoun, Jed
2015-01-01
Measuring farm sustainability performance is a crucial component for improving agricultural sustainability. While extensive assessments and indicators exist that reflect the different facets of agricultural sustainability, because of the relatively large number of measures and interactions among them, a composite indicator that integrates and aggregates over all variables is particularly useful. This paper describes and empirically evaluates a method for constructing a composite sustainability indicator that individually scores and ranks farm sustainability performance. The method first uses non-negative polychoric principal component analysis to reduce the number of variables, to remove correlation among variables and to transform categorical variables into continuous variables. Next the method applies common-weight data envelopment analysis to these principal components to score each farm individually. The method solves for the weights endogenously and allows identification of the practices that matter most in the sustainability evaluation. An empirical application to Wisconsin cranberry farms finds heterogeneity in sustainability practice adoption, implying that some farms could adopt relevant practices to improve the overall sustainability performance of the industry. PMID:25277860
The application of principal component analysis to quantify technique in sports.
Federolf, P; Reid, R; Gilgien, M; Haugen, P; Smith, G
2014-06-01
Analyzing an athlete's "technique," sport scientists often focus on preselected variables that quantify important aspects of movement. In contrast, coaches and practitioners typically describe movements in terms of basic postures and movement components using subjective and qualitative features. A challenge for sport scientists is finding an appropriate quantitative methodology that incorporates the holistic perspective of human observers. Using alpine ski racing as an example, this study explores principal component analysis (PCA) as a mathematical method to decompose a complex movement pattern into its main movement components. Ski racing movements were recorded by determining the three-dimensional coordinates of 26 points on each skier, which were subsequently interpreted as a 78-dimensional posture vector at each time point. PCA was then used to determine the mean posture and principal movements (PMk) carried out by the athletes. The first four PMk contained 95.5 ± 0.5% of the variance in the posture vectors and quantified changes in body inclination, vertical or fore-aft movement of the trunk, and distance between skis. In summary, calculating PMk offered a data-driven, quantitative, and objective method of analyzing human movement that is similar to how human observers such as coaches or ski instructors would describe the movement. PMID:22436088
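The posture-vector construction is simple to state in code: stack the 26 marker positions into a 78-dimensional vector per frame, mean-center over time, and take the PCA eigenvectors as principal movements. Below is a toy sketch with simulated motion data; the two underlying movement cycles and the marker basis are invented stand-ins for real motion capture.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical motion capture: 26 markers x 3 coordinates = 78-dimensional
# posture vector, sampled at 200 time points over one turn cycle.
t = np.linspace(0.0, 2.0 * np.pi, 200)
pm1 = np.sin(t)                    # e.g. a body-inclination cycle
pm2 = 0.4 * np.sin(2.0 * t)       # e.g. a fore-aft trunk movement
basis = rng.normal(size=(2, 78))   # how each movement displaces the markers
postures = (np.outer(pm1, basis[0]) + np.outer(pm2, basis[1])
            + 0.01 * rng.normal(size=(200, 78)))

# PCA: mean posture plus principal movements (PMk).
mean_posture = postures.mean(axis=0)
u, s, vt = np.linalg.svd(postures - mean_posture, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # variance fraction per PMk

# With two underlying movements, the first two PMk capture nearly all
# postural variance, mirroring the 95.5% figure reported for four PMk.
pm_vectors = vt[:2]                   # (2, 78) principal movements
```

Animating `mean_posture + a * pm_vectors[k]` for varying amplitude `a` is how such components are visually interpreted as "inclination" or "fore-aft" movements.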