Sample records for principal component space

  1. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and to reproduce the colors of spectral image surfaces more faithfully. A new spectral-reflectance reconstruction algorithm based on iterative thresholding combined with a weighted principal component space is presented in this paper, in which the principal components weighted by visual features serve as the sparse basis. Different numbers of color cards are selected as training samples, a multispectral image is used as the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms reconstruction based on the traditional principal component space: the color difference obtained with the compressive-sensing algorithm using weighted principal component analysis is smaller than that obtained with traditional principal component analysis, and the reconstructed colors are more consistent with human vision.

  2. Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.

    PubMed

    Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko

    2017-12-01

    Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.

  3. Free energy landscape of a biomolecule in dihedral principal component space: sampling convergence and correspondence between structures and minima.

    PubMed

    Maisuradze, Gia G; Leitner, David M

    2007-05-15

    Dihedral principal component analysis (dPCA) has recently been developed and shown to reveal complex features of the free energy landscape of a biomolecule that may be absent from the landscape plotted in conventional principal component space, owing to the mixing of internal and overall rotational motion that can occur in Cartesian-coordinate principal component analysis (PCA) [Mu et al., Proteins: Struct Funct Bioinfo 2005;58:45-52]. Another difficulty in the implementation of PCA is sampling convergence, which we address here for both dPCA and PCA using a tetrapeptide as an example. We find that both methods reach sampling convergence over a similar time. Minima in the free energy landscape spanned by the two largest dihedral principal components often correspond to unique structures, although we also find some distinct minima that correspond to the same structure.

  4. Molecular dynamics in principal component space.

    PubMed

    Michielssens, Servaas; van Erp, Titus S; Kutzner, Carsten; Ceulemans, Arnout; de Groot, Bert L

    2012-07-26

    A molecular dynamics algorithm in principal component space is presented. It is demonstrated that sampling can be improved without changing the ensemble by assigning to each principal component a mass proportional to the inverse square root of its eigenvalue. The setup of the simulation requires no prior knowledge of the system; a short initial MD simulation to extract the eigenvectors and eigenvalues suffices. Independent measures indicated sampling 6-7 times faster than in a regular molecular dynamics simulation.
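    The mass-assignment rule described in this abstract can be sketched in a few lines (an illustrative NumPy example; the toy trajectory and all variable names are our own, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "trajectory": 500 frames of a 6-coordinate system with anisotropic fluctuations.
    frames = rng.normal(size=(500, 6)) * np.array([3.0, 2.0, 1.0, 0.5, 0.3, 0.1])

    # Principal components from the covariance of the fluctuations.
    cov = np.cov(frames - frames.mean(axis=0), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

    # Mass rule from the abstract: m_i proportional to 1/sqrt(lambda_i), so the
    # soft (large-eigenvalue) modes get small masses and are sampled faster.
    masses = 1.0 / np.sqrt(eigvals)

    print(np.all(np.diff(masses) > 0))  # softer modes are lighter
    ```

    A short equilibrium run would supply `cov` in practice; here random data stands in for it.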

  5. The Complexity of Human Walking: A Knee Osteoarthritis Study

    PubMed Central

    Kotti, Margarita; Duffell, Lynsey D.; Faisal, Aldo A.; McGregor, Alison H.

    2014-01-01

    This study proposes a framework for deconstructing complex walking patterns into a simple principal component space and then tests whether projection onto this space is suitable for identifying deviations from normality. We focus on knee osteoarthritis, the most common knee joint disease and the second leading cause of disability, affecting over 250 million people worldwide. The motivation for projecting highly dimensional movements onto a lower-dimensional, simpler space is our belief that motor behaviour can be understood by identifying simplicity via projection onto a low-dimensional principal component space, which may reflect the underlying mechanism. We recruited 180 subjects, 47 of whom reported knee osteoarthritis. They were asked to walk several times along a walkway equipped with two force plates that capture ground reaction forces along three axes, namely vertical, anterior-posterior, and medio-lateral, at 1000 Hz. Trials in which the subject did not clearly strike a force plate were excluded, leaving 1–3 gait cycles per subject. To examine the complexity of human walking, we applied dimensionality reduction via Probabilistic Principal Component Analysis. The first principal component explains 34% of the variance in the data, and 8 or more principal components are needed to explain over 80% of the variance, demonstrating the complexity of the underlying structure of the ground reaction forces. To examine whether our musculoskeletal system generates movements that are distinguishable between normal and pathological subjects in a low-dimensional principal component space, we applied a Bayes classifier. For the cross-validated, subject-independent experimental protocol tested, the classification accuracy is 82.62%. In addition, a novel complexity measure is proposed that can be used as an objective index to facilitate clinical decision making; it shows that knee osteoarthritis subjects exhibit greater variability in the two-dimensional principal component space. PMID:25232949
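    The pipeline the abstract describes — dimensionality reduction followed by a Bayes classifier in the low-dimensional space — can be sketched as follows. Note this substitutes plain PCA for the Probabilistic PCA used in the study, and the synthetic "gait" data and all numbers are purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for ground-reaction-force features: 100 "normal" and 40
    # "pathological" gait cycles, 30 samples each (shapes and shifts are invented).
    normal = rng.normal(0.0, 1.0, size=(100, 30))
    pathological = rng.normal(0.8, 1.4, size=(40, 30))  # shifted mean, more variability
    X = np.vstack([normal, pathological])
    y = np.array([0] * 100 + [1] * 40)

    # Plain PCA via SVD (the paper uses Probabilistic PCA; this is a simpler stand-in).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T  # project onto the first two principal components

    # Naive Gaussian Bayes classifier in the 2-D principal component space.
    def predict(z):
        logps = []
        for c in (0, 1):
            mu = scores[y == c].mean(axis=0)
            var = scores[y == c].var(axis=0) + 1e-9
            logp = -0.5 * np.sum((z - mu) ** 2 / var + np.log(var))
            logps.append(logp + np.log(np.mean(y == c)))
        return int(np.argmax(logps))

    acc = np.mean([predict(z) == t for z, t in zip(scores, y)])
    print(acc > 0.5)  # better than chance on this separable toy data
    ```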

  6. Spectral decomposition of asteroid Itokawa based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Koga, Sumire C.; Sugita, Seiji; Kamata, Shunichi; Ishiguro, Masateru; Hiroi, Takahiro; Tatsumi, Eri; Sasaki, Sho

    2018-01-01

    The heliocentric stratification of asteroid spectral types may hold important information on the early evolution of the Solar System. Asteroid spectral taxonomy is based largely on principal component analysis. However, how the surface properties of asteroids, such as composition and age, are projected into the principal-component (PC) space is not well understood. We decompose multi-band disk-resolved visible spectra of the Itokawa surface with principal component analysis (PCA) and compare them with main-belt asteroids. The distribution of Itokawa spectra projected into the PC space of main-belt asteroids follows a linear trend linking the Q-type and S-type regions and is consistent with the results of space-weathering experiments on ordinary chondrites and olivine, suggesting that this trend may be a space-weathering-induced spectral evolution track for S-type asteroids. Comparison with space-weathering experiments also yields a short average surface age (< a few million years) for Itokawa, consistent with the cosmic-ray-exposure time of samples returned from Itokawa. The Itokawa PC score distribution exhibits asymmetry along the evolution track, strongly suggesting that space weathering has begun to saturate on this young asteroid. The freshest spectrum found on Itokawa exhibits a clear sign of space weathering, again indicating that space weathering occurs very rapidly on this body. We also conducted PCA on Itokawa spectra alone and compared the results with space-weathering experiments. The results indicate that the first principal component of Itokawa surface spectra is consistent with spectral change due to space weathering and that the spatial variation in the degree of space weathering is very large (a factor of three in surface age), strongly suggesting the presence of regional/local resurfacing process(es) on this small asteroid.

  7. Principal Cluster Axes: A Projection Pursuit Index for the Preservation of Cluster Structures in the Presence of Data Reduction

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.; Henson, Robert

    2012-01-01

    A measure of "clusterability" serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space.…

  8. Self-aggregation in scaled principal component space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Chris H.Q.; He, Xiaofeng; Zha, Hongyuan

    2001-10-05

    Automatic grouping of voluminous data into meaningful structures is a challenging task frequently encountered in broad areas of science, engineering and information processing. These data clustering tasks are frequently performed in Euclidean space or a subspace chosen from principal component analysis (PCA). Here we describe a space obtained by a nonlinear scaling of PCA in which data objects self-aggregate automatically into clusters. Projection into this space gives sharp distinctions among clusters. Gene expression profiles of cancer tissue subtypes, Web hyperlink structure and Internet newsgroups are analyzed to illustrate interesting properties of the space.
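    A minimal sketch of the idea, under our reading that the nonlinear scaling is the spectral-style normalization D^(-1/2) W D^(-1/2) of a similarity matrix W (the toy data and parameter choices below are ours, not the authors'):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Two well-separated groups of points standing in for "data objects".
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])

    # Gaussian-kernel similarity matrix.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2)

    # Nonlinearly scaled matrix D^(-1/2) W D^(-1/2); its top eigenvalue is exactly 1.
    dinv = 1.0 / np.sqrt(W.sum(axis=1))
    S = dinv[:, None] * W * dinv[None, :]
    eigvals, eigvecs = np.linalg.eigh(S)

    # Embed each object by its top-2 scaled components, normalized row-wise;
    # objects then self-aggregate: same-cluster rows align, cross-cluster rows don't.
    Y = eigvecs[:, -2:]
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    same = np.abs(Y[:20] @ Y[0])
    cross = np.abs(Y[20:] @ Y[0])
    print(np.isclose(eigvals[-1], 1.0), same.min() > 0.9, cross.max() < 0.1)
    ```

    The sharp within/between contrast in `same` and `cross` is the "sharp distinctions among clusters" the abstract refers to.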

  9. The dimensionality of stellar chemical space using spectra from the Apache Point Observatory Galactic Evolution Experiment

    NASA Astrophysics Data System (ADS)

    Price-Jones, Natalie; Bovy, Jo

    2018-03-01

    Chemical tagging of stars based on their similar compositions can offer new insights about the star formation and dynamical history of the Milky Way. We investigate the feasibility of identifying groups of stars in chemical space by forgoing the use of model-derived abundances in favour of direct analysis of spectra. This facilitates the propagation of measurement uncertainties and does not presuppose knowledge of which elements are important for distinguishing stars in chemical space. We use ~16 000 red giant and red clump H-band spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and perform polynomial fits to remove trends not due to abundance-ratio variations. Using expectation-maximized principal component analysis, we find principal components with high signal in the wavelength regions most important for distinguishing between stars. Different subsamples of red giant and red clump stars are all consistent with needing about 10 principal components to accurately model the spectra above the level of the measurement uncertainties. The dimensionality of stellar chemical space that can be investigated in the H band is therefore ≲10. For APOGEE observations with typical signal-to-noise ratios of 100, the number of chemical space cells within which stars cannot be distinguished is approximately 10^(10±2) × (5 ± 2)^(n−10), with n the number of principal components. This high dimensionality and the fine-grained sampling of chemical space are a promising first step towards chemical tagging based on spectra alone.
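    The detrend-then-decompose recipe can be imitated on synthetic data (plain SVD-based PCA stands in for the paper's expectation-maximized PCA; the spectra, array sizes, and the 99% variance threshold below are our assumptions, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "spectra": 200 stars x 100 pixels built from 10 latent abundance
    # directions plus a smooth (non-chemical) trend and small noise.
    n_stars, n_pix, n_latent = 200, 100, 10
    basis = rng.normal(size=(n_latent, n_pix))
    spectra = rng.normal(size=(n_stars, n_latent)) @ basis
    pix = np.linspace(-1, 1, n_pix)
    spectra += np.outer(rng.normal(size=n_stars), 0.5 * pix**2)  # smooth trend
    spectra += rng.normal(scale=0.01, size=spectra.shape)        # noise

    # Remove smooth trends with a low-order polynomial fit per star, as the
    # abstract does before measuring chemical-space dimensionality.
    detrended = np.array([s - np.polyval(np.polyfit(pix, s, 2), pix) for s in spectra])

    # PCA via SVD; count components needed to reach 99% of the variance.
    Xc = detrended - detrended.mean(axis=0)
    svals = np.linalg.svd(Xc, compute_uv=False)
    frac = np.cumsum(svals**2) / np.sum(svals**2)
    n_components = int(np.searchsorted(frac, 0.99) + 1)
    print(n_components)  # recovers roughly the 10 planted latent directions
    ```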

  10. EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES

    EPA Science Inventory

    An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...

  11. Matrix partitioning and EOF/principal component analysis of Antarctic Sea ice brightness temperatures

    NASA Technical Reports Server (NTRS)

    Murray, C. W., Jr.; Mueller, J. L.; Zwally, H. J.

    1984-01-01

    A field of measured anomalies of some physical variable, relative to their time averages, is partitioned in either the space domain or the time domain. Eigenvectors and corresponding principal components of the smaller-dimensioned covariance matrices associated with the partitioned data sets are calculated independently, then joined to approximate the eigenstructure of the larger covariance matrix associated with the unpartitioned data set. The accuracy of the approximation (fraction of the total variance in the field) and the magnitudes of the largest eigenvalues from the partitioned covariance matrices together determine the number of local EOFs and principal components to be joined at any particular level. The space-time distribution of Nimbus-5 ESMR sea ice measurements is analyzed.

  12. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial features, and it resembles factor analysis in some sense, i.e., in the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet-transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.

  13. Dynamic of consumer groups and response of commodity markets by principal component analysis

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo

    2017-09-01

    This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in the 6-month timeframe display similar values during 2010-2011 but decline following 2012. A sharp drop in an eigenvalue implies a significant change of the market state. Three commodity sectors, energy, metals and agriculture, are projected into a two-dimensional space consisting of the first two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled and scattered during severe crises, such as the European sovereign debt crisis. We observe notable changes in the positions of the groups in this two-dimensional space during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and a change of state in one group can serve as a warning for the other groups.

  14. A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique

    PubMed Central

    2015-01-01

    Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance that can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN, built by employing a Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic is used to find the optimal color component combination in terms of skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared to several existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, outperforming the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel-wise skin detection applications. PMID:26267377

  15. Discrimination of gender-, speed-, and shoe-dependent movement patterns in runners using full-body kinematics.

    PubMed

    Maurer, Christian; Federolf, Peter; von Tscharner, Vinzenz; Stirling, Lisa; Nigg, Benno M

    2012-05-01

    Changes in gait kinematics have often been analyzed using pattern recognition methods such as principal component analysis (PCA). Usually only the first few principal components are analyzed, because they describe the main variability within a dataset and thus represent the main movement patterns. However, while subtle changes in gait pattern (for instance, due to different footwear) may not change the main movement patterns, they may affect movements represented by higher principal components. This study was designed to test two hypotheses: (1) speed and gender differences can be observed in the first principal components, and (2) small interventions such as changing footwear change the gait characteristics of higher principal components. Kinematic changes due to different running conditions (speed: 3.1 m/s and 4.9 m/s; gender; footwear: control shoe and adidas MicroBounce shoe) were investigated by applying PCA and a support vector machine (SVM) to a full-body reflective marker setup. Differences in speed changed the basic movement pattern, as reflected by a change in the time-dependent coefficient derived from the first principal component. Gender was differentiated by the time-dependent coefficients derived from intermediate principal components, which are characterized by limb rotations of the thigh and shank. Different shoe conditions were identified in higher principal components. This study showed that different interventions can be analyzed using a full-body kinematic approach. Within the well-defined vector space spanned by the data of all subjects, higher principal components should also be considered, because these components show the differences that result from small interventions such as footwear changes.

  16. Complexity of free energy landscapes of peptides revealed by nonlinear principal component analysis.

    PubMed

    Nguyen, Phuong H

    2006-12-01

    Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with standard linear principal component analysis (PCA). In particular, we compared the two methods according to (1) their ability to reduce dimensionality and (2) the efficiency with which they represent peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA does. For example, to obtain a similar error in representing the original beta-hairpin data in a low-dimensional space, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as functions of the first two principal components obtained from PCA, we obtained relatively well-structured free energy landscapes. In contrast, the free energy landscapes from NLPCA are much more complicated, exhibiting many states that are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study showed that many states in the PCA maps mix several peptide conformations, while those of the NLPCA maps are purer. These findings suggest that NLPCA should be used to capture the essential features of such systems.

  17. Principal components analysis in clinical studies.

    PubMed

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent a set of potentially correlated variables with principal components (PCs) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are retained to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA results is highlighted.
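    Although the tutorial works in R, the same simulated-data experiment can be sketched in Python (the data-generating choices below, such as two latent factors driving eight predictors, are our own illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Simulate correlated predictors driven by two latent factors, mirroring the
    # tutorial's setup of a dataset dominated by two PCs.
    n = 500
    latent = rng.normal(size=(n, 2))
    loadings = rng.normal(size=(2, 8))
    X = latent @ loadings + rng.normal(scale=0.1, size=(n, 8))

    # PCA by eigendecomposition of the covariance matrix of the centered data.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]
    explained = eigvals / eigvals.sum()

    # The first two PCs should carry almost all the variance, since only two
    # latent factors (plus small noise) generated the eight correlated variables.
    print(explained[:2].sum() > 0.95)
    ```

    Regressing on the first two PC scores instead of the eight raw predictors then avoids the multicollinearity the abstract describes.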

  18. Principal Components Analysis of a JWST NIRSpec Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Rauscher, Bernard J.; Wen, Yiting

    2013-01-01

    We present principal component analysis (PCA) of a flight-representative James Webb Space Telescope Near-Infrared Spectrograph (NIRSpec) Detector Subsystem. Although our results are specific to NIRSpec and its T ≈ 40 K SIDECAR ASICs and 5 μm cutoff H2RG detector arrays, the underlying technical approach is more general. We describe how we measured the system's response to small environmental perturbations by modulating a set of bias voltages and the temperature. We used this information to compute the system's principal noise components. Together with information from the astronomical scene, we show how the zeroth principal component can be used to calibrate out the effects of small thermal and electrical instabilities to produce cosmetically cleaner images with significantly less correlated noise. Alternatively, if one were designing a new instrument, one could use a similar PCA approach to inform a set of environmental requirements (temperature stability, electrical stability, etc.) that enable the planned instrument to meet its performance requirements.

  19. Multivariate analysis of light scattering spectra of liquid dairy products

    NASA Astrophysics Data System (ADS)

    Khodasevich, M. A.

    2010-05-01

    Visible light scattering spectra from the surface layer of samples of commercial liquid dairy products are recorded with a colorimeter. The principal component method is used to analyze these spectra. Vectors representing the samples of dairy products in a multidimensional space of spectral counts are projected onto a three-dimensional subspace of principal components. The magnitudes of these projections are found to depend on the type of dairy product.

  20. Dihedral angle principal component analysis of molecular dynamics simulations.

    PubMed

    Altis, Alexandros; Nguyen, Phuong H; Hegger, Rainer; Stock, Gerhard

    2007-06-28

    It has recently been suggested by Mu et al. [Proteins 58, 45 (2005)] to use backbone dihedral angles instead of Cartesian coordinates in a principal component analysis of molecular dynamics simulations. Dihedral angles may be advantageous because internal coordinates naturally provide a correct separation of internal and overall motion, which was found to be essential for the construction and interpretation of the free energy landscape of a biomolecule undergoing large structural rearrangements. To account for the circular statistics of angular variables, a transformation from the space of dihedral angles {φ_n} to the metric coordinate space {x_n = cos φ_n, y_n = sin φ_n} was employed. To study the validity and the applicability of the approach, in this work the theoretical foundations underlying the dihedral angle principal component analysis (dPCA) are discussed. It is shown that the dPCA amounts to a one-to-one representation of the original angle distribution and that its principal components can readily be characterized by the corresponding conformational changes of the peptide. Furthermore, a complex version of the dPCA is introduced, in which N angular variables naturally lead to N eigenvalues and eigenvectors. Applying the methodology to the construction of the free energy landscape of decaalanine from a 300 ns molecular dynamics simulation, a critical comparison of the various methods is given.
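    The effect of the (cos, sin) transformation is easy to demonstrate numerically (the angle values and toy trajectory below are our own illustration):

    ```python
    import numpy as np

    # Two dihedral angles (radians) that are nearly identical on the circle but
    # sit on opposite sides of the ±pi boundary.
    phi_a, phi_b = -2.8, 2.9

    # Distance in raw angle space is inflated by the periodicity...
    d_raw = abs(phi_b - phi_a)

    # ...while the dPCA map phi -> (cos phi, sin phi) gives the correct small
    # separation in the metric coordinate space.
    d_metric = np.hypot(np.cos(phi_b) - np.cos(phi_a), np.sin(phi_b) - np.sin(phi_a))

    # PCA is then run on the 2N-dimensional (cos, sin) representation, e.g.:
    rng = np.random.default_rng(5)
    phi = rng.normal(loc=phi_a, scale=0.2, size=(1000, 4))  # toy trajectory, N = 4
    Z = np.concatenate([np.cos(phi), np.sin(phi)], axis=1)   # N angles -> 2N coords

    print(round(d_raw, 2), round(d_metric, 2), Z.shape)
    ```

    The complex dPCA variant mentioned in the abstract instead treats each angle as exp(iφ_n), recovering N eigenvalues from the N angles.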

  2. Performance evaluation of PCA-based spike sorting algorithms.

    PubMed

    Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George

    2008-09-01

    Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is evaluated here for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two, or most commonly three, principal components is sufficient. We estimate the optimal PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric for clustering error that considers over-clustering more favorable than under-clustering, as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that employing more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts.
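    The PCA feature-extraction stage can be sketched on simulated action potentials (the templates, sizes, and noise level below are our own toy choices, not those of the evaluated algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Simulated action potentials from two units: distinct waveform templates
    # (shapes are invented for illustration) plus additive white noise.
    t = np.linspace(0, 1, 48)
    templates = np.array([
        np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2),
        0.7 * np.exp(-((t - 0.4) / 0.08) ** 2) - 0.6 * np.exp(-((t - 0.6) / 0.07) ** 2),
    ])
    labels = rng.integers(0, 2, size=300)
    spikes = templates[labels] + rng.normal(scale=0.05, size=(300, 48))

    # PCA feature extraction: project each spike onto the leading components
    # (three here, the count the literature commonly assumes to be sufficient).
    Xc = spikes - spikes.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    features = Xc @ Vt[:3].T

    # With two clearly distinct units, the class means sit far apart in PC
    # space relative to the within-class spread, so clustering is easy.
    mu0, mu1 = features[labels == 0].mean(0), features[labels == 1].mean(0)
    spread = features[labels == 0].std() + features[labels == 1].std()
    print(np.linalg.norm(mu0 - mu1) > spread)
    ```

    A clustering step (e.g. k-means on `features`) would complete the sorter; the paper's point is that the number of retained components should be validated, not assumed.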

  3. Real time gamma-ray signature identifier

    DOEpatents

    Rowland, Mark [Alamo, CA; Gosnell, Tom B [Moraga, CA; Ham, Cheryl [Livermore, CA; Perkins, Dwight [Livermore, CA; Wong, James [Dublin, CA

    2012-05-15

    A real time gamma-ray signature/source identification method and system using principal components analysis (PCA) for transforming and substantially reducing one or more comprehensive spectral libraries of nuclear materials types and configurations into a corresponding concise representation/signature(s) representing and indexing each individual predetermined spectrum in principal component (PC) space, wherein an unknown gamma-ray signature may be compared against the representative signature to find a match or at least characterize the unknown signature from among all the entries in the library with a single regression or simple projection into the PC space, so as to substantially reduce processing time and computing resources and enable real-time characterization and/or identification.

  4. State-Space Estimation of Soil Organic Carbon Stock

    NASA Astrophysics Data System (ADS)

    Ogunwole, Joshua O.; Timm, Luis C.; Obidike-Ugwu, Evelyn O.; Gabriels, Donald M.

    2014-04-01

    Understanding soil spatial variability and identifying the soil parameters most determinant to soil organic carbon stock is pivotal to precision in ecological modelling, prediction, estimation, and management of soil within a landscape. This study investigates and describes field soil variability and its structural pattern for agricultural management decisions. The main aim was to relate variation in soil organic carbon stock to soil properties and to estimate soil organic carbon stock from those properties. A transect sampling of 100 points at 3 m intervals was carried out. Soils were sampled and analyzed for soil organic carbon and other selected soil properties, along with determination of dry aggregate and water-stable aggregate fractions. Principal component analysis, geostatistics, and state-space analysis were conducted on the analyzed soil properties. The first three principal components explained 53.2% of the total variation; Principal Component 1 was dominated by the soil exchange complex and dry-sieved macroaggregate clusters. An exponential semivariogram model described the structure of soil organic carbon stock with a strong spatial dependence, indicating that soil organic carbon values were correlated up to 10.8 m. Neighbouring values of soil organic carbon stock, all water-stable aggregate fractions, and dithionite and pyrophosphate iron gave reliable state-space estimates of soil organic carbon stock.

  5. A Graphical Approach to the Standard Principal-Agent Model.

    ERIC Educational Resources Information Center

    Zhou, Xianming

    2002-01-01

    States that the principal-agent theory is difficult to teach because of its technical complexity and intractability. Indicates that the equilibrium in the contract space is defined by the incentive parameter and insurance component of pay under a linear contract. Describes a graphical approach for students with basic knowledge of algebra and…

  6. Psychometric characteristics of the Mobility Inventory in a longitudinal study of anxiety disorders: Replicating and exploring a three component solution

    PubMed Central

    Rodriguez, Benjamin F.; Pagano, Maria E.; Keller, Martin B.

    2008-01-01

    Psychometric characteristics of the Mobility Inventory (MI) were examined in 216 outpatients diagnosed with panic disorder with agoraphobia participating in a longitudinal study of anxiety disorders. An exploratory principal components analysis replicated a three-component solution for the MI reported in prior studies, with components corresponding to avoidance of public spaces, avoidance of enclosed spaces, and avoidance of open spaces. Correlational analyses suggested that the components tap unique but related areas of avoidance that were remarkably stable across periods of 1, 3, and 5 years between administrations. Implications of these results for future studies of agoraphobia are discussed. PMID:17079112

  7. NASA Facts. An Educational Publication of the National Aeronautics and Space Administration: Space Shuttle

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The versatility of the space shuttle, its heat shielding, principal components, and facilities for various operations are described, as well as the accommodations for the crew and experiments. The capabilities of an improved space suit and a personal rescue enclosure containing life support and communication systems are highlighted. A typical mission is described.

  8. Steerable Principal Components for Space-Frequency Localized Images

    PubMed Central

    Landa, Boris; Shkolnisky, Yoel

    2017-01-01

    As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed "steerable PCA" (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWF expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods and, more importantly, provides rigorous error bounds on the entire procedure. PMID:29081879

  9. Conformational states and folding pathways of peptides revealed by principal-independent component analyses.

    PubMed

    Nguyen, Phuong H

    2007-05-15

    Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower-dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from, e.g., molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly because the principal components are not independent of each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. 2007 Wiley-Liss, Inc.
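
    The PCA-then-ICA transformation can be sketched with a toy fixed-point ICA in the style of FastICA. This is a minimal illustration on invented data, not the analysis code used in the paper:

```python
import numpy as np

def whiten(X):
    """PCA whitening: centre, rotate onto the principal axes,
    and scale every direction to unit variance."""
    Xc = X - X.mean(axis=0)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U * np.sqrt(len(X))

def fastica(Xw, n_iter=200, seed=0):
    """Symmetric fixed-point ICA with a tanh nonlinearity on whitened data."""
    rng = np.random.default_rng(seed)
    n = Xw.shape[1]
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        g = np.tanh(Xw @ W.T)
        # fixed-point update: E{x g(w.x)} - E{g'(w.x)} w
        W = (g.T @ Xw) / len(Xw) - (1 - g ** 2).mean(axis=0)[:, None] * W
        U, _, Vt = np.linalg.svd(W)  # symmetric decorrelation
        W = U @ Vt
    return Xw @ W.T  # estimated independent components

# Two independent toy "motions", linearly mixed like correlated PCs
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)
s2 = np.sign(np.sin(2 * np.pi * 3 * t))
X = np.c_[s1, s2] @ np.array([[1.0, 0.4], [0.6, 1.0]])
comps = fastica(whiten(X))
print(comps.shape)  # (2000, 2)
```

    The recovered columns match the original sources up to sign and permutation, which is all the state-identification step needs.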

  10. Determination of the rotational diffusion tensor of macromolecules in solution from nmr relaxation data with a combination of exact and approximate methods--application to the determination of interdomain orientation in multidomain proteins.

    PubMed

    Ghose, R; Fushman, D; Cowburn, D

    2001-04-01

    In this paper we present a method for determining the rotational diffusion tensor from NMR relaxation data using a combination of approximate and exact methods. The approximate method, which is computationally less intensive, computes values of the principal components of the diffusion tensor and estimates the Euler angles, which relate the principal axis frame of the diffusion tensor to the molecular frame. The approximate values of the principal components are then used as starting points for an exact calculation by a downhill simplex search for the principal components of the tensor over a grid of the space of Euler angles relating the diffusion tensor frame to the molecular frame. The search space of Euler angles is restricted using the tensor orientations calculated using the approximate method. The utility of this approach is demonstrated using both simulated and experimental relaxation data. A quality factor that determines the extent of the agreement between the measured and predicted relaxation data is provided. This approach is then used to estimate the relative orientation of SH3 and SH2 domains in the SH(32) dual-domain construct of Abelson kinase complexed with a consolidated ligand. Copyright 2001 Academic Press.

  12. From measurements to metrics: PCA-based indicators of cyber anomaly

    NASA Astrophysics Data System (ADS)

    Ahmed, Farid; Johnson, Tommy; Tsui, Sonia

    2012-06-01

    We present a framework of the application of Principal Component Analysis (PCA) to automatically obtain meaningful metrics from intrusion detection measurements. In particular, we report the progress made in applying PCA to analyze the behavioral measurements of malware and provide some preliminary results in selecting dominant attributes from an arbitrary number of malware attributes. The results will be useful in formulating an optimal detection threshold in the principal component space, which can both validate and augment existing malware classifiers.

  13. Polyhedral gamut representation of natural objects based on spectral reflectance database and its application

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Sakuda, Yasunori; Honda, Toshio

    2002-06-01

    The spectral reflectance of most reflective objects, such as natural objects and color hardcopy, is relatively smooth and can be approximated with high accuracy by a small number of principal components. Although the subspace spanned by those principal components represents a space in which reflective objects can exist, it does not provide the bounds within which the samples are distributed. In this paper we propose to represent the gamut of reflective objects in a more explicit form, i.e., as a polyhedron in the subspace spanned by several principal components. The concept of the polyhedral gamut representation and its application to the calculation of a metamer ensemble are described. The color-mismatch volume caused by a different illuminant and/or observer for a metamer ensemble is also calculated and compared with the theoretical one.

  14. Online signature recognition using principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan

    2016-12-01

    In this paper, we propose an algorithm for on-line signature recognition that uses the fingertip position in the air, extracted from the depth image acquired by Kinect. We extract 10 statistical features from each of the X, Y, and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, preserving 99.02% of the total variance. We implement the proposed algorithm and test it on actual on-line signatures. In the experiments, we verify that the proposed method successfully classifies 15 different on-line signatures, with a recognition rate of 98.47% when using only the 10 principal-component feature vectors.
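
    Choosing the number of principal components by a cumulative explained-variance threshold (99% here, matching the 99.02% reported above) can be sketched as follows; the feature matrix is synthetic:

```python
import numpy as np

def reduce_by_variance(features, target=0.99):
    """Project onto the smallest number of principal components whose
    cumulative explained variance reaches `target` (e.g. 99%)."""
    centered = features - features.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(explained, target)) + 1
    return centered @ vt[:k].T, k

# Toy feature matrix: 30 measured features driven by 10 latent factors
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 10))
features = latent @ rng.standard_normal((10, 30)) + 0.01 * rng.standard_normal((200, 30))
reduced, k = reduce_by_variance(features, target=0.99)
print(k, reduced.shape)  # at most 10 components are needed here
```

    The reduced scores would then feed the neural-network classifier.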

  15. A Model for Undergraduate and High School Student Research in Earth and Space Sciences: The New York City Research Initiative

    NASA Astrophysics Data System (ADS)

    Scalzo, F.; Johnson, L.; Marchese, P.

    2006-05-01

    The New York City Research Initiative (NYCRI) is a research and academic program that involves high school students, undergraduate and graduate students, and high school teachers in research teams led by college/university principal investigators of NASA-funded projects and/or NASA scientists. The principal investigators are at 12 colleges/universities within a 50-mile radius of New York City (NYC and surrounding counties, southern Connecticut, and northern New Jersey), as well as the NASA Goddard Institute for Space Studies (GISS). The program has a summer research institute component in Earth science and space science, and an academic-year component that includes the formulation and implementation of NASA research-based learning units in existing STEM courses by high school and college faculty. NYCRI is a revision and expansion of the Institute on Climate and Planets at GISS and is funded by NASA MURED and the Goddard Space Flight Center's Education Office.

  16. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in an image, giving a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed for use in regions that conventional table-top platforms would find difficult to access. A fiber bundle, made up of specially arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube from the information in each scan. Compared to configurations that require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications, where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and thus serve as a classification criterion. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. A confidence ellipse was then fitted to the principal components of each sample and used as the classification criterion. The results show that the applied analysis can classify the spectral data acquired using the snapshot hyperspectral imaging probe.
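
    A minimal sketch of the confidence-ellipse criterion: a point is assigned to a class if its squared Mahalanobis distance from the class mean in PC space falls below a chi-square threshold. Data and names are illustrative:

```python
import numpy as np

CHI2_95_2DF = 5.991  # 95% quantile of the chi-square distribution, 2 d.o.f.

def inside_ellipse(point, scores):
    """True if `point` lies inside the 95% confidence ellipse fitted to a
    class's 2-D principal-component scores (squared Mahalanobis distance
    below the chi-square threshold)."""
    mean = scores.mean(axis=0)
    cov = np.cov(scores, rowvar=False)
    d = point - mean
    return float(d @ np.linalg.inv(cov) @ d) <= CHI2_95_2DF

# Synthetic PC scores for one colour class, elongated along PC1
rng = np.random.default_rng(0)
class_a = rng.standard_normal((200, 2)) * np.array([1.0, 0.3]) + np.array([5.0, 0.0])
print(inside_ellipse(np.array([5.1, 0.0]), class_a))  # True: near the class centre
print(inside_ellipse(np.array([0.0, 0.0]), class_a))  # False: far outside
```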

  17. Enzyme Amplified Detection of Microbial Cell Wall Components

    NASA Technical Reports Server (NTRS)

    Wainwright, Norman R.

    2004-01-01

    This proposal is MBL's portion of NASA's Johnson Space Center's Astrobiology Center led by Principal Investigator, Dr. David McKay, entitled: 'Institute for the Study of Biomarkers in Astromaterials.' Dr. Norman Wainwright is the principal investigator at MBL and is responsible for developing methods to detect trace quantities of microbial cell wall chemicals using the enzyme amplification system of Limulus polyphemus and other related methods.

  18. Standardized principal components for vegetation variability monitoring across space and time

    NASA Astrophysics Data System (ADS)

    Mathew, T. R.; Vohora, V. K.

    2016-08-01

    Vegetation at any given location changes through time and in space. Knowing how much it changes, and where and when, can help identify sources of ecosystem stress, which is very useful for understanding changes in biodiversity and their effect on climate change. Knowledge of such changes in a region is important for prioritizing management. The present study considers the dynamics of savanna vegetation in Kruger National Park (KNP) through the use of temporal satellite remote sensing images. Spatial variability of vegetation is a key characteristic of savanna landscapes, and its importance to biodiversity has been demonstrated by field-based studies. The data used for the study were sourced from the U.S. Agency for International Development, where AVHRR-derived Normalized Difference Vegetation Index (NDVI) images are available at a spatial resolution of 8 km and at dekadal scales. The study area was extracted from these images for the period 1984-2002. Maximum value composites were derived for individual months, resulting in a dataset of 216 NDVI images. Vegetation dynamics across spatio-temporal domains were analyzed using standardized principal components analysis (SPCA) on the NDVI time series, so that the variability of each individual image in the time series is considered. The outcome of this study demonstrated promising results: the variability of vegetation change in the area across space and time, with landscape changes captured on 6 individual principal components (PCs) that differ not only in magnitude but also in pattern across the selected eco-zones of a constantly changing and evolving ecosystem.
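
    Standardized PCA on an image time series can be sketched as z-scoring each date's image before the eigen-decomposition, so that every date contributes equally regardless of its absolute NDVI level. The stack below is synthetic, not KNP data:

```python
import numpy as np

def standardized_pca(stack):
    """Standardized PCA: z-score each time layer (zero mean, unit variance),
    then eigen-decompose the resulting correlation-like matrix across time.
    stack: (n_dates, n_pixels) array of images flattened per date."""
    z = (stack - stack.mean(axis=1, keepdims=True)) / stack.std(axis=1, keepdims=True)
    cov = z @ z.T / z.shape[1]        # dates x dates correlation matrix
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]    # descending eigenvalue order
    pcs = vecs[:, order].T @ z        # component "images" (n_dates, n_pixels)
    return vals[order], pcs

rng = np.random.default_rng(0)
seasonal = np.sin(np.linspace(0, 4 * np.pi, 24))[:, None]  # 24 "months"
pattern = rng.standard_normal((1, 500))                    # one spatial pattern
stack = seasonal * pattern + 0.1 * rng.standard_normal((24, 500))
vals, pcs = standardized_pca(stack)
print(vals[0] / vals.sum())  # fraction of variance on the first (seasonal) PC
```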

  19. How Adequate are One- and Two-Dimensional Free Energy Landscapes for Protein Folding Dynamics?

    NASA Astrophysics Data System (ADS)

    Maisuradze, Gia G.; Liwo, Adam; Scheraga, Harold A.

    2009-06-01

    The molecular dynamics trajectories of protein folding or unfolding, generated with the coarse-grained united-residue force field for the B domain of staphylococcal protein A, were analyzed by principal component analysis (PCA). The folding or unfolding process was examined by using free-energy landscapes (FELs) in PC space. By introducing a novel multidimensional FEL, it was shown that the low-dimensional FELs are not always sufficient for the description of folding or unfolding processes. Similarities between the topographies of FELs along low- and high-indexed principal components were observed.
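
    A free-energy landscape over two principal components is typically estimated from the sampled density via F = -kT ln P. A minimal sketch with a synthetic two-basin trajectory (not the UNRES protein A data):

```python
import numpy as np

def free_energy_landscape(pc1, pc2, bins=30, kT=1.0):
    """2-D free-energy landscape over two PC coordinates:
    F = -kT ln P, with P estimated from a 2-D histogram."""
    H, xe, ye = np.histogram2d(pc1, pc2, bins=bins, density=True)
    with np.errstate(divide="ignore"):
        F = -kT * np.log(H)   # empty bins become +inf (unsampled regions)
    F -= F.min()              # set the global minimum to zero
    return F, xe, ye

rng = np.random.default_rng(0)
# Toy trajectory hopping between two basins in PC space
basin = rng.random(5000) < 0.5
pc1 = np.where(basin, -2.0, 2.0) + 0.4 * rng.standard_normal(5000)
pc2 = 0.4 * rng.standard_normal(5000)
F, xe, ye = free_energy_landscape(pc1, pc2)
print(F.shape)  # (30, 30)
```

    Higher-dimensional FELs of the kind the paper argues for follow the same recipe with an n-dimensional histogram.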

  20. InterFace: A software package for face image warping, averaging, and principal components analysis.

    PubMed

    Kramer, Robin S S; Jenkins, Rob; Burton, A Mike

    2017-12-01

    We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.

  1. Spectroscopic and Chemometric Analysis of Binary and Ternary Edible Oil Mixtures: Qualitative and Quantitative Study.

    PubMed

    Jović, Ozren; Smolić, Tomislav; Primožič, Ines; Hrenar, Tomica

    2016-04-19

    The aim of this study was to investigate the feasibility of FTIR-ATR spectroscopy coupled with the multivariate numerical methodology for qualitative and quantitative analysis of binary and ternary edible oil mixtures. Four pure oils (extra virgin olive oil, high oleic sunflower oil, rapeseed oil, and sunflower oil), as well as their 54 binary and 108 ternary mixtures, were analyzed using FTIR-ATR spectroscopy in combination with principal component and discriminant analysis, partial least-squares, and principal component regression. It was found that the composition of all 166 samples can be excellently represented using only the first three principal components describing 98.29% of total variance in the selected spectral range (3035-2989, 1170-1140, 1120-1100, 1093-1047, and 930-890 cm(-1)). Factor scores in 3D space spanned by these three principal components form a tetrahedral-like arrangement: pure oils being at the vertices, binary mixtures at the edges, and ternary mixtures on the faces of a tetrahedron. To confirm the validity of results, we applied several cross-validation methods. Quantitative analysis was performed by minimization of root-mean-square error of cross-validation values regarding the spectral range, derivative order, and choice of method (partial least-squares or principal component regression), which resulted in excellent predictions for test sets (R(2) > 0.99 in all cases). Additionally, experimentally more demanding gas chromatography analysis of fatty acid content was carried out for all specimens, confirming the results obtained by FTIR-ATR coupled with principal component analysis. However, FTIR-ATR provided a considerably better model for prediction of mixture composition than gas chromatography, especially for high oleic sunflower oil.
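
    Principal component regression, one of the two calibration methods used above, can be sketched as regressing the response on the leading PC scores. The "spectra" here are synthetic mixtures, not the FTIR-ATR data:

```python
import numpy as np

def pcr_fit(X, y, k=3):
    """Principal component regression: regress y on the first k PC scores."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    axes = vt[:k]
    scores = (X - mean) @ axes.T
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(y)), scores], y, rcond=None)
    return mean, axes, coef

def pcr_predict(X, mean, axes, coef):
    scores = (X - mean) @ axes.T
    return np.c_[np.ones(len(X)), scores] @ coef

# Toy "spectra": mixtures of three reference bands; y = fraction of band 0
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
bands = np.vstack([np.exp(-((x - c) ** 2) / 0.02) for c in (0.2, 0.5, 0.8)])
fracs = rng.dirichlet(np.ones(3), size=60)          # mixture compositions
X = fracs @ bands + 0.001 * rng.standard_normal((60, 200))
mean, axes, coef = pcr_fit(X, fracs[:, 0], k=3)
pred = pcr_predict(X, mean, axes, coef)
print(np.corrcoef(pred, fracs[:, 0])[0, 1])  # close to 1
```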

  2. Space weathering trends on carbonaceous asteroids: A possible explanation for Bennu's blue slope?

    NASA Astrophysics Data System (ADS)

    Lantz, C.; Binzel, R. P.; DeMeo, F. E.

    2018-03-01

    We compare primitive near-Earth asteroid spectral properties to the irradiated carbonaceous chondrite samples of Lantz et al. (2017) in order to assess how space weathering processes might influence taxonomic classification. Using the same eigenvectors from the asteroid taxonomy of DeMeo et al. (2009), we calculate the principal components for fresh and irradiated meteorites and find that a change in spectral slope (blueing or reddening) causes a corresponding shift in the first two principal components along the same line that the C- and X-complexes track. Using a sample of B-, C-, X-, and D-type NEOs with visible and near-infrared spectral data, we further investigated the correlation between principal components and the spectral curvature for the primitive asteroids. We find that space weathering affects not just slope and albedo but also spectral curvature. We show how, through space weathering, surfaces having an original "C-type" reflectance can thus turn into a redder P-type or a bluer B-type, and that space weathering can also decrease (and disguise) the D-type population. Finally, we take a look at the case of the OSIRIS-REx target (101955) Bennu and propose an explanation for the blue and possibly red spectra previously observed at different locations on its surface: parts of Bennu's surface could have become blue due to space weathering, while fresher areas are redder. No clear prediction can be made for the Hayabusa2 target (162173) Ryugu.

  3. Source apportionment of soil heavy metals using robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR) receptor model.

    PubMed

    Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun

    2018-06-01

    Traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are usually susceptible to outliers, which may be widely present in regional geochemical datasets. Furthermore, such models are built on variable space rather than geographical space and thus cannot effectively capture the local spatial characteristics of each source's contributions. To overcome these limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. The new method was then applied to the source apportionment of soil metal elements in a region of Wuhan City, China, as a case study. Evaluations revealed that (i) the RAPCS-RGWR model performed better than the APCS-MLR model in identifying the major sources of soil metal elements, and (ii) source contributions estimated by the RAPCS-RGWR model were closer to the true soil metal concentrations than those estimated by the APCS-MLR model. The proposed RAPCS-RGWR model is thus a more effective source apportionment method than the non-robust, global APCS-MLR model for dealing with regional geochemical datasets. Copyright © 2018 Elsevier B.V. All rights reserved.
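
    The classical APCS-MLR baseline the authors build on can be sketched as: PCA on standardized concentrations, absolute scores obtained via an artificial zero-concentration sample, then multiple linear regression of each element on those scores. This is a hedged illustration of the traditional recipe on synthetic data, not the robust RAPCS-RGWR model itself:

```python
import numpy as np

def apcs_mlr(C, k=2):
    """APCS-MLR sketch for a concentration matrix C (n_samples x n_elements):
    PCA on standardized data, absolute principal component scores via an
    artificial zero sample, then least-squares regression of C on the scores."""
    mu, sd = C.mean(axis=0), C.std(axis=0)
    Z = (C - mu) / sd
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ vt[:k].T
    z0 = (0 - mu) / sd                # artificial zero-concentration sample
    apcs = scores - z0 @ vt[:k].T     # absolute principal component scores
    design = np.c_[np.ones(len(C)), apcs]
    coefs, *_ = np.linalg.lstsq(design, C, rcond=None)
    return apcs, coefs                # contributions: design @ coefs

# Two synthetic sources with fixed element profiles, plus small noise
rng = np.random.default_rng(0)
src1 = rng.gamma(2.0, 1.0, size=(100, 1)) * np.array([1.0, 0.8, 0.1])
src2 = rng.gamma(2.0, 1.0, size=(100, 1)) * np.array([0.1, 0.3, 1.0])
C = src1 + src2 + 0.01 * rng.standard_normal((100, 3))
apcs, coefs = apcs_mlr(C, k=2)
recon = np.c_[np.ones(len(C)), apcs] @ coefs
print(np.abs(recon - C).max())  # small reconstruction residual
```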

  4. Multivariate approach to quantitative analysis of Aphis gossypii Glover (Hemiptera: Aphididae) and their natural enemy populations at different cotton spacings.

    PubMed

    Malaquias, José B; Ramalho, Francisco S; Dos S Dias, Carlos T; Brugger, Bruno P; S Lira, Aline Cristina; Wilcken, Carlos F; Pachú, Jéssica K S; Zanuncio, José C

    2017-02-09

    The relationship between pests and natural enemies on cotton at different spacings has not yet been documented using multivariate analysis. Multivariate approaches make it possible to optimize strategies for controlling Aphis gossypii at different crop spacings, through better use of aphid sampling strategies as well as the conservation and release of its natural enemies. The aims of the study were (i) to characterize the temporal abundance data of aphids and their natural enemies using principal components, (ii) to analyze the degree of correlation between the insects and between groups of variables (pests and natural enemies), (iii) to identify the main natural enemies responsible for regulating A. gossypii populations, and (iv) to investigate the similarities in arthropod occurrence patterns at different cotton crop spacings over two seasons. High correlations between the occurrence of Scymnus rubicundus and aphids are shown through principal component analysis and through the important role the species plays in the canonical correlation analysis. The clustering of apterous aphid presence matches the pattern verified for Chrysoperla externa at the three different spacings between rows. Our results indicate that S. rubicundus is the main candidate for regulating the aphid populations at all spacings studied.

  7. Evidence of tampering in watermark identification

    NASA Astrophysics Data System (ADS)

    McLauchlan, Lifford; Mehrübeoglu, Mehrübe

    2009-08-01

    In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Higher-order statistics based on the principal components and the eigenvalues are then determined for different sets of images. Feature sets are analyzed in m-dimensional space for different types of attacks. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulations or modifications performed on the digital information can be identified using the described technique.
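
    The feature pipeline, DWT coefficients, then PCA, then higher-order statistics, can be sketched as below. The Haar transform, block size, and statistics chosen here are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np

def haar_ll(img):
    """LL subband of a one-level 2-D Haar DWT (local averages)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2
    return (a[0::2] + a[1::2]) / 2

def tamper_features(img, k=3, block=8):
    """Feature vector: leading PCA eigenvalues of DWT-coefficient blocks
    plus the per-component skewness of the PC scores."""
    ll = haar_ll(img)
    h, w = ll.shape
    blocks = np.array([ll[i:i + block, j:j + block].ravel()
                       for i in range(0, h - block + 1, block)
                       for j in range(0, w - block + 1, block)])
    centered = blocks - blocks.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:k].T
    skew = ((scores - scores.mean(0)) ** 3).mean(0) / scores.std(0) ** 3
    return np.concatenate([s[:k] ** 2 / len(blocks), skew])

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)).cumsum(axis=1)  # smooth-ish test image
tampered = img.copy()
tampered[20:30, 20:30] = 0                          # crude local tampering
f0, f1 = tamper_features(img), tamper_features(tampered)
print(np.abs(f0 - f1))  # feature shift caused by the manipulation
```

    A classifier in this m-dimensional feature space would then separate intact from tampered copies.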

  8. Determination of the chemical parameters and manufacturer of divins from their broadband transmission spectra

    NASA Astrophysics Data System (ADS)

    Khodasevich, M. A.; Sinitsyn, G. V.; Skorbanova, E. A.; Rogovaya, M. V.; Kambur, E. I.; Aseev, V. A.

    2016-06-01

    Analysis of multiparametric data on the transmission spectra of 24 divins (Moldovan cognacs) in the 190-2600 nm range allows identification of outliers and their removal from the sample under study before further analysis. Principal component analysis and a classification tree with a single-rank predictor constructed in the 2D space of principal components allow classification of divin manufacturers. It is shown that the accuracy of the syringaldehyde, ethyl acetate, vanillin, and gallic acid concentrations in divins calculated by regression on latent structures (PLS) depends on the sample volume and is 3, 6, 16, and 20%, respectively, which is acceptable for the application.

  9. KSC-2011-7879

    NASA Image and Video Library

    2011-11-22

    CAPE CANAVERAL, Fla. – NASA’s Kennedy Space Center in Florida is host to a Mars Science Laboratory (MSL) science briefing as part of preflight activities for the MSL mission. From left, NASA Public Affairs Officer Guy Webster moderates the conference featuring Michael Meyer, lead scientist for NASA Mars Exploration Program; John Grotzinger, project scientist for Mars Science Laboratory California Institute of Technology, Pasadena, Calif.; Michael Malin, principal investigator for the Mast Camera and Mars Descent Imager investigations on Curiosity, Malin Space Science Systems; Roger Wiens, principal investigator for Chemistry and Camera investigation on Curiosity, Los Alamos National Laboratory; David Blake, NASA principal investigator for Chemistry and Mineralogy investigation on Curiosity, NASA Ames Research Center; and Paul Mahaffy, NASA principal investigator for Sample Analysis at Mars investigation on Curiosity, NASA Goddard Space Flight Center. MSL’s components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station in Florida. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Kim Shiflett

  10. KSC-2011-7878

    NASA Image and Video Library

    2011-11-22

    CAPE CANAVERAL, Fla. – NASA’s Kennedy Space Center in Florida is host to a Mars Science Laboratory (MSL) science briefing as part of preflight activities for the MSL mission. From left, NASA Public Affairs Officer Guy Webster moderates the conference featuring Michael Meyer, lead scientist for NASA Mars Exploration Program; John Grotzinger, project scientist for Mars Science Laboratory California Institute of Technology, Pasadena, Calif.; Michael Malin, principal investigator for the Mast Camera and Mars Descent Imager investigations on Curiosity, Malin Space Science Systems; Roger Wiens, principal investigator for Chemistry and Camera investigation on Curiosity, Los Alamos National Laboratory; David Blake, NASA principal investigator for Chemistry and Mineralogy investigation on Curiosity, NASA Ames Research Center; and Paul Mahaffy, NASA principal investigator for Sample Analysis at Mars investigation on Curiosity, NASA Goddard Space Flight Center. MSL’s components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station in Florida. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Kim Shiflett

  11. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run.

    PubMed

    Armeanu, Daniel; Andrei, Jean Vasile; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets.

  12. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run

    PubMed Central

    Armeanu, Daniel; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets. PMID:28742100

  13. Face-space architectures: evidence for the use of independent color-based features.

    PubMed

    Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2013-07-01

    The concept of psychological face space lies at the core of many theories of face recognition and representation. To date, much of the understanding of face space has been based on principal component analysis (PCA); the structure of the psychological space is thought to reflect some important aspects of a physical face space characterized by PCA applications to face images. In the present experiments, we investigated alternative accounts of face space and found that independent component analysis provided the best fit to human judgments of face similarity and identification. Thus, our results challenge an influential approach to the study of human face space and provide evidence for the role of statistically independent features in face encoding. In addition, our findings support the use of color information in the representation of facial identity, and we thus argue for the inclusion of such information in theoretical and computational constructs of face space.

  14. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    DTIC Science & Technology

    2013-09-01

    Ground testing of prototype hardware and processing algorithms for a Wide Area Space Surveillance System (WASSS). Neil Goldstein, Rainer A... at Magdalena Ridge Observatory using the prototype Wide Area Space Surveillance System (WASSS) camera, which has a 4 x 60 field-of-view, < 0.05... objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and a Principal Component Analysis based image

  15. Developing closed life support systems for large space habitats

    NASA Technical Reports Server (NTRS)

    Phillips, J. M.; Harlan, A. D.; Krumhar, K. C.

    1978-01-01

    In anticipation of possible large-scale, long-duration space missions which may be conducted in the future, NASA has begun to investigate the research and technology development requirements for creating life support systems for large space habitats. An analysis suggests that regenerating food is feasible for missions exceeding four years in duration. Regeneration of food in space may be justified for missions of shorter duration when large crews must be supported at remote sites such as lunar bases and space manufacturing facilities. It is thought that biological components consisting principally of traditional crop and livestock species will prove to be the most acceptable means of closing the food cycle. A description is presented of the preliminary results of a study of potential biological components for large space habitats. Attention is given to controlled ecosystems, Russian life support system research, controlled-environment agriculture, and the social aspects of the life-support system.

  16. Decomposing the Apoptosis Pathway Into Biologically Interpretable Principal Components

    PubMed Central

    Wang, Min; Kornblau, Steven M; Coombes, Kevin R

    2018-01-01

    Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automates a graphical Bayesian method. Using simulations, we compared the methods. Our newly automated procedure is competitive with the best methods when considering both accuracy and speed and is the most accurate when the number of objects is small compared with the number of attributes. We applied the method to a proteomics data set from patients with acute myeloid leukemia. Proteins in the apoptosis pathway could be explained using 6 PCs. By clustering the proteins in PC space, we were able to replace the PCs by 6 “biological components,” 3 of which could be immediately interpreted from the current literature. We expect this approach combining PCA with clustering to be widely applicable. PMID:29881252
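    For readers unfamiliar with the first challenge, the broken-stick rule is one simple classical criterion for the number of significant PCs (PCDimension implements more sophisticated methods, including the automated Bayesian approach described above). A minimal numpy sketch on synthetic data with three planted components:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: 3 strong latent components plus noise, 50 objects x 20 attributes
latent = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 20)) * 3.0
X = latent + rng.normal(size=(50, 20))

Xc = X - X.mean(axis=0)
eigvals = np.linalg.svd(Xc, compute_uv=False)**2 / (len(X) - 1)
explained = eigvals / eigvals.sum()

# Broken-stick criterion: keep PC k while its share of variance exceeds the
# expected share of the k-th longest piece of a randomly broken unit stick
p = len(eigvals)
stick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p for k in range(1, p + 1)])
n_sig = int(np.argmax(explained < stick)) if np.any(explained < stick) else p
print("significant PCs:", n_sig)
```

    The broken-stick shares sum to one by construction, so the rule compares the observed variance profile against a null profile of the same total size.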

  17. Efficient principal component analysis for multivariate 3D voxel-based mapping of brain functional imaging data sets as applied to FDG-PET and normal aging.

    PubMed

    Zuendorf, Gerhard; Kerrouche, Nacer; Herholz, Karl; Baron, Jean-Claude

    2003-01-01

    Principal component analysis (PCA) is a well-known technique for reduction of dimensionality of functional imaging data. PCA can be looked at as the projection of the original images onto a new orthogonal coordinate system with lower dimensions. The new axes explain the variance in the images in decreasing order of importance, showing correlations between brain regions. We used an efficient, stable and analytical method to work out the PCA of Positron Emission Tomography (PET) images of 74 normal subjects using [(18)F]fluoro-2-deoxy-D-glucose (FDG) as a tracer. Principal components (PCs) and their relation to age effects were investigated. Correlations between the projections of the images on the new axes and the age of the subjects were carried out. The first two PCs could be identified as being the only PCs significantly correlated to age. The first principal component, which explained 10% of the data set variance, was reduced only in subjects of age 55 or older and was related to loss of signal in and adjacent to ventricles and basal cisterns, reflecting expected age-related brain atrophy with enlarging CSF spaces. The second principal component, which accounted for 8% of the total variance, had high loadings from prefrontal, posterior parietal and posterior cingulate cortices and showed the strongest correlation with age (r = -0.56), entirely consistent with previously documented age-related declines in brain glucose utilization. Thus, our method showed that the effect of aging on brain metabolism has at least two independent dimensions. This method should have widespread applications in multivariate analysis of brain functional images. Copyright 2002 Wiley-Liss, Inc.

  18. Scale for positive aspects of caregiving experience: development, reliability, and factor structure.

    PubMed

    Kate, N; Grover, S; Kulhara, P; Nehra, R

    2012-06-01

    OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.

  19. Total Electron Content forecast model over Australia

    NASA Astrophysics Data System (ADS)

    Bouya, Zahra; Terkildsen, Michael; Francis, Matthew

    Ionospheric perturbations can cause serious propagation errors in modern radio systems such as Global Navigation Satellite Systems (GNSS). Forecasting ionospheric parameters is helpful to estimate potential degradation of the performance of these systems. Our purpose is to establish an Australian Regional Total Electron Content (TEC) forecast model at IPS. In this work we present an approach based on the combined use of the Principal Component Analysis (PCA) and Artificial Neural Network (ANN) to predict future TEC values. PCA is used to reduce the dimensionality of the original TEC data by mapping it into its eigen-space. In this process the top-5 eigenvectors are chosen to reflect the directions of the maximum variability. An ANN approach was then used for the multicomponent prediction. We outline the design of the ANN model with its parameters. A number of activation functions along with different spectral ranges and different numbers of Principal Components (PCs) were tested to find the PCA-ANN models reaching the best results. Keywords: GNSS, Space Weather, Regional, Forecast, PCA, ANN.
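    A minimal numpy sketch of the PCA step, with a one-step linear autoregression standing in for the ANN (the actual model and TEC data are more involved); the "maps" below are synthetic harmonic fields and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(300)
# Synthetic "TEC maps": 300 time steps x 50 grid points driven by two
# diurnal/semidiurnal harmonic pairs (a toy stand-in for GNSS-derived TEC)
drivers = np.stack([np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24),
                    np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
modes = rng.normal(size=(4, 50))
maps = drivers.T @ modes + 0.1 * rng.normal(size=(300, 50))

# PCA: map the data into its eigen-space, keeping the top-5 eigenvectors
mean = maps.mean(axis=0)
U, S, Vt = np.linalg.svd(maps - mean, full_matrices=False)
basis = Vt[:5]
scores = (maps - mean) @ basis.T             # PC coefficient time series

# Stand-in predictor (the paper trains an ANN): one-step linear autoregression
# on the PC coefficients, i.e. a multicomponent prediction in eigen-space
X_lag, y = scores[:-1], scores[1:]
W, *_ = np.linalg.lstsq(X_lag, y, rcond=None)
y_hat = X_lag @ W

rel_err = np.mean((y_hat - y)**2) / np.mean(y**2)
print("relative one-step error:", round(float(rel_err), 3))
```

    A forecast map is recovered by mapping predicted coefficients back through the retained eigenvectors: `mean + pred_scores @ basis`.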

  20. Grey Relational Analysis Coupled with Principal Component Analysis for Optimization of Stereolithography Process to Enhance Part Quality

    NASA Astrophysics Data System (ADS)

    Raju, B. S.; Sekhar, U. Chandra; Drakshayani, D. N.

    2017-08-01

    The paper investigates optimization of the stereolithography process for SL5530 epoxy resin material to enhance part quality. The major performance characteristics selected to evaluate the process are tensile strength, flexural strength, impact strength, and density, and the corresponding process parameters are layer thickness, orientation, and hatch spacing. Because the process intrinsically involves tuning multiple parameters, grey relational analysis, which uses the grey relational grade as a performance index, is adopted to determine the optimal combination of process parameters. Moreover, principal component analysis is applied to evaluate the weighting values corresponding to the various performance characteristics so that their relative importance can be properly and objectively described. The results of confirmation experiments reveal that grey relational analysis coupled with principal component analysis can effectively acquire the optimal combination of process parameters. This confirms that the proposed approach can be a useful tool to improve process parameters in stereolithography, which is very useful information for machine designers as well as RP machine users.
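    The grey relational grade with PCA-derived weights can be sketched as follows; the response matrix is random placeholder data, and the weighting scheme (normalized squared first-PC loadings) is one common convention, not necessarily the exact one used in the paper:

```python
import numpy as np

# Hypothetical responses for 9 experimental runs x 4 quality characteristics
# (e.g. tensile, flexural, impact strength, density), all treated larger-the-better
rng = np.random.default_rng(4)
y = rng.uniform(10, 100, size=(9, 4))

# Step 1: larger-the-better normalization to [0, 1]
norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))

# Step 2: grey relational coefficients against the ideal sequence (all ones)
delta = 1.0 - norm
zeta = 0.5                                   # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: weights from PCA of the coefficient matrix -- squared loadings of
# the first principal component, normalized to sum to one
c = np.cov(grc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(c)
w = eigvecs[:, -1]**2                        # eigh returns ascending order
w /= w.sum()

grade = grc @ w                              # grey relational grade per run
best_run = int(np.argmax(grade))
print("weights:", w.round(3), "best run:", best_run)
```

    The run with the highest grade is the multi-response optimum under this weighting; characteristics that need smaller-the-better treatment would use the complementary normalization.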

  1. Application of diet-derived taste active components for clinical nutrition: perspectives from ancient Ayurvedic medical science, space medicine, and modern clinical nutrition.

    PubMed

    Kulkarni, Anil D; Sundaresan, Alamelu; Rashid, Muhammad J; Yamamoto, Shigeru; Karkow, Francisco

    2014-01-01

    The principal objective of this paper is to demonstrate the role of taste and flavor in health from the ancient science of Ayurveda to modern medicine; specifically, their mechanisms and roles in space medicine and their clinical relevance in modern health care. It also describes the brief history of the use of monosodium glutamate or flavor enhancers ("Umami substance") that improve the quality of food intake by stimulating chemosensory perception. In addition, dietary nucleotides are known to be components of the "Umami substance," and the benefit of their use has been proposed in various types of patients with cancer, radiation therapy, organ transplantation, and for application in space medicine.

  2. Adaptive momentum management for the dual keel Space Station

    NASA Technical Reports Server (NTRS)

    Hopkins, M.; Hahn, E.

    1987-01-01

    The report discusses momentum management for a large space structure, with the selected configuration being the Initial Orbital Configuration of the dual-keel Space Station. The external torques considered were gravity gradient and aerodynamic torques. The goal of the momentum management scheme developed is to remove the bias components of the external torques and center the cyclic components of the stored angular momentum. The scheme investigated is adaptive to uncertainties of the inertia tensor and requires only approximate knowledge of the principal moments of inertia. Computational requirements are minimal and should present no implementation problem in a flight-type computer. The method proposed is shown to be effective in the presence of attitude control bandwidths as low as 0.01 radian/sec.

  3. Derivation of Boundary Manikins: A Principal Component Analysis

    NASA Technical Reports Server (NTRS)

    Young, Karen; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar

    2008-01-01

    When designing any human-system interface, it is critical to provide realistic anthropometry to properly represent how a person fits within a given space. This study aimed to identify a minimum number of boundary manikins, or representative models of subjects' anthropometry from a target population, which would realistically represent the population. The boundary manikin anthropometry was derived using Principal Component Analysis (PCA). PCA is a statistical approach to reducing a multi-dimensional dataset using eigenvectors and eigenvalues. The measurements used in the PCA were identified as those critical for suit and cockpit design. The PCA yielded a total of 26 manikins per gender, as well as their anthropometry from the target population. Reduction techniques were implemented to reduce this number further, with a final result of 20 female and 22 male subjects. The anthropometry of the boundary manikins was then used to create 3D digital models (to be discussed in subsequent papers) intended for use by designers to test components of their space suit design, to verify that the requirements specified in the Human Systems Integration Requirements (HSIR) document are met. The end goal is to allow designers to generate suits which accommodate the diverse anthropometry of the user population.
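    A toy version of the boundary-manikin idea places candidate manikins at ±2 standard deviations along the leading principal axes of anthropometric data; the study's actual derivation, measurement set, and manikin counts differ, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic anthropometry: 500 subjects x 6 correlated measurements
# (stand-ins for stature, sitting height, arm span, ...), not the HSIR set
L = rng.normal(size=(6, 6))
data = rng.normal(size=(500, 6)) @ L.T + 170

mean = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort principal axes by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Boundary manikins: points at +/-2 standard deviations along each of the
# first 3 principal axes (6 manikins), mapped back to measurement space
k = 2.0
manikins = np.array([mean + s * k * np.sqrt(eigvals[i]) * eigvecs[:, i]
                     for i in range(3) for s in (+1, -1)])
print(manikins.shape)
```

    Each row is a full measurement vector for one boundary case; combinations of axes (e.g. corners of an ellipsoid) yield the larger manikin families described in the study.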

  4. Water reuse systems: A review of the principal components

    USGS Publications Warehouse

    Lucchetti, G.; Gray, G.A.

    1988-01-01

    Principal components of water reuse systems include ammonia removal, disease control, temperature control, aeration, and particulate filtration. Effective ammonia removal techniques include air stripping, ion exchange, and biofiltration. Selection of a particular technique largely depends on site-specific requirements (e.g., space, existing water quality, and fish densities). Disease control, although often overlooked, is a major problem in reuse systems. Pathogens can be controlled most effectively with ultraviolet radiation, ozone, or chlorine. Simple and inexpensive methods are available to increase oxygen concentration and eliminate gas supersaturation; these include commercial aerators, air injectors, and packed columns. Temperature control is a major advantage of reuse systems, but the equipment required can be expensive, particularly if water temperature must be rigidly controlled and ambient air temperature fluctuates. Filtration can be readily accomplished with a hydrocyclone or sand filter that increases overall system efficiency. Based on criteria of adaptability, efficiency, and reasonable cost, we recommend components for a small water reuse system.

  5. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  6. Linkage Analysis of Urine Arsenic Species Patterns in the Strong Heart Family Study

    PubMed Central

    Gribble, Matthew O.; Voruganti, Venkata Saroja; Cole, Shelley A.; Haack, Karin; Balakrishnan, Poojitha; Laston, Sandra L.; Tellez-Plaza, Maria; Francesconi, Kevin A.; Goessler, Walter; Umans, Jason G.; Thomas, Duncan C.; Gilliland, Frank; North, Kari E.; Franceschini, Nora; Navas-Acien, Ana

    2015-01-01

    Arsenic toxicokinetics are important for disease risks in exposed populations, but genetic determinants are not fully understood. We examined urine arsenic species patterns measured by HPLC-ICPMS among 2189 Strong Heart Study participants 18 years of age and older with data on ∼400 genome-wide microsatellite markers spaced ∼10 cM and arsenic speciation (683 participants from Arizona, 684 from Oklahoma, and 822 from North and South Dakota). We logit-transformed % arsenic species (% inorganic arsenic, %MMA, and %DMA) and also conducted principal component analyses of the logit % arsenic species. We used inverse-normalized residuals from multivariable-adjusted polygenic heritability analysis for multipoint variance components linkage analysis. We also examined the contribution of polymorphisms in the arsenic metabolism gene AS3MT via conditional linkage analysis. We localized a quantitative trait locus (QTL) on chromosome 10 (LOD 4.12 for %MMA, 4.65 for %DMA, and 4.84 for the first principal component of logit % arsenic species). This peak was partially but not fully explained by measured AS3MT variants. We also localized a QTL for the second principal component of logit % arsenic species on chromosome 5 (LOD 4.21) that was not evident from considering % arsenic species individually. Some other loci were suggestive or significant for 1 geographical area but not overall across all areas, indicating possible locus heterogeneity. This genome-wide linkage scan suggests genetic determinants of arsenic toxicokinetics to be identified by future fine-mapping, and illustrates the utility of principal component analysis as a novel approach that considers % arsenic species jointly. PMID:26209557
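    The logit-then-PCA preprocessing can be sketched on synthetic compositional data, with Dirichlet draws standing in for the measured % arsenic species; the concentration parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic % arsenic species for 100 subjects (inorganic As, MMA, DMA),
# each row summing to 100
raw = rng.dirichlet([2.0, 3.0, 12.0], size=100) * 100

# Logit-transform each percentage before PCA, as in the paper
p = raw / 100.0
logit = np.log(p / (1 - p))

# PCA of the logit % species via SVD of the centered matrix
Xc = logit - logit.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                           # joint "pattern" phenotypes
explained = S**2 / (S**2).sum()
print("variance explained:", explained.round(2))
```

    The PC score vectors are the joint phenotypes that the linkage analysis then treats as quantitative traits, alongside the individual logit % species.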

  7. Application of time series analysis on molecular dynamics simulations of proteins: a study of different conformational spaces by principal component analysis.

    PubMed

    Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C

    2004-09-08

    Time series analysis is applied on the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of alpha-amylase inhibitor tendamistat and the immunity protein of colicin E7, based on the Calpha coordinate history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 than those of alpha-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences of the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of alpha-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions, and the local minima are similar to a certain extent, while the heights of the energy barriers between the minima change significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins. Copyright 2004 American Institute of Physics
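    Fitting a simple time-series model to a collective coordinate can be illustrated with an AR(2) process standing in for the projection of an MD trajectory onto one principal component; the model class and coefficients here are illustrative, not those reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
# Stand-in for a collective coordinate: a damped-oscillatory AR(2) process,
# qualitatively like the projection of an MD trajectory onto one PC
n = 5000
x = np.zeros(n)
a1, a2 = 1.6, -0.8                            # true AR(2) coefficients
for i in range(2, n):
    x[i] = a1 * x[i-1] + a2 * x[i-2] + rng.normal()

# Fit an AR(2) time-series model to the coordinate by least squares
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Dominant frequency (per time step) from the roots of z^2 - a1*z - a2
roots = np.roots([1.0, -coef[0], -coef[1]])
freq = abs(np.angle(roots[0])) / (2 * np.pi)
print(coef.round(2), round(float(freq), 3))
```

    Comparing such fitted model parameters across independent runs is what allows the dynamics to be judged "similar" even when the PC directions differ.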

  8. Application of time series analysis on molecular dynamics simulations of proteins: A study of different conformational spaces by principal component analysis

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C.

    2004-09-01

    Time series analysis is applied on the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of α-amylase inhibitor tendamistat and the immunity protein of colicin E7, based on the Cα coordinate history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 than those of α-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences of the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of α-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions, and the local minima are similar to a certain extent, while the heights of the energy barriers between the minima change significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins.

  9. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  10. New insights into the folding of a β-sheet miniprotein in a reduced space of collective hydrogen bond variables: application to a hydrodynamic analysis of the folding flow.

    PubMed

    Kalgin, Igor V; Caflisch, Amedeo; Chekmarev, Sergei F; Karplus, Martin

    2013-05-23

    A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted principal component analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed, and it is complemented by a hydrodynamic analysis. The latter, making use of "passive tracers" in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this "hydrogen atom" of protein folding.

  11. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
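    The empirical KLE-via-SVD step, without the Bayesian layer, can be sketched on sampled Brownian-motion paths, whose analytic Karhunen-Loève eigenvalues are known in closed form; the sample and grid sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samp, n_t = 2000, 100
dt = 1.0 / n_t

# Sample paths of standard Brownian motion on (0, 1]
dW = rng.normal(scale=np.sqrt(dt), size=(n_samp, n_t))
paths = np.cumsum(dW, axis=1)

# Empirical KLE: SVD of the centered data matrix yields the principal
# components (discretized basis functions) and their variances
centered = paths - paths.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
lam = S**2 / n_samp * dt          # eigenvalue estimates (dt = quadrature weight)

# Analytic KL eigenvalues of Brownian motion: 1 / ((k - 1/2) * pi)^2
k = np.arange(1, 4)
lam_true = 1.0 / (((k - 0.5) * np.pi)**2)
print(lam[:3].round(4), lam_true.round(4))
```

    The gap between `lam` and `lam_true` at finite sample size is exactly the sampling error that the paper's Bayesian procedure models rather than ignores.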

  12. Bayesian estimation of Karhunen–Loève expansions: A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen–Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing a Singular Value Decomposition (SVD) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and this error is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure that yields a probabilistic model of the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen–Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.

  13. Automated Analysis, Classification, and Display of Waveforms

    NASA Technical Reports Server (NTRS)

    Kwan, Chiman; Xu, Roger; Mayhew, David; Zhang, Frank; Zide, Alan; Bonggren, Jeff

    2004-01-01

    A computer program partly automates the analysis, classification, and display of waveforms represented by digital samples. In the original application for which the program was developed, the raw waveform data to be analyzed by the program are acquired from space-shuttle auxiliary power units (APUs) at a sampling rate of 100 Hz. The program could also be modified for application to other waveforms -- for example, electrocardiograms. The program begins by performing principal-component analysis (PCA) of 50 normal-mode APU waveforms. Each waveform is segmented. A covariance matrix is formed by use of the segmented waveforms. Three eigenvectors corresponding to three principal components are calculated. To generate features, each waveform is then projected onto the eigenvectors. These features are displayed on a three-dimensional diagram, facilitating the visualization of the trend of APU operations.
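    The pipeline described here (after segmentation) reduces to covariance eigenanalysis followed by projection onto three eigenvectors. The synthetic waveforms below are a hypothetical stand-in for the 50 normal-mode APU records; the sampling rate and signal shape are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 50 normal-mode waveforms: 2 s at 100 Hz, slight frequency jitter.
t = np.arange(200) / 100.0
waveforms = np.array([
    np.sin(2 * np.pi * (3 + 0.1 * rng.standard_normal()) * t)
    + 0.05 * rng.standard_normal(t.size)
    for _ in range(50)
])

# Covariance matrix of the mean-removed waveforms.
centered = waveforms - waveforms.mean(axis=0)
cov = centered.T @ centered / (len(waveforms) - 1)

# Three leading eigenvectors = three principal components.
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pcs = eigvecs[:, -3:][:, ::-1]           # top three, descending

# Features: project each waveform onto the eigenvectors -> one 3-D point each,
# suitable for the three-dimensional trend display described in the abstract.
features = centered @ pcs                # shape (50, 3)
```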

  14. Understanding the pattern of the BSE Sensex

    NASA Astrophysics Data System (ADS)

    Mukherjee, I.; Chatterjee, Soumya; Giri, A.; Barat, P.

    2017-09-01

    An attempt is made to understand the pattern of behaviour of the BSE Sensex by analysing the tick-by-tick Sensex data for the years 2006 to 2012, on a yearly as well as a cumulative basis, using Principal Component Analysis (PCA) and its nonlinear variant, Kernel Principal Component Analysis (KPCA). The latter technique ensures that the nonlinear character of the interactions present in the system is captured in the analysis. The analysis is carried out by constructing vector spaces of varying dimensions. The size of the data set ranges from a minimum of 360,000 points for one year to a maximum of 2,520,000 for seven years. In all cases the prices appear to be highly correlated and restricted to a very low-dimensional subspace of the original vector space. An external perturbation is added to the system in the form of noise. It is observed that while standard PCA is unable to distinguish the behaviour of the noise-mixed data from that of the original, KPCA clearly identifies the effect of the noise. The exercise is extended to daily data from other stock markets, and similar results are obtained.
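    As a rough sketch of the PCA-versus-KPCA contrast, here is a minimal numpy kernel PCA with an RBF kernel on toy nonlinear data. The kernel choice, the `gamma` value, and the concentric-ring data are illustrative assumptions only, not the paper's setup.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Minimal kernel PCA sketch with an RBF kernel."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # center the kernel matrix in feature space
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    # Projections of the training points onto the leading feature-space axes.
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 1e-12))

# Toy nonlinear data: two concentric rings, which no linear projection untangles.
rng = np.random.default_rng(2)
ang = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(ang), r * np.sin(ang)] + 0.05 * rng.standard_normal((200, 2))

Z = kernel_pca(X, n_components=2, gamma=1.0)
```

The columns of `Z` are mutually orthogonal and ordered by the variance they capture in the (implicit) feature space, which is what lets KPCA pick up nonlinear structure that linear PCA misses.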

  15. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    PubMed

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets were used for experiments: three classes of wine derived from different cultivars, and five classes of green tea derived from five different provinces of China. In the former case, the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We conclude that 6–8 channels of the model, with a principal component feature vector covering at least 90% cumulative variance, are adequate for a classification task of 3–5 pattern classes, considering the trade-off between time consumption and classification rate.
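    The 90%-cumulative-variance criterion used above can be sketched as follows. The synthetic sensor-array data and the helper name `n_components_for_variance` are illustrative assumptions, not part of the paper.

```python
import numpy as np

def n_components_for_variance(X, threshold=0.90):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `threshold`."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    ratios = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)

# Toy sensor-array data: 3 strong latent factors observed through 16 channels.
rng = np.random.default_rng(3)
latent = rng.standard_normal((300, 3))
mixing = rng.standard_normal((3, 16))
X = latent @ mixing + 0.01 * rng.standard_normal((300, 16))

k = n_components_for_variance(X, 0.90)   # small, since only 3 factors drive X
```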

  16. A metric space for Type Ia supernova spectra: a new method to assess explosion scenarios

    NASA Astrophysics Data System (ADS)

    Sasdelli, Michele; Hillebrandt, W.; Kromer, M.; Ishida, E. E. O.; Röpke, F. K.; Sim, S. A.; Pakmor, R.; Seitenzahl, I. R.; Fink, M.

    2017-04-01

    Over the past years, Type Ia supernovae (SNe Ia) have become a major tool to determine the expansion history of the Universe, and considerable attention has been given to both observations and models of these events. However, their progenitors are still not known. The observed diversity of light curves and spectra seems to point at different progenitor channels and explosion mechanisms. Here, we present a new way to compare model predictions with observations in a systematic way. Our method is based on the construction of a metric space for SN Ia spectra by means of linear principal component analysis, taking care of missing and/or noisy data, and making use of partial least-squares regression to find correlations between spectral properties and photometric data. We investigate realizations of the three major classes of explosion models that are presently discussed: delayed-detonation Chandrasekhar-mass explosions, sub-Chandrasekhar-mass detonations and double-degenerate mergers, and compare them with data. We show that in the principal component space, all scenarios have observed counterparts, supporting the idea that different progenitors are likely. However, all classes of models face problems in reproducing the observed correlations between spectral properties, light curves and colours. Possible reasons are briefly discussed.

  17. Principal Components Analysis Studies of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, D. R.; Bell, J. F., III

    2001-11-01

    We present the principal components analysis (PCA) of absolutely calibrated multi-spectral images of Mars as a function of Martian season. The PCA technique is a mathematical rotation and translation of the data from a brightness/wavelength space to a vector space of principal "traits" that lie along the directions of maximal variance. The first of these traits, accounting for over 90% of the data variance, is overall brightness and is represented by an average Mars spectrum. Interpretation of the remaining traits, which account for the remaining ~10% of the variance, is not always the same; it depends upon what other components are in the scene and thus varies with Martian season. For example, during seasons with large amounts of water ice in the scene, the second trait correlates with the ice and anti-correlates with temperature. We investigate the interpretation of the second and successive important PCA traits. Although these PCA traits are orthogonal in their own vector space, it is unlikely that any one trait represents a singular, mineralogic, spectral end-member. It is more likely that many spectral endmembers vary identically to within the noise level, such that the PCA technique cannot distinguish them. Another possibility is that similar absorption features among spectral endmembers may be tied to one PCA trait, for example "amount of 2 μm absorption". We thus attempt to extract spectral endmembers by matching linear combinations of the PCA traits to the USGS, JHU, and JPL spectral libraries as acquired through the JPL ASTER project. The recovered spectral endmembers are then linearly combined to model the multi-spectral image set. We present here the spectral abundance maps of the water ice/frost endmember, which allow us to track Martian clouds and ground frosts. This work was supported in part through NASA Planetary Astronomy Grant NAG5-6776. All data were gathered at the NASA Infrared Telescope Facility in collaboration with the telescope operators and with thanks to the support staff and day crew.

  18. Modified neural networks for rapid recovery of tokamak plasma parameters for real time control

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Ranjan, P.

    2002-07-01

    Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real time plasma control. In the first method, unlike the conventional structure in which a single network with the optimum number of processing elements calculates the outputs, a multinetwork system connected in parallel performs the calculations; this network is called the double neural network. The accuracy of the recovered parameters is clearly better than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in this second type of modified network is found to be a further improvement over that of the double neural network. This result differs from that obtained in an earlier work, where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with that of the principal component based network. Fault tolerance of the neural networks has been tested: the double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.

  19. Extracting Topological Relations Between Indoor Spaces from Point Clouds

    NASA Astrophysics Data System (ADS)

    Tran, H.; Khoshelham, K.; Kealy, A.; Díaz-Vilariño, L.

    2017-09-01

    3D models of indoor environments are essential for many application domains such as navigation guidance, emergency management and a range of indoor location-based services. The principal components defined in different BIM standards contain not only building elements, such as floors, walls and doors, but also navigable spaces and their topological relations, which are essential for path planning and navigation. We present an approach to automatically reconstruct topological relations between navigable spaces from point clouds. Three types of topological relations, namely containment, adjacency and connectivity of the spaces are modelled. The results of initial experiments demonstrate the potential of the method in supporting indoor navigation.

  20. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

    In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented, which transforms the computation of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and experimental results on gene expression data demonstrate the effectiveness of the method.
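    The sample-space trick of representing each principal axis as a weighted sum of samples amounts to eigendecomposing the small samples-by-samples Gram matrix instead of the huge gene-by-gene covariance matrix. A sketch under assumed toy dimensions (20 samples, 5000 genes):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy "gene expression" matrix: 20 samples x 5000 genes (high-dimensional).
X = rng.standard_normal((20, 5000))
Xc = X - X.mean(axis=0)

# Gene-space PCA would eigendecompose a 5000x5000 covariance matrix.
# Sample-space PCA eigendecomposes the 20x20 Gram matrix instead ...
G = Xc @ Xc.T
w, a = np.linalg.eigh(G)
order = np.argsort(w)[::-1]
w, a = w[order], a[:, order]

# ... and maps each eigenvector back to gene space as a weighted sum of
# samples. Centering leaves rank 19, so 19 principal axes are recoverable.
V = Xc.T @ a[:, :19] / np.sqrt(w[:19])   # 5000 x 19, orthonormal columns
```

The columns of `V` coincide with the right singular vectors of `Xc`, so no information about the leading components is lost by working in sample space.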

  1. Discrimination of a chestnut-oak forest unit for geologic mapping by means of a principal component enhancement of Landsat multispectral scanner data.

    USGS Publications Warehouse

    Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.

    1981-01-01

    A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of eastern Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors

  2. A first application of independent component analysis to extracting structure from stock returns.

    PubMed

    Back, A D; Weigend, A S

    1997-08-01

    This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).

  3. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression, and the determination of the 'best' equation. An example describes how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and the operations of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, and carrying it out with SPSS yields a simplified, faster, and accurate statistical analysis.
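    The abstract describes SPSS procedures; for readers without SPSS, an equivalent principal component regression can be sketched in numpy. The collinear toy data and the choice of two retained components are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
# Collinear predictors: x2 is nearly a copy of x1, which destabilizes OLS.
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.001 * rng.standard_normal(n)
x3 = rng.standard_normal(n)
X = np.c_[x1, x2, x3]
y = 2 * x1 + 0.5 * x3 + 0.1 * rng.standard_normal(n)

# Principal component regression: regress y on the leading PC scores,
# then map the coefficients back to the original predictors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                   # drop the near-null collinear direction
scores = Xc @ Vt[:k].T                  # n x k principal component scores
gamma = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
beta = Vt[:k].T @ gamma                 # coefficients in predictor space
```

Dropping the tiny third component spreads the effect of the duplicated predictor evenly across `x1` and `x2` instead of producing the huge offsetting coefficients ordinary least squares would give.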

  4. Increasing the Coverage of Medicinal Chemistry-Relevant Space in Commercial Fragments Screening

    PubMed Central

    2014-01-01

    Analyzing the chemical space coverage of commercial fragment screening collections revealed that the overlap between bioactive medicinal chemistry substructures and rule-of-three-compliant fragments is only ∼25%. We recommend including these overlapping fragments in fragment screening libraries to maximize confidence in discovering hit matter within known bioactive chemical space, while incorporation of nonoverlapping substructures could offer novel hits in screening libraries. Using principal component analysis, polar and three-dimensional substructures display a higher-than-average enrichment of bioactive compounds, indicating that increasing the representation of these substructures may be beneficial in fragment screening. PMID:24405118

  5. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.

  6. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Summary Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  7. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approaches in the literature, which use only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  8. Hot spot of structural ambivalence in prion protein revealed by secondary structure principal component analysis.

    PubMed

    Yamamoto, Norifumi

    2014-08-21

    The conformational conversion of proteins into an aggregation-prone form is a common feature of various neurodegenerative disorders including Alzheimer's, Huntington's, Parkinson's, and prion diseases. In the early stage of prion diseases, secondary structure conversion in prion protein (PrP) causing β-sheet expansion facilitates the formation of a pathogenic isoform with a high content of β-sheets and strong aggregation tendency to form amyloid fibrils. Herein, we propose a straightforward method to extract essential information regarding the secondary structure conversion of proteins from molecular simulations, named secondary structure principal component analysis (SSPCA). The definite existence of a PrP isoform with an increased β-sheet structure was confirmed in a free-energy landscape constructed by mapping protein structural data into a reduced space according to the principal components determined by the SSPCA. We suggest a "spot" of structural ambivalence in PrP, the C-terminal part of helix 2, which lacks a strong intrinsic secondary structure, thus promoting a partial α-helix-to-β-sheet conversion. This result is important to understand how the pathogenic conformational conversion of PrP is initiated in prion diseases. The SSPCA has great potential to solve various challenges in studying highly flexible molecular systems, such as intrinsically disordered proteins, structurally ambivalent peptides, and chameleon sequences.

  9. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    PubMed Central

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets were used for experiments: three classes of wine derived from different cultivars, and five classes of green tea derived from five different provinces of China. In the former case, the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We conclude that 6–8 channels of the model, with a principal component feature vector covering at least 90% cumulative variance, are adequate for a classification task of 3–5 pattern classes, considering the trade-off between time consumption and classification rate. PMID:22736979

  10. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    PubMed

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
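    The generalized Hebbian algorithm that these orthogonalized variants build on can be sketched as follows. This is the plain GHA (Sanger's) rule, not the proposed algorithms, and the learning rate, dimensions, and diagonal covariance are illustrative assumptions.

```python
import numpy as np

def gha_step(W, x, lr):
    """One generalized Hebbian algorithm (Sanger's rule) update:
    W += lr * (y x^T - lower_triangular(y y^T) W), with y = W x."""
    y = W @ x
    return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(6)
# Stream of 5-D samples whose covariance diag(9, 4, .09, .04, .01)
# has two clearly dominant directions along the first two axes.
A = np.diag([3.0, 2.0, 0.3, 0.2, 0.1])
W = 0.1 * rng.standard_normal((2, 5))    # extract 2 principal components online
for _ in range(30000):
    x = A @ rng.standard_normal(5)
    W = gha_step(W, x, lr=5e-4)
```

After the stream is consumed, the rows of `W` align (up to sign) with the two leading eigenvectors of the data covariance, here the first two coordinate axes.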

  11. On the Fallibility of Principal Components in Research

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Li, Tenglong

    2017-01-01

    The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…

  12. New Insights into the Folding of a β-Sheet Miniprotein in a Reduced Space of Collective Hydrogen Bond Variables: Application to a Hydrodynamic Analysis of the Folding Flow

    PubMed Central

    Kalgin, Igor V.; Caflisch, Amedeo; Chekmarev, Sergei F.; Karplus, Martin

    2013-01-01

    A new analysis of the 20 μs equilibrium folding/unfolding molecular dynamics simulations of the three-stranded antiparallel β-sheet miniprotein (beta3s) in implicit solvent is presented. The conformation space is reduced in dimensionality by introduction of linear combinations of hydrogen bond distances as the collective variables making use of a specially adapted Principal Component Analysis (PCA); i.e., to make structured conformations more pronounced, only the formed bonds are included in determining the principal components. It is shown that a three-dimensional (3D) subspace gives a meaningful representation of the folding behavior. The first component, to which eight native hydrogen bonds make the major contribution (four in each beta hairpin), is found to play the role of the reaction coordinate for the overall folding process, while the second and third components distinguish the structured conformations. The representative points of the trajectory in the 3D space are grouped into conformational clusters that correspond to locally stable conformations of beta3s identified in earlier work. A simplified kinetic network based on the three components is constructed and it is complemented by a hydrodynamic analysis. The latter, making use of “passive tracers” in 3D space, indicates that the folding flow is much more complex than suggested by the kinetic network. A 2D representation of streamlines shows there are vortices which correspond to repeated local rearrangement, not only around minima of the free energy surface, but also in flat regions between minima. The vortices revealed by the hydrodynamic analysis are apparently not evident in folding pathways generated by transition-path sampling. Making use of the fact that the values of the collective hydrogen bond variables are linearly related to the Cartesian coordinate space, the RMSD between clusters is determined. 
Interestingly, the transition rates show an approximate exponential correlation with distance in the hydrogen bond subspace. Comparison with the many published studies shows good agreement with the present analysis for the parts that can be compared, supporting the robust character of our understanding of this “hydrogen atom” of protein folding. PMID:23621790

  13. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  14. A Data Analytics Approach to Discovering Unique Microstructural Configurations Susceptible to Fatigue

    NASA Astrophysics Data System (ADS)

    Jha, S. K.; Brockman, R. A.; Hoffman, R. M.; Sinha, V.; Pilchak, A. L.; Porter, W. J.; Buchanan, D. J.; Larsen, J. M.; John, R.

    2018-05-01

    Principal component analysis and fuzzy c-means clustering algorithms were applied to slip-induced strain and geometric metric data in an attempt to discover unique microstructural configurations and their frequencies of occurrence in statistically representative instantiations of a titanium alloy microstructure. Grain-averaged fatigue indicator parameters were calculated for the same instantiation. The fatigue indicator parameters strongly correlated with the spatial location of the microstructural configurations in the principal components space. The fuzzy c-means clustering method identified clusters of data that varied in terms of their average fatigue indicator parameters. Furthermore, the number of points in each cluster was inversely correlated to the average fatigue indicator parameter. This analysis demonstrates that data-driven methods have significant potential for providing unbiased determination of unique microstructural configurations and their frequencies of occurrence in a given volume from the point of view of strain localization and fatigue crack initiation.

  15. Principal Component and Linkage Analysis of Cardiovascular Risk Traits in the Norfolk Isolate

    PubMed Central

    Cox, Hannah C.; Bellis, Claire; Lea, Rod A.; Quinlan, Sharon; Hughes, Roger; Dyer, Thomas; Charlesworth, Jac; Blangero, John; Griffiths, Lyn R.

    2009-01-01

    Objective(s) An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. Methods This investigation focussed on 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits. Multipoint variance component methods in SOLAR were used to assess genome-wide linkage to the derived factors. A total of 285 of the 377 related individuals were informative for linkage analysis. Results A total of 4 principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h2 = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosomes 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) were observed to segregate with principal components 1 (h2 = 0.33) and 4 (h2 = 0.42), respectively. Conclusion(s) This study investigated a number of CVD risk traits in a unique isolated population. Findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c and total triglyceride levels. PMID:19339786

  16. How many atoms are required to characterize accurately trajectory fluctuations of a protein?

    NASA Astrophysics Data System (ADS)

    Cukier, Robert I.

    2010-06-01

    Large molecules, whose thermal fluctuations sample a complex energy landscape, exhibit motions on an extended range of space and time scales. Principal component analysis (PCA) is often used to extract dominant motions that in proteins are typically domain motions. These motions are captured in the large eigenvalue (leading) principal components. There is also information in the small eigenvalues, arising from approximate linear dependencies among the coordinates. These linear dependencies suggest that instead of using all the atom coordinates to represent a trajectory, it should be possible to use a reduced set of coordinates with little loss in the information captured by the large eigenvalue principal components. In this work, methods that can monitor the correlation (overlap) between a reduced set of atoms and any number of retained principal components are introduced. For application to trajectory data generated by simulations, where the overall translational and rotational motion needs to be eliminated before PCA is carried out, some difficulties with the overlap measures arise and methods are developed to overcome them. The overlap measures are evaluated for a trajectory generated by molecular dynamics for the protein adenylate kinase, which consists of a stable core domain and two more mobile domains, referred to as the LID domain and the AMP-binding domain. The use of reduced sets corresponding, for the smallest set, to one-eighth of the alpha carbon (CA) atoms relative to using all the CA atoms is shown to predict the dominant motions of adenylate kinase. The overlap between using all the CA atoms and all the backbone atoms is essentially unity for a sum over PCA modes that effectively capture the exact trajectory. A reduction to a few atoms (three in the LID and three in the AMP-binding domain) shows that at least the first principal component, characterizing a large part of the LID-binding and AMP-binding motion, is well described. Based on these results, the overlap criterion should be applicable as a guide to postulating and validating coarse-grained descriptions of generic biomolecular assemblies.
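    A common way to quantify this kind of subspace agreement is the root mean square inner product (RMSIP) between two sets of principal axes. The sketch below is not Cukier's specific reduced-atom overlap measure; it illustrates the generic idea on a toy "trajectory" with three dominant collective modes, comparing PCA modes computed from each half of the data.

```python
import numpy as np

def pca_modes(X, n):
    """Return the first n principal axes (rows) of data matrix X."""
    Z = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:n]

def rmsip(V, W):
    """Root mean square inner product between two sets of modes (rows)."""
    return np.sqrt(((V @ W.T) ** 2).sum() / len(V))

rng = np.random.default_rng(0)
# toy "trajectory": 500 frames, 30 coordinates, three dominant collective modes
latent = rng.normal(size=(500, 3)) * np.array([10.0, 5.0, 2.0])
X = latent @ rng.normal(size=(3, 30)) + 0.1 * rng.normal(size=(500, 30))

V1 = pca_modes(X[:250], 3)          # modes from the first half
V2 = pca_modes(X[250:], 3)          # modes from the second half
overlap = rmsip(V1, V2)             # close to 1 when the subspaces agree
```

    An RMSIP near 1 indicates that the retained modes span essentially the same subspace, which is the criterion being used above to judge reduced atom sets.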

  17. Evaluating methods to visualize patterns of genetic differentiation on a landscape.

    PubMed

    House, Geoffrey L; Hahn, Matthew W

    2018-05-01

    With advances in sequencing technology, research in the field of landscape genetics can now be conducted at unprecedented spatial and genomic scales. This has been especially evident when using sequence data to visualize patterns of genetic differentiation across a landscape due to demographic history, including changes in migration. Two recent model-based visualization methods that can highlight unusual patterns of genetic differentiation across a landscape, SpaceMix and EEMS, are increasingly used. While SpaceMix's model can infer long-distance migration, EEMS' model is more sensitive to short-distance changes in genetic differentiation, and it is unclear how these differences may affect their results in various situations. Here, we compare SpaceMix and EEMS side by side using landscape genetics simulations representing different migration scenarios. While both methods excel when patterns of simulated migration closely match their underlying models, they can produce either unintuitive or misleading results when the simulated migration patterns match their models less well, and this may be difficult to assess in empirical data sets. We also introduce unbundled principal components (un-PC), a fast, model-free method to visualize patterns of genetic differentiation by combining principal components analysis (PCA), which is already used in many landscape genetics studies, with the locations of sampled individuals. Un-PC has characteristics of both SpaceMix and EEMS and works well with simulated and empirical data. Finally, we introduce msLandscape, a collection of tools that streamline the creation of customizable landscape-scale simulations using the popular coalescent simulator ms and conversion of the simulated data for use with un-PC, SpaceMix and EEMS. © 2017 John Wiley & Sons Ltd.

  18. Principal Component Relaxation Mode Analysis of an All-Atom Molecular Dynamics Simulation of Human Lysozyme

    NASA Astrophysics Data System (ADS)

    Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi

    2013-02-01

    A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.

  19. Polarization Ratio Determination with Two Identical Linearly Polarized Antennas

    DTIC Science & Technology

    2017-01-17

    Fourier transform analysis of 21 measurements with one of the antennas rotating about its axis a circular polarization ratio is derived which can be... determined directly from a discrete Fourier transform (DFT) of (5). However, leakage between closely spaced DFT bins requires improving the... Fourier transform and a mechanical antenna rotation to separate the principal and opposite circular polarization components followed by a basis

  20. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    PubMed

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
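    On a common time grid, functional PCA reduces to an SVD of the centered curve matrix. The sketch below uses synthetic eGFR-like curves; the level-plus-slope generative model, grid size, and noise level are illustrative assumptions, not the clinical data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                         # common follow-up time grid
n = 80
# synthetic eGFR-like curves: subject-specific level and slope plus noise
level = rng.normal(60, 10, n)[:, None]
slope = rng.normal(-5, 3, n)[:, None]
curves = level + slope * t + rng.normal(0, 1, (n, 50))

mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)

scores = U[:, :2] * s[:2]                         # FPC scores per patient
var_explained = (s[:2] ** 2).sum() / (s ** 2).sum()
recon = mean_curve + scores @ Vt[:2]              # rank-2 curve estimates
```

    The `scores` matrix is what a clustering or outlier-ranking step would consume, and `recon` gives smoothed curve estimates of the kind used above to fill in missing glomerular filtration rate values.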

  1. Principal Curves on Riemannian Manifolds.

    PubMed

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  2. Extracting grid cell characteristics from place cell inputs using non-negative principal component analysis

    PubMed Central

    Dordek, Yedidyah; Soudry, Daniel; Meir, Ron; Derdikman, Dori

    2016-01-01

    Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights are learned via a generalized Hebbian rule. The architecture of this network highly resembles neural networks used to perform Principal Component Analysis (PCA). Both numerical results and analytic considerations indicate that if the components of the feedforward neural network are non-negative, the output converges to a hexagonal lattice. Without the non-negativity constraint, the output converges to a square lattice. Consistent with experiments, grid spacing ratio between the first two consecutive modules is ~1.4. Our results express a possible linkage between place cell to grid cell interactions and PCA. DOI: http://dx.doi.org/10.7554/eLife.10094.001 PMID:26952211
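    A single-output Hebbian learner of the kind this network generalizes can be sketched with Oja's rule plus a non-negativity clamp. The spatially correlated "place-cell-like" input covariance, the learning rate, and the iteration count are all illustrative assumptions; this shows only the learning rule, not the emergence of grid patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy place-cell-like inputs: 100 units with smooth spatial correlations
idx = np.arange(100)
C = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 10.0)
L = np.linalg.cholesky(C + 1e-9 * np.eye(100))   # sampler with covariance C

w = np.abs(rng.normal(size=100))
w /= np.linalg.norm(w)
eta = 0.002
for _ in range(5000):
    x = L @ rng.normal(size=100)   # one input pattern with covariance C
    y = w @ x                      # output activity
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term + normalization
    w = np.maximum(w, 0.0)         # non-negativity constraint on the weights
```

    Without the clamp, Oja's rule converges to the leading principal component of the input covariance; the clamp restricts the solution to the non-negative cone, which is the constraint the paper identifies as producing hexagonal rather than square lattices.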

  3. The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores

    ERIC Educational Resources Information Center

    Velicer, Wayne F.

    1976-01-01

    Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)

  4. The Butterflies of Principal Components: A Case of Ultrafine-Grained Polyphase Units

    NASA Astrophysics Data System (ADS)

    Rietmeijer, F. J. M.

    1996-03-01

    Dusts in the accretion regions of chondritic interplanetary dust particles [IDPs] consisted of three principal components: carbonaceous units [CUs], carbon-bearing chondritic units [GUs] and carbon-free silicate units [PUs]. Among others, differences among chondritic IDP morphologies and variable bulk C/Si ratios reflect variable mixtures of principal components. The spherical shapes of the initially amorphous principal components remain visible in many chondritic porous IDPs but fusion was documented for CUs, GUs and PUs. The PUs occur as coarse- and ultrafine-grained units that include so-called GEMS. Spherical principal components preserved in an IDP as recognisable textural units have unique properties with important implications for their petrological evolution from pre-accretion processing to protoplanet alteration and dynamic pyrometamorphism. Throughout their lifetime the units behaved as closed-systems without chemical exchange with other units. This behaviour is reflected in their mineralogies while the bulk compositions of principal components define the environments wherein they were formed.

  5. Benefit from NASA

    NASA Image and Video Library

    1997-03-05

    Scientists at Marshall Space Flight Center (MSFC) have been studying the properties of Aerogel for several years. Aerogel, the lightest solid known to man, has displayed a high quality for insulation. Because of its smoky countenance, it has yet to be used as an insulation on windows, but has been used in the space program on the rover Sojourner, and has been used as insulation in the walls of houses and in automobile engine compartments. Dr. David Noever of the Space Sciences Laboratory, Principal Investigator for Aerogel, applies heat to Aerogel and studies its properties, trying to uncover the secret to making Aerogel a clear substance. Once found, Aerogel will be a major component in the future of glass insulation.

  6. Classification of fMRI resting-state maps using machine learning techniques: A comparative study

    NASA Astrophysics Data System (ADS)

    Gallos, Ioannis; Siettos, Constantinos

    2017-11-01

    We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and diffusion maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial independent component analysis (ICA) to reduce (a) noise and (b) spatial-temporal dimensionality of fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and diffusion maps to find an embedded low-dimensional space. Finally, support-vector-machine (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
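    Two of the three reductions followed by an SVM can be sketched with scikit-learn (diffusion maps has no scikit-learn implementation, so only PCA and Isomap appear here); the Gaussian stand-in for the ICA cross-correlation features and the group difference are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# stand-in features: ICA-component cross-correlations, patients vs. controls
X = np.vstack([rng.normal(0.0, 1.0, (40, 45)), rng.normal(0.8, 1.0, (40, 45))])
y = np.array([0] * 40 + [1] * 40)

results = {}
for name, reducer in [("PCA", PCA(n_components=5)),
                      ("Isomap", Isomap(n_components=5, n_neighbors=10))]:
    Z = reducer.fit_transform(X)      # embed in a low-dimensional space
    results[name] = cross_val_score(SVC(kernel="linear"), Z, y, cv=5).mean()
```

    Note that fitting the reducer on all subjects before cross-validating leaks information; a rigorous comparison would fit the reducer inside each fold, e.g. with a scikit-learn Pipeline.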

  7. EM in high-dimensional spaces.

    PubMed

    Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim

    2005-06-01

    This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
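    The core idea, fitting Gaussians in the (N − 1)-dimensional space spanned by the N centered samples rather than compressing to a fixed low dimension, can be sketched as follows. This uses scikit-learn's GaussianMixture as a stand-in for the paper's PCA-inside-EM algorithm, on synthetic data; both choices are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
D, N = 100, 30                      # more feature dimensions than samples
X = np.vstack([rng.normal(0, 1, (15, D)), rng.normal(3, 1, (15, D))])

# orthonormal basis of the (N - 1)-dim space spanned by the centered samples
Z = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
B = Vt[: N - 1]                     # basis rows
Y = Z @ B.T                         # sample coordinates within the span

gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(Y)
labels = gmm.predict(Y)
```

    Because the samples carry no information outside their own span, working in the span coordinates loses nothing while making the covariance estimates tractable.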

  8. The influence of iliotibial band syndrome history on running biomechanics examined via principal components analysis.

    PubMed

    Foch, Eric; Milner, Clare E

    2014-01-03

    Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
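    The waveform-PCA approach can be sketched as follows: retain principal component scores computed from entire stance-phase waveforms, then compare groups with independent t-tests. The synthetic "hip adduction" waveforms, the group sizes, and the magnitude difference between groups are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 101)                        # normalized stance phase
base = 10 * np.sin(np.pi * t)                     # generic adduction-like shape
controls = base + rng.normal(0, 1, (20, 101)) + rng.normal(0, 0.5, (20, 1))
prev_itbs = 0.7 * base + rng.normal(0, 1, (20, 101)) + rng.normal(0, 0.5, (20, 1))

waves = np.vstack([controls, prev_itbs])
Zc = waves - waves.mean(axis=0)                   # center across participants
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = U[:, :3] * s[:3]                         # first three PC scores
retained = (s[:3] ** 2).sum() / (s ** 2).sum()    # variance captured

tstat, p = stats.ttest_ind(scores[:20, 0], scores[20:, 0])
```

    Here PC1 captures overall waveform magnitude, so a group difference in PC1 scores corresponds to the smaller hip adduction reported above.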

  9. Figures of merit for present and future dark energy probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortonson, Michael J.; Huterer, Dragan; Hu, Wayne

    2010-09-15

    We compare current and forecasted constraints on dynamical dark energy models from Type Ia supernovae and the cosmic microwave background using figures of merit based on the volume of the allowed dark energy parameter space. For a two-parameter dark energy equation of state that varies linearly with the scale factor, and assuming a flat universe, the area of the error ellipse can be reduced by a factor of ~10 relative to current constraints by future space-based supernova data and CMB measurements from the Planck satellite. If the dark energy equation of state is described by a more general basis of principal components, the expected improvement in volume-based figures of merit is much greater. While the forecasted precision for any single parameter is only a factor of 2-5 smaller than current uncertainties, the constraints on dark energy models bounded by -1 ≤ w ≤ 1 improve for approximately 6 independent dark energy parameters, resulting in a reduction of the total allowed volume of principal component parameter space by a factor of ~100. Typical quintessence models can be adequately described by just 2-3 of these parameters even given the precision of future data, leading to a more modest but still significant improvement. In addition to advances in supernova and CMB data, percent-level measurement of absolute distance and/or the expansion rate is required to ensure that dark energy constraints remain robust to variations in spatial curvature.

  10. Decoding and reconstructing color from responses in human visual cortex.

    PubMed

    Brouwer, Gijs Joost; Heeger, David J

    2009-11-04

    How is color represented by spatially distributed patterns of activity in visual cortex? Functional magnetic resonance imaging responses to several stimulus colors were analyzed with multivariate techniques: conventional pattern classification, a forward model of idealized color tuning, and principal component analysis (PCA). Stimulus color was accurately decoded from activity in V1, V2, V3, V4, and VO1 but not LO1, LO2, V3A/B, or MT+. The conventional classifier and forward model yielded similar accuracies, but the forward model (unlike the classifier) also reliably reconstructed novel stimulus colors not used to train (specify parameters of) the model. The mean responses, averaged across voxels in each visual area, were not reliably distinguishable for the different stimulus colors. Hence, each stimulus color was associated with a unique spatially distributed pattern of activity, presumably reflecting the color selectivity of cortical neurons. Using PCA, a color space was derived from the covariation, across voxels, in the responses to different colors. In V4 and VO1, the first two principal component scores (main source of variation) of the responses revealed a progression through perceptual color space, with perceptually similar colors evoking the most similar responses. This was not the case for any of the other visual cortical areas, including V1, although decoding was most accurate in V1. This dissociation implies a transformation from the color representation in V1 to reflect perceptual color space in V4 and VO1.

  11. Free-piston Stirling component test power converter test results and potential Stirling applications

    NASA Technical Reports Server (NTRS)

    Dochat, G. R.

    1992-01-01

    As the principal contractor to NASA-Lewis Research Center, Mechanical Technology Incorporated is under contract to develop free-piston Stirling power converters in the context of the competitive multiyear Space Stirling Technology Program. The first generation Stirling power converter, the component test power converter (CTPC) initiated cold end testing in 1991, with hot testing scheduled for summer of 1992. This paper reviews the test progress of the CTPC and discusses the potential of Stirling technology for various potential missions at given point designs of 250 watts, 2500 watts, and 25,000 watts.

  12. High Bypass Turbofan Component Development. Phase II. Fan Detail Design.

    DTIC Science & Technology

    1979-12-01

    Vane metal angles... Vane conical airfoil sections... Principal blade stresses at... 31.25 deg. The number of rotor airfoils is 20 while the stator has 42 vanes. The number of vanes and the vane-blade spacing were consequences of... effect of radius change are accounted for. Figure 16 shows the blade hub, mean, and tip conical airfoil sections in engine orientation. For

  13. Systematic comparison of the behaviors produced by computational models of epileptic neocortex.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warlaumont, A. S.; Lee, H. C.; Benayoun, M.

    2010-12-01

    Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified) are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing across different kinds of model data and across real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.

  14. Blind deconvolution with principal components analysis for wide-field and small-aperture telescopes

    NASA Astrophysics Data System (ADS)

    Jia, Peng; Sun, Rongyu; Wang, Weinan; Cai, Dongmei; Liu, Huigen

    2017-09-01

    Telescopes with a wide field of view (greater than 1°) and small apertures (less than 2 m) are workhorses for observations such as sky surveys and fast-moving object detection, and play an important role in time-domain astronomy. However, images captured by these telescopes are contaminated by optical system aberrations, atmospheric turbulence, tracking errors and wind shear. To increase the quality of images and maximize their scientific output, we propose a new blind deconvolution algorithm based on statistical properties of the point spread functions (PSFs) of these telescopes. In this new algorithm, we first construct the PSF feature space through principal component analysis, and then classify PSFs from different positions and times using a self-organizing map. According to the classification results, we group images of the same PSF type and select their PSFs to construct a prior PSF. The prior PSF is then used to restore these images. To investigate the improvement that this algorithm provides for data reduction, we process images of space debris captured by our small-aperture wide-field telescopes. Compared with reductions of the original images and of images processed with the standard Richardson-Lucy method, our method shows a promising improvement in astrometry accuracy.

  15. The BepiColombo Laser Altimeter (BeLA) power converter module (PCM): Concept and characterisation.

    PubMed

    Rodrigo, J; Gasquet, E; Castro, J-M; Herranz, M; Lara, L-M; Muñoz, M; Simon, A; Behnke, T; Thomas, N

    2017-03-01

    This paper presents the principal considerations when designing DC-DC converters for space instruments, in particular for the power converter module as part of the first European space laser altimeter: "BepiColombo Laser Altimeter" on board the European Space Agency-Japan Aerospace Exploration Agency (JAXA) mission BepiColombo. The main factors which determine the design of the DC-DC modules in space applications are printed circuit board occupation, mass, DC-DC converter efficiency, and environmental-survivability constraints. Topics included in the appropriated DC-DC converter design flow are hereby described. The topology and technology for the primary and secondary stages, input filters, transformer design, and peripheral components are discussed. Component selection and design trade-offs are described. Grounding, load and line regulation, and secondary protection circuitry (under-voltage, over-voltage, and over-current) are then introduced. Lastly, test results and characterization of the final flight design are also presented. Testing of the inrush current, the regulated output start-up, and the switching function of the power supply indicate that these performances are fully compliant with the requirements.

  16. Gambling, games of skill and human ecology: a pilot study by a multidimensional analysis approach.

    PubMed

    Valera, Luca; Giuliani, Alessandro; Gizzi, Alessio; Tartaglia, Francesco; Tambone, Vittoradolfo

    2015-01-01

    The present pilot study aims at analyzing the human activity of playing in the light of an indicator of human ecology (HE). We highlighted the four essential anthropological dimensions (FEAD), starting from the analysis of questionnaires administered to actual gamers. The coherence between theoretical construct and observational data is a remarkable proof-of-concept of the possibility of establishing an experimentally motivated link between a philosophical construct (coming from Huizinga's Homo ludens definition) and actual gamers' motivation pattern. The starting hypothesis is that the activity of playing becomes ecological (and thus not harmful) when it achieves harmony between the FEAD, thus realizing HE; conversely, it becomes at risk of creating some form of addiction when destroying the FEAD balance. We analyzed the data by means of variable clustering (oblique principal components) so as to experimentally verify the existence of the hypothesized dimensions. The subsequent projection of statistical units (gamers) on the orthogonal space spanned by principal components allowed us to generate a meaningful, albeit preliminary, clusterization of gamer profiles.

  17. WFIRST: Principal Components Analysis of H4RG-10 Near-IR Detector Data Cubes

    NASA Astrophysics Data System (ADS)

    Rauscher, Bernard

    2018-01-01

    The Wide Field Infrared Survey Telescope’s (WFIRST) Wide Field Instrument (WFI) incorporates an array of eighteen Teledyne H4RG-10 near-IR detector arrays. Because WFIRST’s science investigations require controlling systematic uncertainties to state-of-the-art levels, we conducted principal components analysis (PCA) of some H4RG-10 test data obtained in the NASA Goddard Space Flight Center Detector Characterization Laboratory (DCL). The PCA indicates that the Legendre polynomials provide a nearly orthogonal representation of up-the-ramp sampled illuminated data cubes, and suggests other representations that may provide an even more compact representation of the data in some circumstances. We hypothesize that by using orthogonal representations, such as those described here, it may be possible to control systematic errors better than has been achieved before for NASA missions. We believe that these findings are probably applicable to other H4RG, H2RG, and H1RG based systems.
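    Projecting an up-the-ramp sample sequence onto Legendre polynomials, the nearly orthogonal representation the PCA suggests, might look like the following sketch; the linear-plus-quadratic ramp model, frame count, and noise level are assumptions, not DCL data.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
# up-the-ramp samples: signal accumulates roughly linearly with slight curvature
t = np.linspace(-1, 1, 64)                        # frame times mapped to [-1, 1]
ramp = 500 * (t + 1) - 15 * (t + 1) ** 2 + rng.normal(0, 2, t.size)

coef = legendre.legfit(t, ramp, deg=3)            # project onto Legendre basis
recon = legendre.legval(t, coef)
rms_resid = np.sqrt(np.mean((ramp - recon) ** 2))
```

    Because the Legendre basis is nearly orthogonal on a uniform grid, the low-order coefficients (pedestal, slope, curvature) are close to independent, which is what makes this representation attractive for separating systematic effects.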

  18. Cross-visit tumor sub-segmentation and registration with outlier rejection for dynamic contrast-enhanced MRI time series data.

    PubMed

    Buonaccorsi, G A; Rose, C J; O'Connor, J P B; Roberts, C; Watson, Y; Jackson, A; Jayson, G C; Parker, G J M

    2010-01-01

    Clinical trials of anti-angiogenic and vascular-disrupting agents often use biomarkers derived from DCE-MRI, typically reporting whole-tumor summary statistics and so overlooking spatial parameter variations caused by tissue heterogeneity. We present a data-driven segmentation method comprising tracer-kinetic model-driven registration for motion correction, conversion from MR signal intensity to contrast agent concentration for cross-visit normalization, iterative principal components analysis for imputation of missing data and dimensionality reduction, and statistical outlier detection using the minimum covariance determinant to obtain a robust Mahalanobis distance. After applying these techniques we cluster in the principal components space using k-means. We present results from a clinical trial of a VEGF inhibitor, using time-series data selected because of problems due to motion and outlier time series. We obtained spatially-contiguous clusters that map to regions with distinct microvascular characteristics. This methodology has the potential to uncover localized effects in trials using DCE-MRI-based biomarkers.
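    The outlier-rejection and clustering stages could be sketched with scikit-learn's MinCovDet (robust Mahalanobis distances via the minimum covariance determinant) and k-means in the principal components space. The stand-in voxel time-series data, the 95% distance cutoff, and the cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.covariance import MinCovDet
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# stand-in voxel time series: 300 well-behaved voxels plus 10 outliers
X = np.vstack([rng.normal(0, 1, (300, 20)), rng.normal(8, 1, (10, 20))])

Z = PCA(n_components=5).fit_transform(X)          # reduced PC-space coordinates
mcd = MinCovDet(random_state=0).fit(Z)
d2 = mcd.mahalanobis(Z)                           # robust squared distances
keep = d2 < np.quantile(d2, 0.95)                 # reject the largest distances

# cluster the retained voxels (illustrative; here the inliers form one blob)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z[keep])
```

    The MCD fit ignores the most scattered points when estimating the covariance, so outlier time series receive large robust distances even when they would distort a classical covariance estimate.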

  19. Characterizing Time Series Data Diversity for Wind Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong

    Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve the forecasting accuracy. However, it is challenging to accurately compare the true forecasting performances from different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then the principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.
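    Quantifying diversity as the volume of the convex polytope in a 3D principal component space can be sketched as follows; the two synthetic datasets of six "characteristic indices" with differing spread are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# six characteristic indices per wind time series; two datasets, different spread
narrow = rng.normal(0, 0.5, (200, 6))
broad = rng.normal(0, 2.0, (200, 6))

# fit one PC space on everything so the two datasets are directly comparable
pca = PCA(n_components=3).fit(np.vstack([narrow, broad]))
vol = {name: ConvexHull(pca.transform(d)).volume
       for name, d in [("narrow", narrow), ("broad", broad)]}
```

    A dataset whose points fill more of the shared principal component space encloses a larger hull volume, which is the diversity metric described above.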

  20. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.

  1. State and group dynamics of world stock market by principal component analysis

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Lee, Jae Woo

    2016-05-01

    We study the dynamic interactions and structural changes by a principal component analysis (PCA) to cross-correlation coefficients of global financial indices in the years 1998-2012. The variances explained by the first PC increase with time and show a drastic change during the crisis. A sharp change in PC coefficient implies a transition of market state, a situation which occurs frequently in the American and Asian indices. However, the European indices remain stable over time. Using the first two PC coefficients, we identify indices that are similar and more strongly correlated than the others. We observe that the European indices form a robust group over the observation period. The dynamics of the individual indices within the group increase in similarity with time, and the dynamics of indices are more similar during the crises. Furthermore, the group formation of indices changes position in two-dimensional spaces due to crises. Finally, after a financial crisis, the difference of PCs between the European and American indices narrows.
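
    The market-mode analysis described above can be sketched as follows; the synthetic return series with a common factor stands in for the global index data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in return series for 10 indices: a shared "market mode" plus noise.
    T, N = 1000, 10
    market = rng.normal(size=T)
    returns = 0.6 * market[:, None] + rng.normal(size=(T, N))

    C = np.corrcoef(returns, rowvar=False)      # N x N cross-correlation matrix
    eigval, eigvec = np.linalg.eigh(C)
    eigval = eigval[::-1]                       # descending order
    explained = eigval / eigval.sum()           # variance explained per PC

    # During a crisis the common factor strengthens, so explained[0] rises
    # above the 1/N level expected for uncorrelated indices.
    print(explained[0] > 1.0 / N)
    ```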

  2. Principal component analysis for fermionic critical points

    NASA Astrophysics Data System (ADS)

    Costa, Natanael C.; Hu, Wenjian; Bai, Z. J.; Scalettar, Richard T.; Singh, Rajiv R. P.

    2017-11-01

    We use determinant quantum Monte Carlo (DQMC), in combination with the principal component analysis (PCA) approach to unsupervised learning, to extract information about phase transitions in several of the most fundamental Hamiltonians describing strongly correlated materials. We first explore the zero-temperature antiferromagnet to singlet transition in the periodic Anderson model, the Mott insulating transition in the Hubbard model on a honeycomb lattice, and the magnetic transition in the 1/6-filled Lieb lattice. We then discuss the prospects for learning finite temperature superconducting transitions in the attractive Hubbard model, for which there is no sign problem. Finally, we investigate finite temperature charge density wave (CDW) transitions in the Holstein model, where the electrons are coupled to phonon degrees of freedom, and carry out a finite size scaling analysis to determine Tc. We examine the different behaviors associated with Hubbard-Stratonovich auxiliary field configurations on both the entire space-time lattice and on a single imaginary time slice, or other quantities, such as equal-time Green's and pair-pair correlation functions.

  3. SESNPCA: Principal Component Analysis Applied to Stripped-Envelope Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Williamson, Marc; Bianco, Federica; Modjaz, Maryam

    2018-01-01

    In the new era of time-domain astronomy, it will become increasingly important to have rigorous, data driven models for classifying transients, including supernovae (SNe). We present the first application of principal component analysis (PCA) to stripped-envelope core-collapse supernovae (SESNe). Previous studies of SNe types Ib, IIb, Ic, and broad-line Ic (Ic-BL) focus only on specific spectral features, while our PCA algorithm uses all of the information contained in each spectrum. We use one of the largest compiled datasets of SESNe, containing over 150 SNe, each with spectra taken at multiple phases. Our work focuses on 49 SNe with spectra taken 15 ± 5 days after maximum V-band light where better distinctions can be made between SNe type Ib and Ic spectra. We find that spectra of SNe type IIb and Ic-BL are separable from the other types in PCA space, indicating that PCA is a promising option for developing a purely data driven model for SESNe classification.

  4. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  5. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    USDA-ARS?s Scientific Manuscript database

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

  6. Similarities between principal components of protein dynamics and random diffusion

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2000-12-01

    Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
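
    The central observation, that principal component projections of high-dimensional random diffusion are nearly cosines, is easy to reproduce numerically (trajectory length and dimension below are arbitrary choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # High-dimensional random diffusion: cumulative sum of Gaussian steps.
    T, d = 2000, 200
    traj = np.cumsum(rng.normal(size=(T, d)), axis=0)

    # PCA of the trajectory (coordinates centered over time).
    X = traj - traj.mean(axis=0)
    cov = X.T @ X / T
    eigval, eigvec = np.linalg.eigh(cov)
    p1 = X @ eigvec[:, -1]                  # projection onto the first PC

    # For pure diffusion the k-th projection approaches cos(k*pi*t/T).
    t = np.arange(T)
    cosine = np.cos(np.pi * (t + 0.5) / T)
    r = abs(np.corrcoef(p1, cosine)[0, 1])
    print(r > 0.8)
    ```

    A protein simulation whose leading projections look this cosine-like has not yet sampled beyond random diffusion, which is the convergence warning the abstract draws.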

  7. Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images

    PubMed Central

    Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali

    2015-01-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077

  8. A New 4D Trajectory-Based Approach Unveils Abnormal LV Revolution Dynamics in Hypertrophic Cardiomyopathy

    PubMed Central

    Madeo, Andrea; Piras, Paolo; Re, Federica; Gabriele, Stefano; Nardinocchi, Paola; Teresi, Luciano; Torromeo, Concetta; Chialastri, Claudia; Schiariti, Michele; Giura, Geltrude; Evangelista, Antonietta; Dominici, Tania; Varano, Valerio; Zachara, Elisabetta; Puddu, Paolo Emilio

    2015-01-01

    The assessment of left ventricular shape changes during cardiac revolution may be a new step in clinical cardiology to ease early diagnosis and treatment. To quantify these changes, previous work adopted only point registration, without the Generalized Procrustes Analysis and Principal Component Analysis that we applied earlier to study a group of healthy subjects. Here, we extend the original approach to patients affected by hypertrophic cardiomyopathy and preliminarily include genotype positive/phenotype negative individuals to explore the potential that incumbent pathology might also be detected. Using 3D Speckle Tracking Echocardiography, we recorded left ventricular shape in 48 healthy subjects, 24 patients affected by hypertrophic cardiomyopathy and 3 genotype positive/phenotype negative individuals. We then applied Generalized Procrustes Analysis and Principal Component Analysis, and inter-individual differences were cleaned by Parallel Transport performed on the tangent space, along the horizontal geodesic, between the per-subject consensuses and the grand mean. Endocardial and epicardial layers were evaluated separately, unlike in many echocardiographic applications. Under a common Principal Component Analysis, we then evaluated left ventricle morphological changes (at both layers) explained by first Principal Component scores. Trajectories’ shape and orientation were investigated and contrasted. Logistic regression and Receiver Operating Characteristic curves were used to compare these morphometric indicators with traditional 3D Speckle Tracking Echocardiography global parameters. Geometric morphometric indicators performed better than 3D Speckle Tracking Echocardiography global parameters in recognizing pathology both in systole and diastole. Genotype positive/phenotype negative individuals clustered with patients affected by hypertrophic cardiomyopathy during diastole, suggesting that incumbent pathology may indeed be foreseen by these methods. Left ventricle deformation in patients affected by hypertrophic cardiomyopathy, compared to healthy subjects, may thus be assessed better by modern shape analysis than by traditional 3D Speckle Tracking Echocardiography global parameters. Hypertrophic cardiomyopathy pathophysiology was unveiled in a new manner, whereby diastolic-phase abnormalities, which are more difficult to investigate by traditional echocardiographic techniques, also become evident. PMID:25875818

  9. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
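
    A minimal kernel PCA example in the spirit of this tutorial, using scikit-learn's KernelPCA on the classic two-circles toy dataset (not a dataset from the paper):

    ```python
    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.decomposition import KernelPCA

    # Two concentric circles are not linearly separable in input space, but an
    # RBF kernel PCA unfolds them along the first component.
    X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0)
    Z = kpca.fit_transform(X)

    # A simple threshold on the first kernel PC separates the two rings
    # (a linear PCA of the same data would not).
    inner, outer = Z[y == 1, 0], Z[y == 0, 0]
    thresh = (inner.mean() + outer.mean()) / 2.0
    pred = Z[:, 0] > thresh
    acc = max(np.mean(pred == (y == 1)), np.mean(pred == (y == 0)))
    print(acc > 0.9)
    ```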

  10. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis.

    PubMed

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-09-29

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variations caused by contrast agents. Such variations lead to the failure of the traditional intensity-based registration method. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation of two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold through gradual deformation of each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration in eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on clinical dataset registration. Experiments show that the proposed method effectively reduces movements while preserving the topology of contrast-enhancing structures and provides improved registration performance.
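
    Robust PCA itself can be sketched as principal component pursuit solved by an inexact augmented Lagrangian method; the snippet below is a generic low-rank + sparse decomposition on synthetic data, not the paper's registration pipeline (parameter heuristics follow the standard PCP formulation):

    ```python
    import numpy as np

    def shrink(X, tau):
        """Elementwise soft-thresholding."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def rpca(M, n_iter=500, tol=1e-7):
        """Robust PCA by principal component pursuit: M = L (low rank) + S (sparse)."""
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = m * n / (4.0 * np.abs(M).sum())
        S = np.zeros_like(M)
        Y = np.zeros_like(M)
        for _ in range(n_iter):
            # Singular-value thresholding gives the low-rank update...
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * shrink(s, 1.0 / mu)) @ Vt
            # ...and soft-thresholding gives the sparse update.
            S = shrink(M - L + Y / mu, lam / mu)
            R = M - L - S
            Y += mu * R
            if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                break
        return L, S

    # Synthetic check: rank-2 matrix plus 5% large sparse corruptions,
    # analogous to motion components vs. contrast-induced intensity changes.
    rng = np.random.default_rng(0)
    L0 = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))
    S0 = np.where(rng.random((50, 50)) < 0.05, 10.0, 0.0)
    L, S = rpca(L0 + S0)
    rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
    print(rel_err < 0.1)
    ```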

  11. Effects of mutation, truncation, and temperature on the folding kinetics of a WW domain.

    PubMed

    Maisuradze, Gia G; Zhou, Rui; Liwo, Adam; Xiao, Yi; Scheraga, Harold A

    2012-07-20

    The purpose of this work is to show how mutation, truncation, and change of temperature can influence the folding kinetics of a protein. This is accomplished by principal component analysis of molecular-dynamics-generated folding trajectories of the triple β-strand WW domain from formin binding protein 28 (FBP28) (Protein Data Bank ID: 1E0L) and its full-size, and singly- and doubly-truncated mutants at temperatures below and very close to the melting point. The reasons for biphasic folding kinetics [i.e., coexistence of slow (three-state) and fast (two-state) phases], including the involvement of a solvent-exposed hydrophobic cluster and another delocalized hydrophobic core in the folding kinetics, are discussed. New folding pathways are identified in free-energy landscapes determined in terms of principal components for full-size mutants. Three-state folding is found to be a main mechanism for folding the FBP28 WW domain and most of the full-size and truncated mutants. The results from the theoretical analysis are compared to those from experiment. Agreements and discrepancies between the theoretical and experimental results are discussed. Because of its importance in understanding protein kinetics and function, the diffusive mechanism by which the FBP28 WW domain and its full-size and truncated mutants explore their conformational space is examined in terms of the mean-square displacement and principal component analysis eigenvalue spectrum analyses. Subdiffusive behavior is observed for all studied systems. Copyright © 2012. Published by Elsevier Ltd.

  12. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Owing to advances in sensor technology, the growing volume of medical image data makes it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  13. Nonparametric method for genomics-based prediction of performance of quantitative traits involving epistasis in plant breeding.

    PubMed

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression.
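
    The SPCA step described above (supervised marker screening followed by principal components of the reduced marker matrix) can be sketched as follows; the genotype matrix, effect sizes and screening threshold are illustrative assumptions, with the threshold fixed rather than tuned by cross-validation as in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Stand-in genotype matrix (200 lines x 500 markers, coded 0/1/2) and a
    # phenotype driven by the first 20 markers plus noise.
    n, p = 200, 500
    G = rng.choice([0.0, 1.0, 2.0], size=(n, p))
    y = G[:, :20].sum(axis=1) + rng.normal(scale=2.0, size=n)

    # Supervised screening: keep markers whose univariate correlation with the
    # phenotype exceeds a threshold (tuned by cross-validation in the paper).
    Gc = G - G.mean(axis=0)
    yc = y - y.mean()
    r = (Gc * yc[:, None]).sum(axis=0) / (np.linalg.norm(Gc, axis=0) * np.linalg.norm(yc))
    keep = np.abs(r) > 0.15
    reduced = Gc[:, keep]

    # Supervised principal components (SPC) from the reduced marker matrix,
    # ready to enter a smoothing spline ANOVA model as covariates.
    _, _, Vt = np.linalg.svd(reduced, full_matrices=False)
    spc = reduced @ Vt[:5].T
    print(spc.shape)
    ```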

  14. Nonparametric Method for Genomics-Based Prediction of Performance of Quantitative Traits Involving Epistasis in Plant Breeding

    PubMed Central

    Sun, Xiaochun; Ma, Ping; Mumm, Rita H.

    2012-01-01

    Genomic selection (GS) procedures have proven useful in estimating breeding value and predicting phenotype with genome-wide molecular marker information. However, issues of high dimensionality, multicollinearity, and the inability to deal effectively with epistasis can jeopardize accuracy and predictive ability. We, therefore, propose a new nonparametric method, pRKHS, which combines the features of supervised principal component analysis (SPCA) and reproducing kernel Hilbert spaces (RKHS) regression, with versions for traits with no/low epistasis, pRKHS-NE, to high epistasis, pRKHS-E. Instead of assigning a specific relationship to represent the underlying epistasis, the method maps genotype to phenotype in a nonparametric way, thus requiring fewer genetic assumptions. SPCA decreases the number of markers needed for prediction by filtering out low-signal markers with the optimal marker set determined by cross-validation. Principal components are computed from reduced marker matrix (called supervised principal components, SPC) and included in the smoothing spline ANOVA model as independent variables to fit the data. The new method was evaluated in comparison with current popular methods for practicing GS, specifically RR-BLUP, BayesA, BayesB, as well as a newer method by Crossa et al., RKHS-M, using both simulated and real data. Results demonstrate that pRKHS generally delivers greater predictive ability, particularly when epistasis impacts trait expression. Beyond prediction, the new method also facilitates inferences about the extent to which epistasis influences trait expression. PMID:23226325

  15. An Introductory Application of Principal Components to Cricket Data

    ERIC Educational Resources Information Center

    Manage, Ananda B. W.; Scariano, Stephen M.

    2013-01-01

    Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…

  16. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…

  17. Identifying apple surface defects using principal components analysis and artifical neural networks

    USDA-ARS?s Scientific Manuscript database

    Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...

  18. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
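
    A related denoising workflow can be sketched with scikit-learn, whose KernelPCA learns the pre-image map by kernel ridge regression rather than the distance-constraint method proposed in the paper; digits data and kernel settings are illustrative choices:

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import KernelPCA

    # Denoising via kernel PCA needs a pre-image step: project noisy points
    # onto a few kernel PCs, then map back to input space.
    X = load_digits().data / 16.0
    rng = np.random.default_rng(0)
    noisy = X + 0.3 * rng.normal(size=X.shape)

    kpca = KernelPCA(n_components=32, kernel="rbf", gamma=0.01,
                     fit_inverse_transform=True, alpha=0.1)
    kpca.fit(noisy)
    denoised = kpca.inverse_transform(kpca.transform(noisy))

    # Truncating to 32 kernel components discards much of the added noise.
    err_noisy = np.mean((noisy - X) ** 2)
    err_denoised = np.mean((denoised - X) ** 2)
    print(err_denoised < err_noisy)
    ```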

  19. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    PubMed

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.

  20. High-dimensional inference with the generalized Hopfield model: principal component analysis and corrections.

    PubMed

    Cocco, S; Monasson, R; Sessak, V

    2011-05-01

    We consider the problem of inferring the interactions between a set of N binary variables from the knowledge of their frequencies and pairwise correlations. The inference framework is based on the Hopfield model, a special case of the Ising model where the interaction matrix is defined through a set of patterns in the variable space, and is of rank much smaller than N. We show that maximum likelihood inference is deeply related to principal component analysis when the amplitude of the pattern components ξ is negligible compared to √N. Using techniques from statistical mechanics, we calculate the corrections to the patterns to the first order in ξ/√N. We stress the need to generalize the Hopfield model and include both attractive and repulsive patterns in order to correctly infer networks with sparse and strong interactions. We present a simple geometrical criterion to decide how many attractive and repulsive patterns should be considered as a function of the sampling noise. We moreover discuss how many sampled configurations are required for a good inference, as a function of the system size N and of the amplitude ξ. The inference approach is illustrated on synthetic and biological data.

  1. Lightweight biometric detection system for human classification using pyroelectric infrared detectors.

    PubMed

    Burchett, John; Shankar, Mohan; Hamza, A Ben; Guenther, Bob D; Pitsianis, Nikos; Brady, David J

    2006-05-01

    We use pyroelectric detectors that are differential in nature to detect motion in humans by their heat emissions. Coded Fresnel lens arrays create boundaries that help to localize humans in space as well as to classify the nature of their motion. We design and implement a low-cost biometric tracking system by using off-the-shelf components. We demonstrate two classification methods by using data gathered from sensor clusters of dual-element pyroelectric detectors with coded Fresnel lens arrays. We propose two algorithms for person identification, a more generalized spectral clustering method and a more rigorous example that uses principal component regression to perform a blind classification.

  2. Finding Planets in K2: A New Method of Cleaning the Data

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Mullally, Fergal; Thompson, Susan E.

    2017-01-01

    We present a new method of removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes each light curve into principal components (eigenvectors), each with an associated eigenvalue whose magnitude reflects how much influence that basis vector has on the shape of the light curve. The method assumes that the most influential basis vectors correspond to the unwanted systematic variations produced by K2’s constant motion, so we correct the raw light curve by automatically fitting and removing the strongest principal components; these generally trace the flux variations that result from the motion of the star in the field of view. To choose how many components to remove, we estimate the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which yields the combined photometric precision value (SG-CDPP value) used in classic Kepler. We calculate this value after correcting the raw light curve with each element in a list of cumulative sums of principal components, giving as many noise estimates as there are principal components. We then take the derivative of the list of SG-CDPP values and select the number of principal components corresponding to the point at which the derivative effectively goes to zero; this is the optimal number of principal components to exclude when refitting the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2’s light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
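
    The core correction step, fitting and removing the strongest principal components from a raw light curve, can be sketched with a synthetic stand-in for K2 pixel data (the shared systematic trend and pixel count are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-in K2-like data: 50 pixel light curves sharing a common systematic
    # trend (spacecraft motion) plus independent noise.
    T = 1000
    t = np.linspace(0, 1, T)
    systematic = np.sin(2 * np.pi * 7 * t) * np.exp(-t)
    weights = rng.uniform(0.5, 1.5, 50)
    pixels = np.outer(systematic, weights) + 0.05 * rng.normal(size=(T, 50))

    raw = pixels.sum(axis=1)                   # raw aperture light curve

    # Principal components of the pixel time series; the strongest ones trace
    # the shared systematic, so fitting and subtracting them cleans the curve.
    X = pixels - pixels.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = U[:, :2] * s[:2]                   # top-2 PC time series
    coef, *_ = np.linalg.lstsq(basis, raw - raw.mean(), rcond=None)
    corrected = raw - basis @ coef

    print(corrected.std() < raw.std())
    ```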

  3. Regional assessment of trends in vegetation change dynamics using principal component analysis

    NASA Astrophysics Data System (ADS)

    Osunmadewa, B. A.; Csaplovics, E.; R. A., Majdaldin; Adeofun, C. O.; Aralova, D.

    2016-10-01

    Vegetation forms the basis for the existence of animal and human life. Due to changes in climate and human perturbation, most of the natural vegetation of the world has undergone some form of transformation, both in composition and structure. Increased anthropogenic activity over the last decades has posed a serious threat to the natural vegetation in Nigeria: many vegetated areas have either been converted to other land uses, such as clearing for agriculture, or lost entirely through indiscriminate removal of trees for charcoal, fuelwood and timber production. This study therefore examines the rate and degree of change in vegetation cover through the application of Principal Component Analysis (PCA) in the dry sub-humid region of Nigeria, using Normalized Difference Vegetation Index (NDVI) data spanning 1983-2011. The analysis uses the T-mode orientation approach, also known as standardized PCA, while trends are examined using ordinary least squares, the median (Theil-Sen) trend and a monotonic trend test. The trend analysis shows both positive and negative trends in vegetation change dynamics over the 29-year period examined. Five components were retained in the Principal Component Analysis: the first component explains about 98% of the total variance of the vegetation (NDVI), while components 2-5 each explain less than 1%. Two ancillary land use/land cover datasets of 2000 and 2009 from the European Space Agency (ESA) were used to further explain the changes observed in the Normalized Difference Vegetation Index; they show changes in land use pattern that can be attributed to anthropogenic activities such as cutting of trees for charcoal production, fuelwood collection and agricultural practices. The results demonstrate the suitability of remote sensing data for monitoring vegetation change in the dry sub-humid region of Nigeria.
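
    The Theil-Sen median trend used above can be computed directly with SciPy; the NDVI series below is a synthetic stand-in with a mild imposed greening trend, not the study's data:

    ```python
    import numpy as np
    from scipy.stats import theilslopes

    rng = np.random.default_rng(6)

    # Stand-in annual NDVI series (1983-2011) with a mild positive trend;
    # the study applies such tests per pixel alongside OLS.
    years = np.arange(1983, 2012)
    ndvi = 0.40 + 0.002 * (years - 1983) + 0.01 * rng.normal(size=years.size)

    # Theil-Sen: median of pairwise slopes, robust to outlier years.
    slope, intercept, lo, hi = theilslopes(ndvi, years)
    print(slope > 0)                # positive (greening) trend recovered
    ```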

  4. Directly reconstructing principal components of heterogeneous particles from cryo-EM images.

    PubMed

    Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali

    2015-08-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule... management plan. (c) Operator training and qualification. (d) Emission limitations and operating limits. (e...

  6. 40 CFR 60.2570 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What are the principal components of... Construction On or Before November 30, 1999 Use of Model Rule § 60.2570 What are the principal components of... (k) of this section. (a) Increments of progress toward compliance. (b) Waste management plan. (c...

  7. Principal Component Analysis of Lipid Molecule Conformational Changes in Molecular Dynamics Simulations.

    PubMed

    Buslaev, Pavel; Gordeliy, Valentin; Grudinin, Sergei; Gushchin, Ivan

    2016-03-08

    Molecular dynamics simulations of lipid bilayers are ubiquitous nowadays. Usually, either global properties of the bilayer or some particular characteristics of each lipid molecule are evaluated in such simulations, but the structural properties of the molecules as a whole are rarely studied. Here, we show how a comprehensive quantitative description of conformational space and dynamics of a single lipid molecule can be achieved via the principal component analysis (PCA). We illustrate the approach by analyzing and comparing simulations of DOPC bilayers obtained using eight different force fields: all-atom generalized AMBER, CHARMM27, CHARMM36, Lipid14, and Slipids and united-atom Berger, GROMOS43A1-S3, and GROMOS54A7. Similarly to proteins, most of the structural variance of a lipid molecule can be described by only a few principal components. These major components are similar in different simulations, although there are notable distinctions between the older and newer force fields and between the all-atom and united-atom force fields. The DOPC molecules in the simulations generally equilibrate on the time scales of tens to hundreds of nanoseconds. The equilibration is the slowest in the GAFF simulation and the fastest in the Slipids simulation. Somewhat unexpectedly, the equilibration in the united-atom force fields is generally slower than in the all-atom force fields. Overall, there is a clear separation between the more variable previous generation force fields and significantly more similar new generation force fields (CHARMM36, Lipid14, Slipids). We expect that the presented approaches will be useful for quantitative analysis of conformations and dynamics of individual lipid molecules in other simulations of lipid bilayers.

  8. Principal semantic components of language and the measurement of meaning.

    PubMed

    Samsonovich, Alexei V; Ascoli, Giorgio A

    2010-06-11

    Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that simultaneously represents synonymy and antonymy. Emergent semantics of the map's principal components are clearly identifiable: the first three correspond to the meanings of "good/bad" (valence), "calm/excited" (arousal), and "open/closed" (freedom), respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map, and to predict the connotation of words from their coordinates. The map's geometric characteristics include a limited number (approximately 4) of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal) components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet), among Western languages (English, French, German, and Spanish), and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step toward a universal metric system for semantics of human experiences, which is necessary for developing a rigorous science of the mind.

  9. Restoring Proprioception via a Cortical Prosthesis: A Novel Learning-Based Approach

    DTIC Science & Technology

    2015-10-01

    AWARD NUMBER: W81XWH-14-1-0510. TITLE: Restoring Proprioception via a Cortical Prosthesis: A Novel Learning-Based Approach. PRINCIPAL INVESTIGATOR: Philip Sabes, PhD. ... component of this lost sensation is proprioception, the feeling of where the body is in space. The importance of proprioception is often not appreciated

  10. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million

    PubMed Central

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    2015-01-01

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
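The subspace argument above lends itself to a short sketch. The following toy NumPy example (the data and shapes are illustrative assumptions, not the authors' implementation) computes bootstrap first principal components entirely in the n-dimensional coordinate space and maps one back to p dimensions only on demand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n subjects, p measurements per subject (p >> n).
n, p = 30, 2000
X = rng.standard_normal((n, p))
Xc = X - X.mean(axis=0)

# One SVD of the original sample. The rows of Xc lie in an n-dimensional
# subspace spanned by the right singular vectors in Vt.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s  # n x n coordinates of the subjects within that subspace

boot_first_pc_coords = []
for _ in range(200):
    idx = rng.integers(0, n, n)      # bootstrap resample of subjects
    B = scores[idx]                  # the resample lives in the same subspace
    Bc = B - B.mean(axis=0)
    # PCA of the resample in n-dim coordinates -- no p-dimensional work needed.
    _, _, Wt = np.linalg.svd(Bc, full_matrices=False)
    boot_first_pc_coords.append(Wt[0])

# A bootstrap first PC in the original p-dim space is coords @ Vt; it can be
# recovered on demand instead of storing a p-vector per resample.
pc1_boot = boot_first_pc_coords[0] @ Vt
```

Uncertainty metrics (e.g. standard errors of PC loadings) can be computed from the low-dimensional coordinates alone, which is what makes the method feasible for p in the millions.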

  11. Principal Workload: Components, Determinants and Coping Strategies in an Era of Standardization and Accountability

    ERIC Educational Resources Information Center

    Oplatka, Izhar

    2017-01-01

    Purpose: In order to fill the gap in theoretical and empirical knowledge about the characteristics of principal workload, the purpose of this paper is to explore the components of principal workload as well as its determinants and the coping strategies commonly used by principals to face this personal state. Design/methodology/approach:…

  12. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
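For reference, Horn's parallel analysis itself is simple to implement. The sketch below (toy data; covariance-based, retaining the leading run of sample eigenvalues above the simulated quantile) illustrates the procedure being analysed, not the Tracy-Widom test the authors recommend:

```python
import numpy as np

def parallel_analysis(X, n_sim=200, quantile=0.95, seed=0):
    """Horn's parallel analysis on a covariance matrix: retain the leading
    components whose sample eigenvalues exceed the chosen quantile of
    eigenvalues obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    sims = np.empty((n_sim, p))
    for i in range(n_sim):
        R = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.cov(R, rowvar=False))[::-1]
    thresh = np.quantile(sims, quantile, axis=0)
    k = 0
    while k < p and obs[k] > thresh[k]:
        k += 1
    return k

# Data with two strong components buried in unit-variance noise:
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = 3.0 * rng.standard_normal((2, 10))
X = factors @ loadings + rng.standard_normal((500, 10))
n_components = parallel_analysis(X)
```

Note the per-order comparison for k > 1 is exactly the step the abstract argues should be discouraged, since the higher-order sample eigenvalues of signal-plus-noise data are not distributed like the corresponding order statistics of pure noise.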

  13. In-situ resource utilization activities at the NASA Space Engineering Research Center

    NASA Technical Reports Server (NTRS)

    Ramohalli, Kumar

    1992-01-01

    The paper describes theoretical and experimental research activities at the NASA Space Engineering Research Center aimed at realizing significant cost savings in space missions through the use of locally available resources. The fundamental strategy involves idea generation, scientific screening, feasibility demonstrations, small-scale process plant design, extensive testing, scale-up to realistic production rates, associated controls, and 'packaging', while maintaining sufficient flexibility to respond to national needs in terms of specific applications. Aside from training, the principal activities at the Center include development of a quantitative figure-of-merit to quickly assess the overall mission impact of individual components that constantly change with advancing technologies, extensive tests on a single-cell test bed to produce oxygen from carbon dioxide, and the use of this spent stream to produce methane.

  14. The Influence Function of Principal Component Analysis by Self-Organizing Rule.

    PubMed

    Higuchi; Eguchi

    1998-07-28

    This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule was proposed by Xu and Yuille (1995), and its robustness was observed in their simulation study. In this article, the robustness of the algorithm against outliers is investigated using the theory of influence functions. The influence function of the principal component vector is given in explicit form. Through this expression, the method is shown to be robust against outliers lying in directions orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.
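A self-organizing rule of this kind (Oja-type Hebbian learning, as studied by Xu and Yuille) can be illustrated with a minimal simulation; the data distribution and learning-rate schedule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data whose dominant eigenvector is known from the
# covariance matrix used to generate it.
C = np.array([[3.0, 1.0], [1.0, 2.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=5000)

# Oja's self-organizing rule: w <- w + eta * y * (x - y * w), with y = w.x,
# drives w toward the dominant eigenvector of the data covariance.
w = rng.standard_normal(2)
for t, x in enumerate(X):
    eta = 1.0 / (100.0 + t)          # decaying learning rate
    y = w @ x
    w += eta * y * (x - y * w)

# Compare with the eigenvector from direct diagonalization.
vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
pc1 = vecs[:, -1]                    # eigenvector of the largest eigenvalue
alignment = abs((w / np.linalg.norm(w)) @ pc1)   # should be close to 1
```

An outlier added in the direction orthogonal to pc1 perturbs the converged w only weakly, which is the robustness property the influence-function analysis makes precise.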

  15. 3D Visualization of an Invariant Display Strategy for Hyperspectral Imagery

    DTIC Science & Technology

    2002-12-01

    ... to Remote Sensing, New York, New York: Guilford Press, March 2002. Deitel, H. M., Deitel, P. J., Nieto, T. R. and Lin, T. M., XML How to Program ... Principal Component Analysis (PCA) to rotate the data into a coordinate space, which can be used to display the data. This thesis demonstrates how to ... radiation band is the natural unit of data organization, the BSQ format is also easy to implement. Figure 2.5 shows how a scene originally sensed

  16. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  17. Climatic niche evolution in New World monkeys (Platyrrhini).

    PubMed

    Duran, Andressa; Meyer, Andreas L S; Pie, Marcio R

    2013-01-01

    Despite considerable interest in recent years on species distribution modeling and phylogenetic niche conservatism, little is known about the way in which climatic niches change over evolutionary time. This knowledge is of major importance to understand the mechanisms underlying limits of species distributions, as well as to infer how different lineages might be affected by anthropogenic climate change. In this study we investigate the tempo and mode of climatic niche evolution in New World monkeys (Platyrrhini). Climatic conditions found throughout the distribution of 140 primate species were investigated using a principal component analysis, which indicated that mean temperature (particularly during the winter) is the most important climatic correlate of platyrrhine geographical distributions, accounting for nearly half of the interspecific variation in climatic niches. The effects of precipitation were associated with the second principal component, particularly with respect to the dry season. When models of trait evolution were fit to scores on each of the principal component axes, significant phylogenetic signal was detected for PC1 scores, but not for PC2 scores. Interestingly, although all platyrrhine families occupied comparable regions of climatic space, some aotid species such as Aotus lemurinus, A. jorgehernandezi, and A. miconax show highly distinctive climatic niches associated with drier conditions (high PC2 scores). This shift might have been made possible by their nocturnal habits, which could serve as an exaptation that allows them to be less constrained by humidity during the night. These results underscore the usefulness of investigating explicitly the tempo and mode of climatic niche evolution and its role in determining species distributions.

  18. Web document ranking via active learning and kernel principal component analysis

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Chen, Honghui; Shu, Zhen

    2015-09-01

    Web document ranking arises in many information retrieval (IR) applications, such as search engines, recommendation systems, and online advertising. A challenging issue is how to select representative query-document pairs and informative features for better learning, and how to explore new ranking models that produce an acceptable ranking list of candidate documents for each query. In this study, we propose an active sampling (AS) plus kernel principal component analysis (KPCA) based ranking model, viz. AS-KPCA Regression, to study document ranking for a retrieval system, i.e. how to choose representative query-document pairs and features for learning. More precisely, we gradually add documents to the training set by AS, at each step selecting the document that would incur the highest expected DCG loss if left unselected. Then, KPCA is performed by projecting the selected query-document pairs onto p principal components in the feature space to complete the regression. Hence, we can cut down the computational overhead and suppress the impact of noise simultaneously. To the best of our knowledge, we are the first to perform document ranking via dimension reduction along two dimensions simultaneously, namely, the number of documents and the number of features. Our experiments demonstrate that the performance of our approach is better than that of the baseline methods on the public LETOR 4.0 datasets. Our approach yields an improvement of nearly 20% over RankBoost and the other baselines in terms of the MAP metric, with smaller improvements in P@K and NDCG@K. Moreover, our approach is particularly suitable for document ranking on noisy datasets in practice.
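A bare-bones kernel PCA step of the kind used here can be sketched as follows (RBF kernel and synthetic stand-in feature vectors are illustrative assumptions; the active-sampling loop and the regression stage are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_pca(X, p=2, gamma=0.1):
    """Project the rows of X onto the leading p principal components in
    the feature space induced by an RBF kernel."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # double-centre the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)       # ascending eigenvalues
    vals = vals[::-1][:p]
    vecs = vecs[:, ::-1][:, :p]
    # Scaling each unit eigenvector by sqrt(eigenvalue) gives the n x p
    # projections of the training points onto the kernel PCs.
    return vecs * np.sqrt(np.clip(vals, 0.0, None))

X = rng.standard_normal((50, 8))          # stand-in query-document features
Z = kernel_pca(X, p=3)                    # reduced representation for regression
```

The regression model would then be fit on Z rather than on the raw features, which is the dimension reduction on the feature side described in the abstract.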

  19. Laboratory-directed research and development: FY 1996 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, J.; Prono, J.

    1997-05-01

    This report summarizes the FY 1996 goals and accomplishments of Laboratory-Directed Research and Development (LDRD) projects. It gives an overview of the LDRD program, summarizes work done on individual research projects, and provides an index to the projects' principal investigators. Projects are grouped by their LDRD component: Individual Projects, Competency Development, and Program Development. Within each component, they are further divided into nine technical disciplines: (1) materials science, (2) engineering and base technologies, (3) plasmas, fluids, and particle beams, (4) chemistry, (5) mathematics and computational sciences, (6) atomic and molecular physics, (7) geoscience, space science, and astrophysics, (8) nuclear and particle physics, and (9) biosciences.

  20. Use of principal-component, correlation, and stepwise multiple-regression analyses to investigate selected physical and hydraulic properties of carbonate-rock aquifers

    USGS Publications Warehouse

    Brown, C. Erwin

    1993-01-01

    Correlation analysis, in conjunction with principal-component and multiple-regression analyses, was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine the dimensions of property variation of the samples, and to filter out variables containing similar information. Principal-component and correlation analyses showed that porosity is related to the other measured variables and that permeability is most closely related to porosity and grain size. Four principal components were found to be significant in explaining the variance of the data. Stepwise multiple-regression analysis was used to see how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not fully predicted by the other variables, but the regression is significant at the 5% level.

  1. Variety identification of brown sugar using short-wave near infrared spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Yang, Haiqing; Wu, Di; He, Yong

    2007-11-01

    Near-infrared spectroscopy (NIRS) is a rapid, pollution-free method for quantitative and qualitative analysis, characterized by high speed, non-destructiveness, high precision, and reliable detection data. A new approach for variety discrimination of brown sugars using short-wave NIR spectroscopy (800-1050 nm) was developed in this work. The relationship between the absorbance spectra and brown sugar varieties was established. The spectral data were compressed by principal component analysis (PCA). The resulting features can be visualized in principal component (PC) space, which can reveal structures correlated with the different classes of spectral samples, and appears to provide a reasonable variety clustering of brown sugars. The 2-D PC plot obtained using the first two PCs can be used for pattern recognition. Least-squares support vector machines (LS-SVM) were applied to solve the multivariate calibration problem in a relatively fast way. The work has shown that the short-wave NIR spectroscopy technique is suitable for brand identification of brown sugar, and that LS-SVM has better identification ability than PLS when the calibration set is small.
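The PCA compression and 2-D score plot described above can be reproduced on synthetic spectra; everything in this sketch (band positions, amplitudes, noise level) is an illustrative assumption, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated short-wave NIR absorbance spectra (800-1050 nm) for two
# hypothetical varieties: a shared broad baseline plus a variety-specific band.
wav = np.linspace(800, 1050, 126)

def spectra(band_center, n):
    base = np.exp(-((wav - 950.0) / 120.0) ** 2)
    band = 0.3 * np.exp(-((wav - band_center) / 15.0) ** 2)
    return base + band + 0.01 * rng.standard_normal((n, wav.size))

A = spectra(880.0, 20)       # variety A
B = spectra(1000.0, 20)      # variety B
X = np.vstack([A, B])

# PCA by SVD of the mean-centred spectra; keep the first two PC scores,
# i.e. the coordinates one would plot in the 2-D PC score plot.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_scores = U[:, :2] * s[:2]          # 40 x 2 score matrix
```

Because the between-variety band difference dominates the within-variety noise, the two groups separate along the first PC, which is the clustering behaviour the abstract reports.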

  2. Linear measurements of the neurocranium are better indicators of population differences than those of the facial skeleton: comparative study of 1,961 skulls.

    PubMed

    Holló, Gábor; Szathmáry, László; Marcsik, Antónia; Barta, Zoltán

    2010-02-01

    The aim of this study is to identify potential differences between two cranial regions used to differentiate human populations. We compared the neurocranium and the facial skeleton using skulls from the Great Hungarian Plain. The skulls date to the 1st-11th centuries, a long span of time that encompasses seven archaeological periods. We analyzed six neurocranial and seven facial measurements. The number of variables was reduced using principal components analysis. Linear mixed-effects models were fitted to the principal components of each archaeological period, and the models were then compared using multiple pairwise tests. The neurocranium showed significant differences in seven cases between nonsubsequent periods and in one case between two subsequent populations. For the facial skeleton, no significant results were found. Our results, which are also compared to previous craniofacial heritability estimates, suggest that the neurocranium is a more conservative region and that population differences can be detected better in the neurocranium than in the facial skeleton.

  3. Genetic algorithm applied to the selection of factors in principal component-artificial neural networks: application to QSAR study of calcium channel antagonist activity of 1,4-dihydropyridines (nifedipine analogous).

    PubMed

    Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba

    2003-01-01

    A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. The principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used for the selection of the best set of extracted principal components. A feed forward artificial neural network with a back-propagation of error algorithm was used to process the nonlinear relationship between the selected principal components and biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.

  4. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo

    2016-07-12

    Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA) which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
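The basis-expansion route to FPCA used above can be sketched in a few lines: fit Fourier-basis coefficients to each city's weekly curve, then run ordinary PCA on the coefficient matrix. The data below are synthetic, and the smoothing-parameter selection used in the study is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weekly series for 42 "cities": one periodic pattern per city plus noise,
# sampled at seven daily time points.
t = np.arange(7) / 7.0
cities = np.array([a * np.sin(2 * np.pi * t) + b * np.cos(2 * np.pi * t)
                   for a, b in rng.standard_normal((42, 2))])
cities += 0.1 * rng.standard_normal(cities.shape)

# Fourier basis evaluated at the sampling times (constant + first harmonic).
Bmat = np.column_stack([np.ones_like(t),
                        np.sin(2 * np.pi * t),
                        np.cos(2 * np.pi * t)])

# Least-squares basis coefficients for each city, then PCA on the coefficients:
# this is the basis-expansion formulation of functional PCA.
coef, *_ = np.linalg.lstsq(Bmat, cities.T, rcond=None)   # 3 x 42
C = coef.T - coef.T.mean(axis=0)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)    # variance explained by each FPC
```

A functional principal component is then a curve, Bmat @ Vt[k], rather than a loading vector over discrete time points, which is what makes the approach robust to missing samples.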

  5. 40 CFR 62.14505 - What are the principal components of this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 8 2010-07-01 2010-07-01 false What are the principal components of this subpart? 62.14505 Section 62.14505 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... components of this subpart? This subpart contains the eleven major components listed in paragraphs (a...

  6. A PCA-Based method for determining craniofacial relationship and sexual dimorphism of facial shapes.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Maddock, Steve; He, Taiping; Wang, Xingce; Deng, Qingqiong

    2017-11-01

    Previous studies have used principal component analysis (PCA) to investigate the craniofacial relationship, as well as sex determination using facial factors. However, few studies have investigated the extent to which the choice of principal components (PCs) affects the analysis of craniofacial relationship and sexual dimorphism. In this paper, we propose a PCA-based method for visual and quantitative analysis, using 140 samples of 3D heads (70 male and 70 female), produced from computed tomography (CT) images. There are two parts to the method. First, skull and facial landmarks are manually marked to guide the model's registration so that dense corresponding vertices occupy the same relative position in every sample. Statistical shape spaces of the skull and face in dense corresponding vertices are constructed using PCA. Variations in these vertices, captured in every principal component (PC), are visualized to observe shape variability. The correlations of skull- and face-based PC scores are analysed, and linear regression is used to fit the craniofacial relationship. We compute the PC coefficients of a face based on this craniofacial relationship and the PC scores of a skull, and apply the coefficients to estimate a 3D face for the skull. To evaluate the accuracy of the computed craniofacial relationship, the mean and standard deviation of every vertex between the two models are computed, where these models are reconstructed using real PC scores and coefficients. Second, each PC in facial space is analysed for sex determination, for which support vector machines (SVMs) are used. We examined the correlation between PCs and sex, and explored the extent to which the choice of PCs affects the expression of sexual dimorphism. Our results suggest that skull- and face-based PCs can be used to describe the craniofacial relationship and that the accuracy of the method can be improved by using an increased number of face-based PCs. 
    The results show that the accuracy of the sex classification is related to the choice of PCs. The highest sex classification rate achieved with our method is 91.43%.
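The craniofacial-relationship step, fitting face-based PC scores from skull-based PC scores by linear regression, can be sketched on synthetic scores (sample size matches the paper's 140 heads, but the PC dimensions, linear relationship, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: n subjects with skull-based PC scores S
# and face-based PC scores F that are (approximately) linearly related.
n, ks, kf = 140, 5, 5
S = rng.standard_normal((n, ks))
W_true = rng.standard_normal((ks, kf))          # unknown craniofacial relationship
F = S @ W_true + 0.1 * rng.standard_normal((n, kf))

# Fit the relationship by least-squares regression of face scores on skull
# scores, then predict face PC scores for a new skull.
W, *_ = np.linalg.lstsq(S, F, rcond=None)
new_skull_scores = rng.standard_normal(ks)
pred_face_scores = new_skull_scores @ W

# Recovery error of the fitted relationship relative to the ground truth.
err = np.linalg.norm(W - W_true) / np.linalg.norm(W_true)
```

In the paper, the predicted face PC scores would then be combined with the facial shape-space basis to reconstruct a dense 3D face for the skull.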

  7. Multivariate image analysis of laser-induced photothermal imaging used for detection of caries tooth

    NASA Astrophysics Data System (ADS)

    El-Sherif, Ashraf F.; Abdel Aziz, Wessam M.; El-Sharkawy, Yasser H.

    2010-08-01

    Time-resolved photothermal imaging has been investigated to characterize teeth for the purpose of discriminating between normal and carious areas of the hard tissue using a thermal camera. Ultrasonic thermoelastic waves were generated in hard tissue by the absorption of fiber-coupled Q-switched Nd:YAG laser pulses operating at 1064 nm, in conjunction with a laser-induced photothermal technique used to detect the thermal radiation waves for diagnosis of the human tooth. The concepts behind the use of photothermal techniques for off-line detection of carious tooth features were presented by our group in earlier work. This paper illustrates the application of multivariate image analysis (MIA) techniques to detect the presence of caries. MIA is used to rapidly detect the presence and quantity of common carious tooth features as they are scanned by high-resolution color (RGB) thermal cameras. Multivariate principal component analysis is used to decompose the acquired three-channel tooth images into a two-dimensional principal component (PC) space. Masking score-point clusters in the score space and highlighting the corresponding pixels in the image space of the two dominant PCs enables isolation of caries-defect pixels based on contrast and color information. The technique provides a qualitative result that can be used for early-stage caries detection. The proposed technique can potentially be used on-line, or in real time, to prescreen for caries through vision-based systems such as a real-time thermal camera. Experimental results on a large number of extracted teeth, as well as on a thermal image panorama of the teeth of a human volunteer, are investigated and presented.

  8. Neuromodulation and Synaptic Plasticity for the Control of Fast Periodic Movement: Energy Efficiency in Coupled Compliant Joints via PCA.

    PubMed

    Stratmann, Philipp; Lakatos, Dominic; Albu-Schäffer, Alin

    2016-01-01

    There are multiple indications that the nervous system of animals tunes muscle output to exploit natural dynamics of the elastic locomotor system and the environment. This is an advantageous strategy especially in fast periodic movements, since the elastic elements store energy and increase energy efficiency and movement speed. Experimental evidence suggests that coordination among joints involves proprioceptive input and neuromodulatory influence originating in the brain stem. However, the neural strategies underlying the coordination of fast periodic movements remain poorly understood. Based on robotics control theory, we suggest that the nervous system implements a mechanism to accomplish coordination between joints by a linear coordinate transformation from the multi-dimensional space representing proprioceptive input at the joint level into a one-dimensional controller space. In this one-dimensional subspace, the movements of a whole limb can be driven by a single oscillating unit as simple as a reflex interneuron. The output of the oscillating unit is transformed back to joint space via the same transformation. The transformation weights correspond to the dominant principal component of the movement. In this study, we propose a biologically plausible neural network to exemplify that the central nervous system (CNS) may encode our controller design. Using theoretical considerations and computer simulations, we demonstrate that spike-timing-dependent plasticity (STDP) for the input mapping and serotonergic neuromodulation for the output mapping can extract the dominant principal component of sensory signals. Our simulations show that our network can reliably control mechanical systems of different complexity and increase the energy efficiency of ongoing cyclic movements. The proposed network is simple and consistent with previous biologic experiments. 
Thus, our controller could serve as a candidate to describe the neural control of fast, energy-efficient, periodic movements involving multiple coupled joints.
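
    The idea that a local synaptic rule can extract the dominant principal component of its inputs, which the abstract above argues for STDP and serotonergic neuromodulation, can be illustrated with Oja's rule, the classic Hebbian learning rule known to converge to the first principal component. The sketch below illustrates that general principle on synthetic "proprioceptive" signals; it is not the paper's specific network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic input: 3 correlated joint signals whose dominant
# principal component is known by construction.
n_steps, eta = 20000, 0.01
latent = rng.standard_normal(n_steps)            # shared drive
mix = np.array([0.8, 0.5, 0.3])                  # true dominant direction
X = np.outer(latent, mix) + 0.05 * rng.standard_normal((n_steps, 3))

# Oja's rule: dw = eta * y * (x - y * w); converges to the unit-norm
# eigenvector of the input covariance with the largest eigenvalue.
w = rng.standard_normal(3)
w /= np.linalg.norm(w)
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# Compare against the dominant eigenvector of an explicit eigendecomposition.
C = X.T @ X / n_steps
evals, evecs = np.linalg.eigh(C)
pc1 = evecs[:, -1]                               # eigh sorts ascending
alignment = abs(w @ pc1) / np.linalg.norm(w)
print(round(alignment, 3))
```

The learned weight vector ends up (anti-)parallel to the dominant eigenvector, so the alignment approaches 1.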

  9. Neuromodulation and Synaptic Plasticity for the Control of Fast Periodic Movement: Energy Efficiency in Coupled Compliant Joints via PCA

    PubMed Central

    Stratmann, Philipp; Lakatos, Dominic; Albu-Schäffer, Alin

    2016-01-01

    There are multiple indications that the nervous system of animals tunes muscle output to exploit natural dynamics of the elastic locomotor system and the environment. This is an advantageous strategy especially in fast periodic movements, since the elastic elements store energy and increase energy efficiency and movement speed. Experimental evidence suggests that coordination among joints involves proprioceptive input and neuromodulatory influence originating in the brain stem. However, the neural strategies underlying the coordination of fast periodic movements remain poorly understood. Based on robotics control theory, we suggest that the nervous system implements a mechanism to accomplish coordination between joints by a linear coordinate transformation from the multi-dimensional space representing proprioceptive input at the joint level into a one-dimensional controller space. In this one-dimensional subspace, the movements of a whole limb can be driven by a single oscillating unit as simple as a reflex interneuron. The output of the oscillating unit is transformed back to joint space via the same transformation. The transformation weights correspond to the dominant principal component of the movement. In this study, we propose a biologically plausible neural network to exemplify that the central nervous system (CNS) may encode our controller design. Using theoretical considerations and computer simulations, we demonstrate that spike-timing-dependent plasticity (STDP) for the input mapping and serotonergic neuromodulation for the output mapping can extract the dominant principal component of sensory signals. Our simulations show that our network can reliably control mechanical systems of different complexity and increase the energy efficiency of ongoing cyclic movements. The proposed network is simple and consistent with previous biologic experiments. 
Thus, our controller could serve as a candidate to describe the neural control of fast, energy-efficient, periodic movements involving multiple coupled joints. PMID:27014051

  10. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, environmental constraints on acquisition, or bad-trace elimination, can degrade the performance of subsequent multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of these algorithms can partially tolerate irregular sampling. Accurate interpolation to provide the complete data they require is therefore a prerequisite, but its wide application is constrained by the large computational burden of huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, a curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency-wavenumber (PFK) domain is introduced. The complex-valued principal frequency components characterize the original signal with high accuracy while being roughly half its size, which provides a reasonable efficiency improvement. The irregularity of the observed data becomes incoherent noise in the PFK domain, and the curvelet coefficients of PFK-domain data may be sparser, enhancing interpolation accuracy. The performance of POCS-based algorithms using the complex-valued CT in the time-space (TX), principal frequency-space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method: with a smaller computational burden, it achieves better interpolation results, and it extends easily to higher dimensions.
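
    The POCS iteration at the heart of such methods alternates a sparsity projection in a transform domain with a data-consistency projection on the observed samples. A minimal 1-D sketch, using a plain FFT with a decaying hard threshold as a stand-in for the paper's curvelet transform and PFK domain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: a sparse-spectrum signal (two sinusoids), a toy stand-in
# for seismic traces.
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 10 * t / n) + 0.5 * np.sin(2 * np.pi * 25 * t / n)

# Randomly decimate 50% of the samples to mimic irregular acquisition.
mask = rng.random(n) < 0.5
observed = signal * mask

# POCS: alternate (1) hard thresholding in the transform domain and
# (2) re-inserting the observed samples, with a decaying threshold.
x = observed.copy()
tau = np.abs(np.fft.fft(observed)).max()
for it in range(100):
    spec = np.fft.fft(x)
    spec[np.abs(spec) < tau] = 0.0     # projection 1: sparsity
    x = np.real(np.fft.ifft(spec))
    x[mask] = signal[mask]             # projection 2: data consistency
    tau *= 0.9

err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
print(round(err, 3))
```

Because the signal is very sparse in the Fourier domain, the missing traces are recovered with a small relative error.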

  11. Rapid differentiation of Chinese hop varieties (Humulus lupulus) using volatile fingerprinting by HS-SPME-GC-MS combined with multivariate statistical analysis.

    PubMed

    Liu, Zechang; Wang, Liping; Liu, Yumei

    2018-01-18

    Hops impart flavor to beer, with the volatile components characterizing the various hop varieties and qualities. Fingerprinting, especially flavor fingerprinting, is often used to identify 'flavor products', because inconsistencies in the description of flavor may lead to an incorrect definition of beer quality. Compared to flavor fingerprinting, volatile fingerprinting is simpler and easier. We performed volatile fingerprinting using headspace solid-phase microextraction gas chromatography-mass spectrometry (HS-SPME-GC-MS) combined with similarity analysis and principal component analysis (PCA) to evaluate and distinguish between three major Chinese hops. Eighty-four volatiles were identified and classified into seven categories. Volatile fingerprinting based on similarity analysis did not yield any obvious result. By contrast, hop varieties and qualities were identified using volatile fingerprinting based on PCA. The potential variables explained the variance in the three hop varieties. In addition, the dendrogram and principal component score plot described the differences and classifications of the hops. Volatile fingerprinting plus multivariate statistical analysis can thus rapidly differentiate between the varieties and qualities of the three major Chinese hops, and the method can serve as a reference in other fields. © 2018 Society of Chemical Industry.

  12. The Reflexive Adaptations of School Principals in a "Local" South African Space

    ERIC Educational Resources Information Center

    Fataar, Aslam

    2009-01-01

    This paper is an analysis of the work of three principals in an impoverished black township in post-apartheid South Africa. Based on qualitative approaches, it discusses the principals' entry into the township, and their navigation of their schools' surrounding social dynamics. It combines the lenses of "space" and…

  13. Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki

    2004-04-01

    We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.

  14. Rear shape in 3 dimensions summarized by principal component analysis is a good predictor of body condition score in Holstein dairy cows.

    PubMed

    Fischer, A; Luginbühl, T; Delattre, L; Delouard, J M; Faverdin, P

    2015-07-01

    Body condition is an indirect estimation of the level of body reserves, and its variation reflects cumulative variation in energy balance. It interacts with reproductive and health performance, which are important to consider in dairy production but not easy to monitor. The commonly used body condition score (BCS) is time consuming, subjective, and not very sensitive. The aim was therefore to develop and validate a method assessing BCS with 3-dimensional (3D) surfaces of the cow's rear. A camera captured 3D shapes 2 m from the floor in a weigh station at the milking parlor exit. The BCS was scored by 3 experts on the same day as 3D imaging. Four anatomical landmarks had to be identified manually on each 3D surface to define a space centered on the cow's rear. A set of 57 3D surfaces from 56 Holstein dairy cows was selected to cover a large BCS range (from 0.5 to 4.75 on a 0 to 5 scale) to calibrate 3D surfaces on BCS. After performing a principal component analysis on this data set, multiple linear regression was fitted on the coordinates of these surfaces in the principal components' space to assess BCS. The validation was performed on 2 external data sets: one with cows used for calibration, but at a different lactation stage, and one with cows not used for calibration. Additionally, 6 cows were scanned once and their surfaces processed 8 times each for repeatability and then these cows were scanned 8 times each the same day for reproducibility. The selected model showed perfect calibration and a good but weaker validation (root mean square error=0.31 for the data set with cows used for calibration; 0.32 for the data set with cows not used for calibration). Assessing BCS with 3D surfaces was 3 times more repeatable (standard error=0.075 versus 0.210 for BCS) and 2.8 times more reproducible than manually scored BCS (standard error=0.103 versus 0.280 for BCS). 
The prediction error was similar for both validation data sets, indicating that the method is not less efficient for cows not used for calibration. The major part of reproducibility error incorporates repeatability error. An automation of the anatomical landmarks identification is required, first to allow broadband measures of body condition and second to improve repeatability and consequently reproducibility. Assessing BCS using 3D imaging coupled with principal component analysis appears to be a very promising means of improving precision and feasibility of this trait measurement. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
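
    The calibration step described above, regressing BCS on coordinates in the principal components' space, is essentially principal component regression. A minimal sketch on synthetic data standing in for the 3D surfaces (the dimensions and coefficients here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each "rear surface" is a high-dimensional descriptor; the true score
# depends on a few latent shape directions plus noise.
n_cows, n_dims, n_latent = 57, 200, 3
latent = rng.standard_normal((n_cows, n_latent))
loadings = rng.standard_normal((n_latent, n_dims))
surfaces = latent @ loadings + 0.1 * rng.standard_normal((n_cows, n_dims))
bcs = latent @ np.array([1.0, -0.5, 0.3]) + 0.05 * rng.standard_normal(n_cows)

# Step 1: PCA via SVD of the centered data.
mean = surfaces.mean(axis=0)
U, s, Vt = np.linalg.svd(surfaces - mean, full_matrices=False)
k = 3
scores = (surfaces - mean) @ Vt[:k].T      # coordinates in PC space

# Step 2: multiple linear regression of BCS on the PC coordinates.
A = np.column_stack([scores, np.ones(n_cows)])
coef, *_ = np.linalg.lstsq(A, bcs, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - bcs) ** 2))
print(round(rmse, 3))
```

Because the first three components capture the latent shape variation, the regression on PC scores predicts the score with a small error.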

  15. Principals' Perceptions Regarding Their Supervision and Evaluation

    ERIC Educational Resources Information Center

    Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann

    2015-01-01

    This study examined the perceptions of principals concerning principal evaluation and supervisory feedback. Principals were asked two open-ended questions. Respondents included 82 principals in the Rocky Mountain region. The emerging themes were "Superintendent Performance," "Principal Evaluation Components," "Specific…

  16. Changes in element contents of four lichens over 11 years in the Boundary Waters Canoe Area Wilderness, northern Minnesota

    USGS Publications Warehouse

    Bennett, J.P.; Wetmore, C.M.

    1999-01-01

    Four species of lichen (Cladina rangiferina, Evernia mesomorpha, Hypogymnia physodes, and Parmelia sulcata) were sampled at six locations in the Boundary Waters Canoe Area Wilderness three times over a span of 11 years and analyzed for concentrations of 16 chemical elements to test the hypotheses that corticolous species would accumulate higher amounts of chemical elements than terricolous species, and that 11 years were sufficient to detect spatial patterns and temporal trends in element contents. Multivariate analyses of over 2770 data points revealed two principal components that accounted for 68% of the total variance in the data. These two components, the first highly loaded with Al, B, Cr, Fe, Ni and S, and the second loaded with Ca, Cd, Mg and Mn, were inversely related to each other over time and space. The first component was interpreted as consisting of an anthropogenic and a dust component, while the second, primarily a nutritional component. Cu, K, Na, P, Pb and Zn were not highly loaded on either component. Component 1 decreased significantly over the 11 years and from west to east, while component 2 increased. The corticolous species were more enriched in heavy metals than the terricolous species. All four elements in component 2 in H. physodes were above enrichment thresholds for this species. Species differences on the two components were greater than the effects of time and space, suggesting that biomonitoring with lichens is strongly species dependent. Some localities in the Boundary Waters Canoe Area Wilderness appear enriched in some anthropogenic elements for no obvious reasons.

  17. Study of liquid oxygen/liquid hydrogen auxiliary propulsion systems for the space tug

    NASA Technical Reports Server (NTRS)

    Nichols, J. F.

    1975-01-01

    Design concepts are considered that permit use of a liquid-liquid (as opposed to gas-gas) oxygen/hydrogen thrust chamber for attitude control and auxiliary propulsion thrusters on the space tug. The best of the auxiliary propulsion system concepts are defined and their principal characteristics, including cost as well as operational capabilities, are established. Design requirements for each of the major components of the systems, including thrusters, are developed at the conceptual level. The competitive concepts considered use both dedicated (separate tanks) and integrated (propellant from main propulsion tanks) propellant supply. The integrated concept is selected as best for the space tug after comparative evaluation against both cryogenic and storable propellant dedicated systems. A preliminary design of the selected system is established and recommendations for supporting research and technology to further the concept are presented.

  18. Electric motor designs for attenuating torque disturbance in sensitive space mechanisms

    NASA Astrophysics Data System (ADS)

    Marks, David B.; Fink, Richard A.

    2003-09-01

    When a motion control system introduces unwanted torque jitter and motion anomalies into sensitive space flight optical or positioning mechanisms, the pointing accuracy, positioning capability, or scanning resolution of the mission suffers. Special motion control technology must be employed to provide attenuation of the harmful torque disturbances. Brushless DC (BLDC) Motors with low torque disturbance characteristics have been successfully used on such notable missions as the Hubble Space Telescope when conventional approaches to motor design would not work. Motor designs for low disturbance mechanisms can include two and three phase sinusoidal BLDC motors, BLDC motors without iron teeth, and sometimes skewed or non-integral slot designs for motors commutated with Hall effect devices. The principal components of motor torque disturbance, successful BLDC motor designs for attenuating disturbances, and design trade-offs for optimum performance are examined.

  19. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD)-based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, the PCA and Isomap algorithms are used to classify and visualize this parameter vector, separating damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, maximizing the visual separation and grouping of parameter vectors in three-dimensional space. PMID:29143772
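
    The overall pattern, extracting damage-sensitive statistical features from vibration signals and then letting PCA separate damaged from healthy cases, can be sketched without the EEMD and PSO stages. The signal model and feature set below are illustrative stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

def features(sig):
    """Damage-sensitive statistics (a small stand-in for the paper's
    IMF-based feature vector): RMS, kurtosis, and crest factor."""
    rms = np.sqrt(np.mean(sig ** 2))
    kurt = np.mean((sig - sig.mean()) ** 4) / np.var(sig) ** 2
    crest = np.max(np.abs(sig)) / rms
    return np.array([rms, kurt, crest])

def vibration(damaged):
    """Synthetic bearing signal: damage adds periodic impacts."""
    t = np.arange(2048)
    sig = np.sin(2 * np.pi * t / 32) + 0.3 * rng.standard_normal(t.size)
    if damaged:
        sig[::128] += 4.0              # impulsive fault signature
    return sig

X = np.array([features(vibration(d)) for d in [False] * 20 + [True] * 20])

# PCA on standardized features; the two groups separate along PC1.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pc1 = Z @ Vt[0]
gap = abs(pc1[:20].mean() - pc1[20:].mean())
print(round(gap, 2))
```

Impulsive faults raise kurtosis and crest factor sharply, so the healthy and damaged groups are well separated on the first component.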

  20. [Assessment of the strength of tobacco control on creating smoke-free hospitals using principal components analysis].

    PubMed

    Liu, Hui-lin; Wan, Xia; Yang, Gong-huan

    2013-02-01

    To explore the relationship between the strength of tobacco control and the effectiveness of creating smoke-free hospitals, and to summarize the main factors that affect such programs, a total of 210 hospitals from 7 provinces/municipalities directly under the central government were enrolled in this study using a stratified random sampling method. Principal component analysis and regression analysis were conducted to analyze the strength of tobacco control and the effectiveness of creating smoke-free hospitals. Two principal components were extracted from the tobacco-control strength index, reflecting, respectively, the tobacco control policies and efforts, and the willingness and leadership of hospital managers regarding tobacco control. The regression analysis indicated that only the first principal component was significantly correlated with progress in creating smoke-free hospitals (P<0.001), i.e. hospitals with higher scores on the first principal component achieved more in creating smoke-free environments. Tobacco control policies and efforts are critical in creating smoke-free hospitals, and principal component analysis provides a comprehensive and objective tool for evaluating this work.

  1. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    ERIC Educational Resources Information Center

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  2. Optimized principal component analysis on coronagraphic images of the Fomalhaut system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meshkat, Tiffany; Kenworthy, Matthew A.; Quanz, Sascha P.

    We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.
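
    The PSF-subtraction step, modeling the stellar PSF as a linear combination of principal components of reference frames and removing it, can be sketched in one dimension. This is a toy stand-in for the idea, not the NaCo pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)

# Each frame is a common stellar pattern plus frame-to-frame speckle
# noise; the science frame also contains an off-axis "planet".
npix, n_ref = 400, 30
psf = np.exp(-np.linspace(-3, 3, npix) ** 2)   # 1-D stand-in for the PSF
refs = psf + 0.05 * rng.standard_normal((n_ref, npix))
science = psf + 0.05 * rng.standard_normal(npix)
planet_pos = 300
science[planet_pos] += 0.5                     # off-axis companion

# Principal components of the (mean-subtracted) reference stack.
mean_ref = refs.mean(axis=0)
U, s, Vt = np.linalg.svd(refs - mean_ref, full_matrices=False)
k = 5                                          # number of modes to remove

# Model the stellar PSF in the science frame and subtract it.
resid = science - mean_ref
resid -= (resid @ Vt[:k].T) @ Vt[:k]

# The strongest residual pixel should sit at the planet position.
print(int(np.argmax(resid)))
```

Subtracting the PCA model of the star leaves the companion as the dominant residual, which is the contrast gain the abstract describes.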

  3. Dynamic contact guidance of migrating cells

    NASA Astrophysics Data System (ADS)

    Losert, Wolfgang; Sun, Xiaoyu; Guven, Can; Driscoll, Meghan; Fourkas, John

    2014-03-01

    We investigate the effects of nanotopographical surfaces on the cell migration and cell shape dynamics of the amoeba Dictyostelium discoideum. Amoeboid motion exhibits significant contact guidance along surfaces with nanoscale ridges or grooves. We show quantitatively that nanoridges spaced 1.5 μm apart exhibit the greatest contact guidance efficiency. Using principal component analysis, we characterize the dynamics of the cell shape modulated by the coupling between the cell membrane and ridges. We show that motion parallel to the ridges is enhanced, while turning, at the largest spatial scales, is suppressed. Since protrusion dynamics are principally governed by actin dynamics, we imaged the actin polymerization of cells on ridges. We found that actin polymerization occurs preferentially along nanoridges in a "monorail"-like fashion. The ridges then provide us with a tool to study actin dynamics in an effectively reduced-dimensional system.

  4. Hybrid Electrostatic/Flextensional Mirror for Lightweight, Large-Aperture, and Cryogenic Space Telescopes

    NASA Technical Reports Server (NTRS)

    Patrick, Brian; Moore, James; Hackenberger, Wesley; Jiang, Xiaoning

    2013-01-01

    A lightweight, cryogenically capable, scalable, deformable mirror has been developed for space telescopes. This innovation makes use of polymer-based membrane mirror technology to enable large-aperture mirrors that can be easily launched and deployed. The key component of this innovation is a lightweight, large-stroke, cryogenic actuator array that combines the high degree of mirror figure control needed with a large actuator influence function. The latter aspect of the innovation allows membrane mirror figure correction with a relatively low actuator density, preserving the lightweight attributes of the system. The principal components of this technology are lightweight, low-profile, high-stroke, cryogenic-capable piezoelectric actuators based on PMN-PT (piezoelectric lead magnesium niobate-lead titanate) single-crystal configured in a flextensional actuator format; high-quality, low-thermal-expansion polymer membrane mirror materials developed by NeXolve; and electrostatic coupling between the membrane mirror and the piezoelectric actuator assembly to minimize problems such as actuator print-through.

  5. [A study of Boletus bicolor from different areas using Fourier transform infrared spectrometry].

    PubMed

    Zhou, Zai-Jin; Liu, Gang; Ren, Xian-Pei

    2010-04-01

    It is hard to differentiate the same species of wild-growing mushrooms from different areas by macromorphological features. In this paper, Fourier transform infrared (FTIR) spectroscopy combined with principal component analysis was used to identify 58 samples of Boletus bicolor from five different areas. Based on the fingerprint infrared spectra of the Boletus bicolor samples, principal component analysis was conducted on the 58 spectra in the range of 1 350-750 cm(-1) using the statistical software SPSS 13.0. The accumulated contribution ratio of the first three principal components was 88.87%, so they carried almost all the information in the samples. The two-dimensional projection plot of the first and second principal components showed a satisfactory clustering effect for the classification and discrimination of Boletus bicolor: all samples were divided into five groups with a classification accuracy of 98.3%. The study demonstrated that wild-growing Boletus bicolor from different areas can be identified at the species level by FTIR spectra combined with principal component analysis.

  6. How multi-segmental patterns deviate in spastic diplegia from typically developed.

    PubMed

    Zago, Matteo; Sforza, Chiarella; Bona, Alessia; Cimolin, Veronica; Costici, Pier Francesco; Condoluci, Claudia; Galli, Manuela

    2017-10-01

    The relationship between gait features and coordination in children with Cerebral Palsy is not sufficiently analyzed yet. Principal Component Analysis can help in understanding motion patterns decomposing movement into its fundamental components (Principal Movements). This study aims at quantitatively characterizing the functional connections between multi-joint gait patterns in Cerebral Palsy. 65 children with spastic diplegia aged 10.6 (SD 3.7) years participated in standardized gait analysis trials; 31 typically developing adolescents aged 13.6 (4.4) years were also tested. To determine if posture affects gait patterns, patients were split into Crouch and knee Hyperextension group according to knee flexion angle at standing. 3D coordinates of hips, knees, ankles, metatarsal joints, pelvis and shoulders were submitted to Principal Component Analysis. Four Principal Movements accounted for 99% of global variance; components 1-3 explained major sagittal patterns, components 4-5 referred to movements on frontal plane and component 6 to additional movement refinements. Dimensionality was higher in patients than in controls (p<0.01), and the Crouch group significantly differed from controls in the application of components 1 and 4-6 (p<0.05), while the knee Hyperextension group in components 1-2 and 5 (p<0.05). Compensatory strategies of children with Cerebral Palsy (interactions between main and secondary movement patterns), were objectively determined. Principal Movements can reduce the effort in interpreting gait reports, providing an immediate and quantitative picture of the connections between movement components. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A reduction in ag/residential signature conflict using principal components analysis of LANDSAT temporal data

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Borden, F. Y.

    1977-01-01

    Methods to accurately delineate the types of land cover in the urban-rural transition zone of metropolitan areas were considered. The application of principal components analysis to multidate LANDSAT imagery was investigated as a means of reducing the overlap between residential and agricultural spectral signatures. The statistical concepts of principal components analysis were discussed, as well as the results of this analysis when applied to multidate LANDSAT imagery of the Washington, D.C. metropolitan area.

  8. Constrained Principal Component Analysis: Various Applications.

    ERIC Educational Resources Information Center

    Hunter, Michael; Takane, Yoshio

    2002-01-01

    Provides example applications of constrained principal component analysis (CPCA) that illustrate the method on a variety of contexts common to psychological research. Two new analyses, decompositions into finer components and fitting higher order structures, are presented, followed by an illustration of CPCA on contingency tables and the CPCA of…

  9. Closed-ecology life support systems /CELSS/ for long-duration, manned missions

    NASA Technical Reports Server (NTRS)

    Modell, M.; Spurlock, J. M.

    1979-01-01

    Studies were conducted to scope the principal areas of technology that can contribute to the development of closed-ecology life support systems (CELSS). Such systems may be required for future space activities, such as space stations, manufacturing facilities, or colonies. A major feature of CELSS is the regeneration of food from carbon in waste materials. Several processes, using biological and/or physico-chemical components, have been postulated for closing the recycle loop. At the present time, limits of available technical information preclude the specification of an optimum scheme. Nevertheless, the most significant technical requirements can be determined by way of an iterative procedure of formulating, evaluating, and comparing various closed-system scenarios. The functions, features, and applications of this systems-engineering procedure are discussed.

  10. Energy Savings in Cellular Networks Based on Space-Time Structure of Traffic Loads

    NASA Astrophysics Data System (ADS)

    Sun, Jingbo; Wang, Yue; Yuan, Jian; Shan, Xiuming

    Since most of the energy consumed by the telecommunication infrastructure is due to Base Transceiver Stations (BTSs), switching off BTSs when the traffic load is low has been recognized as an effective way of saving energy. In this letter, an energy saving scheme is proposed to minimize the number of active BTSs based on the space-time structure of traffic loads as determined by principal component analysis. Compared to existing methods, our approach models traffic loads more accurately and has a much smaller input size. As it is implemented in an off-line manner, our scheme also avoids excessive communication and computing overheads. Simulation results show that the proposed method has a comparable performance in energy savings.
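
    The premise that traffic loads have low-dimensional space-time structure is easy to check with PCA: if stations share a common diurnal profile, a single principal component captures most of the variance. A synthetic sketch (the traffic model below is illustrative, not the letter's data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hourly traffic of 50 base stations over one week: every station follows
# a common diurnal profile scaled by its own magnitude, plus noise.
hours = np.arange(24 * 7)
diurnal = 1 + np.sin(2 * np.pi * (hours % 24) / 24 - np.pi / 2)
scales = rng.uniform(0.5, 2.0, size=50)
traffic = np.outer(scales, diurnal) + 0.1 * rng.standard_normal((50, hours.size))

# Fraction of variance captured by the first principal component.
centered = traffic - traffic.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s[0] ** 2 / np.sum(s ** 2)
print(round(explained, 2))
```

One component explains nearly all of the variance here, which is why a PCA model of traffic can drive BTS switch-off decisions with a very small input size.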

  11. 1999 LDRD Laboratory Directed Research and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rita Spencer; Kyle Wheeler

    This is the FY 1999 Progress Report for the Laboratory Directed Research and Development (LDRD) Program at Los Alamos National Laboratory. It gives an overview of the LDRD Program, summarizes work done on individual research projects, relates the projects to major Laboratory program sponsors, and provides an index to the principal investigators. Project summaries are grouped by their LDRD component: Competency Development, Program Development, and Individual Projects. Within each component, they are further grouped into nine technical categories: (1) materials science, (2) chemistry, (3) mathematics and computational science, (4) atomic, molecular, optical, and plasma physics, fluids, and particle beams, (5) engineering science, (6) instrumentation and diagnostics, (7) geoscience, space science, and astrophysics, (8) nuclear and particle physics, and (9) bioscience.

  12. Laboratory directed research and development: FY 1997 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, J.; Prono, J.

    1998-05-01

    This is the FY 1997 Progress Report for the Laboratory Directed Research and Development (LDRD) program at Los Alamos National Laboratory. It gives an overview of the LDRD program, summarizes work done on individual research projects, relates the projects to major Laboratory program sponsors, and provides an index to the principal investigators. Project summaries are grouped by their LDRD component: Competency Development, Program Development, and Individual Projects. Within each component, they are further grouped into nine technical categories: (1) materials science, (2) chemistry, (3) mathematics and computational science, (4) atomic and molecular physics and plasmas, fluids, and particle beams, (5) engineering science, (6) instrumentation and diagnostics, (7) geoscience, space science, and astrophysics, (8) nuclear and particle physics, and (9) bioscience.

  13. Fine scale habitat use by age-1 stocked muskellunge and wild northern pike in an upper St. Lawrence River bay

    USGS Publications Warehouse

    Farrell, John M.; Kapuscinski, Kevin L.; Underwood, Harold

    2014-01-01

    Radio telemetry of stocked muskellunge (n = 6) and wild northern pike (n = 6) was used to track late summer and fall movements from a common release point in a known shared nursery bay to test the hypothesis that age-1 northern pike and stocked muskellunge segregate and have different habitat affinities. Water depth, temperature, substrate and aquatic vegetation variables were estimated for each muskellunge (n = 103) and northern pike (n = 131) position and nested ANOVA comparisons by species indicated differences in habitat use. Muskellunge exhibited a greater displacement from the release point and used habitat in shallower water depths (mean = 0.85 m, SE = 0.10) than northern pike (mean = 1.45 m, SE = 0.08). Both principal components analysis (PCA) and principal components ordination (PCO) were used to interpret underlying gradients relative to fish positions in two-dimensional space. Our analysis indicated that a separation of age-1 northern pike and muskellunge occurred 7 d post-release. This first principal component explained 48% of the variation in habitat use. Northern pike locations were associated with deeper habitats that generally had softer silt substrates and dense submersed vegetation. Muskellunge locations post-acclimation showed greater association with shallower habitats containing firmer sandy and clay substrates and emergent vegetation. The observed differences in habitat use suggest that fine-scale ecological separation occurred between these stocked muskellunge and wild northern pike, but small sample sizes and potential for individual variation limit extension of these conclusions. Further research is needed to determine if these patterns exist between larger samples of fishes over a greater range of habitats.

  14. Principal component analysis vs. self-organizing maps combined with hierarchical clustering for pattern recognition in volcano seismic spectra

    NASA Astrophysics Data System (ADS)

    Unglert, K.; Radić, V.; Jellinek, A. M.

    2016-06-01

    Variations in the spectral content of volcano seismicity related to changes in volcanic activity are commonly identified manually in spectrograms. However, long time series of monitoring data at volcano observatories require tools to facilitate automated and rapid processing. Techniques such as self-organizing maps (SOM) and principal component analysis (PCA) can help to quickly and automatically identify important patterns related to impending eruptions. For the first time, we evaluate the performance of SOM and PCA on synthetic volcano seismic spectra constructed from observations during two well-studied eruptions at Kīlauea Volcano, Hawai'i, that include features observed in many volcanic settings. In particular, our objective is to test which of the techniques can best retrieve a set of three spectral patterns that we used to compose a synthetic spectrogram. We find that, without a priori knowledge of the given set of patterns, neither SOM nor PCA can directly recover the spectra. We thus test hierarchical clustering, a commonly used method, to investigate whether clustering in the space of the principal components and on the SOM, respectively, can retrieve the known patterns. Our clustering method applied to the SOM fails to detect the correct number and shape of the known input spectra. In contrast, clustering of the data reconstructed by the first three PCA modes reproduces these patterns and their occurrence in time more consistently. This result suggests that PCA in combination with hierarchical clustering is a powerful practical tool for automated identification of characteristic patterns in volcano seismic spectra. Our results indicate that, in contrast to PCA, common clustering algorithms may not be ideal to group patterns on the SOM and that it is crucial to evaluate the performance of these tools on a control dataset prior to their application to real data.
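    The reconstruct-then-cluster idea above can be sketched with a toy example. The following is a minimal illustration, not the authors' code: a synthetic spectrogram is composed from three invented spectral patterns, PCA is computed via the SVD, and reconstructions from one versus three modes are compared; all sizes, patterns, and noise levels are assumptions made for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    freqs = np.linspace(0, 10, 200)

    # Three known spectral patterns (Gaussian peaks at different frequencies),
    # standing in for the synthetic input spectra of the study.
    patterns = np.stack([np.exp(-(freqs - c) ** 2) for c in (2.0, 5.0, 8.0)])

    # Synthetic spectrogram: 300 time windows, each dominated by one pattern.
    labels = rng.integers(0, 3, size=300)
    X = patterns[labels] + 0.01 * rng.standard_normal((300, 200))

    # PCA via SVD of the mean-centred data.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)

    def reconstruct(k):
        """Data reconstructed from the first k PCA modes (done before clustering)."""
        return (U[:, :k] * s[:k]) @ Vt[:k] + X.mean(axis=0)

    err3 = np.linalg.norm(X - reconstruct(3))
    err1 = np.linalg.norm(X - reconstruct(1))
    print(f"variance in first 3 modes: {explained[:3].sum():.3f}")
    print(f"reconstruction error, 3 modes vs 1 mode: {err3:.2f} vs {err1:.2f}")
    ```

    With three underlying patterns, the first few modes absorb essentially all the variance, so clustering the 3-mode reconstruction (rather than raw spectra) works on a nearly noise-free signal.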

  15. A measure for objects clustering in principal component analysis biplot: A case study in inter-city buses maintenance cost data

    NASA Astrophysics Data System (ADS)

    Ginanjar, Irlandia; Pasaribu, Udjianna S.; Indratno, Sapto W.

    2017-03-01

    This article presents the application of the principal component analysis (PCA) biplot for the needs of data mining. It aims to simplify and objectify methods for clustering objects in a PCA biplot. The novelty of this paper is a measure that can be used to objectify object clustering in a PCA biplot. The orthonormal eigenvectors, which are the coefficients of a principal component model, represent an association between the principal components and the initial variables. The existence of this association is a valid ground for clustering objects based on their principal-axis values: if m principal axes are used in the PCA, the objects can be classified into 2^m clusters. The inter-city buses are clustered based on maintenance-cost data using a two-axis PCA biplot, yielding four groups. The first group is buses with high maintenance costs, especially for lube and brake canvas. The second group is buses with high maintenance costs, especially for tires and filters. The third group is buses with low maintenance costs, especially for lube and brake canvas. The fourth group is buses with low maintenance costs, especially for tires and filters.
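    The sign-based clustering rule (m principal axes giving 2^m clusters) can be sketched directly. This is an illustrative implementation on invented data, not the paper's bus maintenance records:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for a maintenance-cost table: 40 buses x 6 cost items.
    X = rng.standard_normal((40, 6)) * [5, 3, 2, 2, 1, 1]

    # Principal axes from the centred data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)

    m = 2                       # number of principal axes kept, as in the paper
    scores = Xc @ Vt[:m].T      # biplot coordinates of each bus

    # Cluster label from the sign pattern of the m scores: 2**m possible clusters,
    # i.e. the four quadrants of the two-axis biplot.
    clusters = (scores > 0) @ (2 ** np.arange(m))
    print("cluster sizes:", np.bincount(clusters, minlength=2 ** m))
    ```

    With m = 2 the labels are the biplot quadrants, matching the four high/low groups described in the abstract.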

  16. Survey to Identify Substandard and Falsified Tablets in Several Asian Countries with Pharmacopeial Quality Control Tests and Principal Component Analysis of Handheld Raman Spectroscopy.

    PubMed

    Kakio, Tomoko; Nagase, Hitomi; Takaoka, Takashi; Yoshida, Naoko; Hirakawa, Junichi; Macha, Susan; Hiroshima, Takashi; Ikeda, Yukihiro; Tsuboi, Hirohito; Kimura, Kazuko

    2018-06-01

    The World Health Organization has warned that substandard and falsified medical products (SFs) can harm patients and fail to treat the diseases for which they were intended; they affect every region of the world, leading to loss of confidence in medicines, health-care providers, and health systems. Therefore, the development of analytical procedures to detect SFs is extremely important. In this study, we investigated the quality of pharmaceutical tablets containing the antihypertensive candesartan cilexetil, collected in China, Indonesia, Japan, and Myanmar, using the Japanese pharmacopeial analytical procedures for quality control together with principal component analysis (PCA) of Raman spectra obtained with a handheld Raman spectrometer. Some samples showed delayed dissolution and failed to meet the pharmacopeial specification, whereas others failed the assay test. These products appeared to be substandard. Principal component analysis showed that all Raman spectra could be explained in terms of two components: the amount of the active pharmaceutical ingredient and the kinds of excipients. The PCA score plot indicated that one substandard product and the falsified tablets have similar principal components in their Raman spectra, in contrast to authentic products. The locations of samples within the PCA score plot varied according to the source country, suggesting that manufacturers in different countries use different excipients. Our results indicate that the handheld Raman device will be useful for detection of SFs in the field. Principal component analysis of the Raman data clarifies the difference in chemical properties between good-quality products and the SFs that circulate in the Asian market.

  17. Human operator performance of remotely controlled tasks: Teleoperator research conducted at NASA's George C. Marshall Space Flight Center. Executive summary

    NASA Technical Reports Server (NTRS)

    Shields, N., Jr.; Piccione, F.; Kirkpatrick, M., III; Malone, T. B.

    1982-01-01

    The combination of human and machine capabilities into an integrated engineering system is a complex and interactive interdisciplinary undertaking. Human-controlled remote systems, referred to as teleoperators, are reviewed, and the human factors requirements for remotely manned systems are identified. The data were developed in three principal teleoperator laboratories: the visual, manipulator, and mobility laboratories, which are described. Three major sections are identified: (1) remote system components; (2) human operator considerations; and (3) teleoperator system simulation and concept verification.

  18. Kernel PLS-SVC for Linear and Nonlinear Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan

    2003-01-01

    A new methodology for discrimination is proposed, based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS to Fisher's approach to linear discrimination, or equivalently to canonical correlation analysis, is described. This motivates a preference for orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram data.

  19. A unified development of several techniques for the representation of random vectors and data sets

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
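    The central result summarized above — that the minimal-MSE orthonormal representation uses the eigenvectors of the (covariance) operator — can be checked numerically. This sketch, on invented data, compares the reconstruction MSE of the top eigenvectors against a random orthonormal basis of the same dimension:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Random vectors with a known anisotropic covariance.
    A = rng.standard_normal((5, 5))
    X = rng.standard_normal((2000, 5)) @ A.T
    Xc = X - X.mean(axis=0)

    # Eigenvectors of the sample covariance operator, sorted by eigenvalue.
    cov = Xc.T @ Xc / len(Xc)
    w, V = np.linalg.eigh(cov)           # eigh returns ascending order
    V = V[:, ::-1]                       # reorder to descending eigenvalues

    def mse(basis):
        """Mean squared error of projecting the data onto span(basis columns)."""
        P = basis @ basis.T
        R = Xc - Xc @ P
        return np.mean(R ** 2)

    m = 2
    Q, _ = np.linalg.qr(rng.standard_normal((5, m)))  # random orthonormal basis
    print(f"eigenvector basis MSE: {mse(V[:, :m]):.4f}")
    print(f"random basis MSE:      {mse(Q):.4f}")
    ```

    The eigenvector basis never does worse than any other orthonormal basis of the same size, which is the common property shared by the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions.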

  20. Comparison Analysis of Recognition Algorithms of Forest-Cover Objects on Hyperspectral Air-Borne and Space-Borne Images

    NASA Astrophysics Data System (ADS)

    Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.

    2017-12-01

    The basic model for the recognition of natural and anthropogenic objects using their spectral and textural features is described in the problem of hyperspectral air-borne and space-borne imagery processing. The model is based on improvements of the Bayesian classifier, a computational procedure for statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples of various modifications of the Bayesian classifier and the Support Vector Machine method are shown. Examples are provided comparing these classifiers with a metrical classifier that operates by finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method, which is close to the nonparametric Bayesian classifier.

  1. KSC-2011-7880

    NASA Image and Video Library

    2011-11-22

    CAPE CANAVERAL, Fla. – John Grotzinger, project scientist for Mars Science Laboratory (MSL) at the California Institute of Technology in Pasadena, Calif., demonstrates the operation of MSL's rover, Curiosity, during a science briefing at NASA's Kennedy Space Center in Florida, part of preflight activities for the MSL mission. Michael Malin, principal investigator for the Mast Camera and Mars Descent Imager investigations on Curiosity from Malin Space Science Systems, looks on at right. MSL’s components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station in Florida. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Kim Shiflett

  2. Supervisory Services Considered Desirable by Teachers and Principals in "Open Space" Elementary Schools.

    ERIC Educational Resources Information Center

    Kleparchuk, Harry

    The purpose of this study was to determine the nature of the supervisory functions that both teachers and principals of "open space" elementary schools in the Edmonton Public School System consider desirable in order to improve classroom instruction. A 77-item questionnaire was sent to the principals as well as to the 4th, 5th, and 6th…

  3. Restricted maximum likelihood estimation of genetic principal components and smoothed covariance matrices

    PubMed Central

    Meyer, Karin; Kirkpatrick, Mark

    2005-01-01

    Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
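    The parameter-count reduction quoted above is easy to verify. A quick sketch with k = 8 effects (as in the cattle application) and an illustrative m = 3 components (the value of m is an assumption here, not taken from the paper):

    ```python
    def full_params(k):
        """Parameters of an unstructured k x k covariance matrix: k(k+1)/2."""
        return k * (k + 1) // 2

    def reduced_params(k, m):
        """Parameters of a reduced-rank fit, m(2k - m + 1)/2: the m largest
        eigenvalues plus their eigenvectors (orthogonality removes redundancy)."""
        return m * (2 * k - m + 1) // 2

    k, m = 8, 3   # eight traits, three genetic principal components (illustrative)
    print(full_params(k), "->", reduced_params(k, m))  # 36 -> 21
    ```

    Note that setting m = k recovers the full count, m(2k - m + 1)/2 = k(k + 1)/2, so the reduced parameterisation is consistent with the unstructured model as a limiting case.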

  4. Recognition of units in coarse, unconsolidated braided-stream deposits from geophysical log data with principal components analysis

    USGS Publications Warehouse

    Morin, R.H.

    1997-01-01

    Returns from drilling in unconsolidated cobble and sand aquifers commonly do not identify lithologic changes that may be meaningful for hydrogeologic investigations. Vertical resolution of saturated, Quaternary, coarse braided-stream deposits is significantly improved by interpreting natural gamma (G), epithermal neutron (N), and electromagnetically induced resistivity (IR) logs obtained from wells at the Capital Station site in Boise, Idaho. Interpretation of these geophysical logs is simplified because these sediments are derived largely from high-gamma-producing source rocks (granitics of the Boise River drainage), contain few clays, and have undergone little diagenesis. Analysis of G, N, and IR data from these deposits with principal components analysis provides an objective means to determine if units can be recognized within the braided-stream deposits. In particular, performing principal components analysis on G, N, and IR data from eight wells at Capital Station (1) allows the variable system dimensionality to be reduced from three to two by selecting the two eigenvectors with the greatest variance as axes for principal component scatterplots, (2) generates principal components with interpretable physical meanings, (3) distinguishes sand from cobble-dominated units, and (4) provides a means to distinguish between cobble-dominated units.

  5. Analysis and Evaluation of the Characteristic Taste Components in Portobello Mushroom.

    PubMed

    Wang, Jinbin; Li, Wen; Li, Zhengpeng; Wu, Wenhui; Tang, Xueming

    2018-05-10

    To identify the characteristic taste components of the common cultivated mushroom (brown; Portobello), Agaricus bisporus, taste components in the stipe and pileus of Portobello mushroom harvested at different growth stages were extracted and identified, and principal component analysis (PCA) and taste active value (TAV) were used to reveal the characteristic taste components during each of the growth stages of Portobello mushroom. In the stipe and pileus, 20 and 14 different principal taste components were identified, respectively, and they were considered as the principal taste components of Portobello mushroom fruit bodies, which included most amino acids and 5'-nucleotides. Some taste components that were found at high levels, such as lactic acid and citric acid, were not detected as Portobello mushroom principal taste components through PCA. However, due to their high content, Portobello mushroom could be used as a source of organic acids. The PCA and TAV results revealed that 5'-GMP, glutamic acid, malic acid, alanine, proline, leucine, and aspartic acid were the characteristic taste components of Portobello mushroom fruit bodies. Portobello mushroom was also found to be rich in protein and amino acids, so it might also be useful in the formulation of nutraceuticals and functional food. The results in this article could provide a theoretical basis for understanding and regulating the characteristic flavor components synthesis process of Portobello mushroom. © 2018 Institute of Food Technologists®.

  6. Estimation of absolute solvent and solvation shell entropies via permutation reduction

    NASA Astrophysics Data System (ADS)

    Reinhard, Friedemann; Grubmüller, Helmut

    2007-01-01

    Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N³) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained. Moreover, our approach renders diffusive systems accessible to improved fit functions.
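    The permutation-reduction step — relabeling identical solvent particles in each frame to minimize the distance to a reference configuration — can be sketched as follows. This toy version brute-forces the assignment over a tiny 1-D "solvent" (positions and sizes are invented); it is an illustration of the combinatorial problem, not the authors' implementation:

    ```python
    from itertools import permutations

    def relabel(frame, reference):
        """Return `frame` reordered by the permutation minimizing the total
        squared distance to `reference` (brute force, feasible only for tiny N)."""
        n = len(frame)

        def cost(perm):
            return sum((frame[p] - reference[i]) ** 2 for i, p in enumerate(perm))

        best = min(permutations(range(n)), key=cost)
        return [frame[p] for p in best]

    # 1-D toy solvent: reference positions and a frame where labels were swapped.
    reference = [0.0, 1.0, 2.0, 3.0]
    frame = [3.1, 0.05, 2.2, 0.9]       # same particles, permuted labels
    print(relabel(frame, reference))    # -> [0.05, 0.9, 2.2, 3.1]
    ```

    A real implementation would replace the factorial brute force with an O(N³) linear-assignment solver (e.g. SciPy's `linear_sum_assignment`), which is exactly the equivalence the paper exploits.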

  7. Applications of principal component analysis to breath air absorption spectra profiles classification

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Y.

    2015-12-01

    The results of numerical simulations applying principal component analysis to the absorption spectra of breath air from patients with pulmonary diseases are presented. Various methods of experimental-data preprocessing are analyzed.

  8. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on eigenvalue and eigenvector analysis, principal component analysis identifies the subspace of main components from a set of parameters that is sufficient to characterize the whole set. Interpreting the data as a cloud of points, geometrical transformations find the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (found by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps select from a set of items only the most relevant ones, those that cover the variation of the whole data set. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, identified 7 principal components, or main items, which significantly simplifies the further statistical analysis.
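    The 28-items-to-7-components reduction described above amounts to keeping the smallest set of components that covers most of the variance. A minimal sketch on invented questionnaire data (200 respondents, 28 items driven by 7 latent factors — all sizes are assumptions for the illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic questionnaire: 200 respondents x 28 items, driven by 7 latent factors.
    latent = rng.standard_normal((200, 7))
    loadings = rng.standard_normal((7, 28))
    X = latent @ loadings + 0.1 * rng.standard_normal((200, 28))

    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)

    # Smallest number of components covering 99% of the variance.
    n_keep = int(np.searchsorted(explained, 0.99)) + 1
    print("components kept:", n_keep)
    ```

    Because the synthetic items are generated from 7 factors plus small noise, the variance threshold recovers roughly that many components, mirroring the questionnaire example in the abstract.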

  9. On Using the Average Intercorrelation Among Predictor Variables and Eigenvector Orientation to Choose a Regression Solution.

    ERIC Educational Resources Information Center

    Mugrage, Beverly; And Others

    Three ridge regression solutions are compared with ordinary least squares regression and with principal components regression using all components. Ridge regression, particularly the Lawless-Wang solution, outperformed ordinary least squares regression and the principal components solution on the criteria of stability of coefficients and closeness…
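    The instability that ridge regression addresses shows up whenever predictors are nearly collinear. A minimal closed-form sketch on invented data (the Lawless-Wang solution chooses the ridge parameter from the data; here a fixed illustrative λ is used instead):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Collinear predictors: x2 is nearly a copy of x1.
    n = 100
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.01 * rng.standard_normal(n)
    X = np.column_stack([x1, x2])
    y = x1 + x2 + rng.standard_normal(n)

    def ridge(X, y, lam):
        """Closed-form ridge estimate; lam = 0 gives ordinary least squares."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    b_ols = ridge(X, y, 0.0)
    b_ridge = ridge(X, y, 1.0)
    print("OLS:  ", b_ols)     # typically inflated, unstable under collinearity
    print("ridge:", b_ridge)   # shrunk toward a stable solution near (1, 1)
    ```

    The ridge penalty always shrinks the coefficient norm relative to OLS while leaving the well-identified direction (here, the sum of the two coefficients) nearly unchanged — the stability criterion on which the compared solutions were judged.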

  10. A Note on McDonald's Generalization of Principal Components Analysis

    ERIC Educational Resources Information Center

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  11. Free-energy landscape, principal component analysis, and structural clustering to identify representative conformations from molecular dynamics simulations: the myoglobin case.

    PubMed

    Papaleo, Elena; Mereghetti, Paolo; Fantucci, Piercarlo; Grandori, Rita; De Gioia, Luca

    2009-01-01

    Several molecular dynamics (MD) simulations were used to sample conformations in the neighborhood of the native structure of holo-myoglobin (holo-Mb), collecting trajectories spanning 0.22 µs at 300 K. Principal component analysis (PCA) and free-energy landscape (FEL) analyses were carried out, integrated by cluster analysis performed on the positions and structures of the individual helices of the globin fold. The coherence between the different structural clusters and the basins of the FEL, together with the convergence of parameters derived by PCA, indicates that an accurate description of the Mb conformational space around the native state was achieved by multiple MD trajectories spanning at least 0.14 µs. The integration of FEL, PCA, and structural clustering was shown to be a very useful approach to gain an overall view of the conformational landscape accessible to a protein and to identify representative protein substates. This method could also be used to investigate the conformational and dynamical properties of Mb apo forms, mutants, or deletion variants, in which greater conformational variability is expected and, therefore, identification of representative substates from the simulations is relevant to disclose structure-function relationships.

  12. CLUSFAVOR 5.0: hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles

    PubMed Central

    Peterson, Leif E

    2002-01-01

    CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816

  13. Using Graph Components Derived from an Associative Concept Dictionary to Predict fMRI Neural Activation Patterns that Represent the Meaning of Nouns.

    PubMed

    Akama, Hiroyuki; Miyake, Maki; Jung, Jaeyoung; Murphy, Brian

    2015-01-01

    In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves the conventional Jaccard and/or Simpson indices, and reconciles both the geodesic information (random walk) and co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients through the application of linguistic graph information to neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.

  14. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    PubMed

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging.

  15. Beyond the Normalized Difference Vegetation Index (NDVI): Developing a Natural Space Index for population-level health research.

    PubMed

    Rugel, Emily J; Henderson, Sarah B; Carpiano, Richard M; Brauer, Michael

    2017-11-01

    Natural spaces can provide psychological benefits to individuals, but population-level epidemiologic studies have produced conflicting results. Refining current exposure-assessment methods is necessary to advance our understanding of population health and to guide the design of health-promoting urban forms. The aim of this study was to develop a comprehensive Natural Space Index that robustly models potential exposure based on the presence, form, accessibility, and quality of multiple forms of greenspace (e.g., parks and street trees) and bluespace (e.g., oceans and lakes). The index was developed for greater Vancouver, Canada. Greenness presence was derived from remote sensing (NDVI/EVI); forms were extracted from municipal and private databases; and accessibility was based on restrictions such as private ownership. Quality appraisals were conducted for 200 randomly sampled parks using the Public Open Space Desktop Appraisal Tool (POSDAT). Integrating these measures in GIS, exposure was assessed for 60,242 postal codes using 100- to 1,600-m buffers based on hypothesized pathways to mental health. A single index was then derived using principal component analysis (PCA). Comparing NDVI with alternate approaches for assessing natural space resulted in widely divergent results, with quintile rankings shifting for 22-88% of postal codes, depending on the measure. Overall park quality was fairly low (mean of 15 on a scale of 0-45), with no significant difference seen by neighborhood-level household income. The final PCA identified three main sets of variables, with the first two components explaining 68% of the total variance. The first component was dominated by the percentages of public and private greenspace and bluespace and public greenspace within 250 m, while the second component was driven by lack of access to bluespace within 1 km. Many current approaches to modeling natural space may misclassify exposures and have limited specificity. The Natural Space Index represents a novel approach at a regional scale with application to urban planning and policy-making. Copyright © 2017 Elsevier Inc. All rights reserved.
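    Deriving a single composite index from several standardized exposure measures via PCA, as described above, can be sketched briefly. The indicator names and correlations below are invented stand-ins for the presence/form/accessibility/quality measures, not the study's variables:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic postal-code indicators (hypothetical names): rows = postal codes.
    n = 500
    green = rng.random(n)                       # % greenspace within a buffer
    blue = 0.6 * green + 0.4 * rng.random(n)    # bluespace access, correlated
    quality = rng.random(n)                     # park quality score
    X = np.column_stack([green, blue, quality])

    # Standardize each indicator, then take the first principal component
    # scores as the composite index.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    index = Z @ Vt[0]

    var_explained = s[0] ** 2 / np.sum(s ** 2)
    print(f"first component explains {var_explained:.0%} of variance")
    ```

    Standardizing first matters: without it, whichever indicator has the largest raw scale would dominate the first component and hence the index.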

  16. New protocol for dissociating visuospatial working memory ability in reaching space and in navigational space.

    PubMed

    Lupo, Michela; Ferlazzo, Fabio; Aloise, Fabio; Di Nocera, Francesco; Tedesco, Anna Maria; Cardillo, Chiara; Leggio, Maria

    2018-04-27

    Several studies have demonstrated that the processing of visuospatial memory for locations in reaching space and in navigational space is supported by independent systems, and that the coding of visuospatial information depends on the modality of the presentation (i.e., sequential or simultaneous). However, these lines of evidence and the most common neuropsychological tests used by clinicians to investigate visuospatial memory have several limitations (e.g., they are unable to analyze all the subcomponents of this function and are not directly comparable). Therefore, we developed a new battery of tests that is able to investigate these subcomponents. We recruited 71 healthy subjects who underwent sequential and simultaneous navigational tests by using an innovative sensorized platform, as well as comparable paper tests to evaluate the same components in reaching space (Exp. 1). Consistent with the literature, the principal-component method of analysis used in this study demonstrated the presence of distinct memory for sequences in different portions of space, but no distinction was found for simultaneous presentation, suggesting that different modalities of eye gaze exploration are used when subjects have to perform different types of tasks. For this purpose, an infrared Tobii Eye-Tracking X50 system was used in both spatial conditions (Exp. 2), showing that a clear effect of the presentation modality was due to the specific strategy used by subjects to explore the stimuli in space. Given these findings, the neuropsychological battery established in the present study allows us to show basic differences in the normal coding of stimuli, which can explain the specific visuospatial deficits found in various neurological conditions.

  17. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    PubMed

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability, and acidity, of fermented food products such as cow-milk curd and soymilk curd, idli, sauerkraut, and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
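    Modelling overall quality as a least-squares function of principal component scores, as in the abstract, can be sketched on invented sensory data (the panel size, attribute count, and weights below are assumptions for the illustration, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic sensory panel: 60 samples x 6 attributes (color, texture, flavor...).
    X = rng.standard_normal((60, 6))
    true_w = np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0])
    quality = X @ true_w + 0.3 * rng.standard_normal(60)

    # Principal component scores of the centred attribute data.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T                     # all component scores

    # Least-squares regression of quality on the component scores.
    coef, *_ = np.linalg.lstsq(scores, quality - quality.mean(), rcond=None)
    pred = scores @ coef + quality.mean()
    ss_res = np.sum((quality - pred) ** 2)
    ss_tot = np.sum((quality - quality.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    print(f"R^2 = {r2:.2f}")
    ```

    Because the component scores are uncorrelated by construction, each regression coefficient can be estimated independently — one practical reason PCA scores are a convenient basis for this kind of quality model.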

  18. Design, Analysis and Fabrication of Secondary Structural Components for the Habitat Demonstration Unit-Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Smith, Russell W.; Langford, William M.

    2012-01-01

    In support of NASA's Habitat Demonstration Unit - Deep Space Habitat prototype, a number of evolved structural sections were designed, fabricated, analyzed and installed in the 5-meter-diameter prototype. The hardware consisted of three principal structural sections and included the development of novel fastener insert concepts. The articles developed consisted of: 1) one eighth of the primary flooring section, 2) an inner-radius floor beam support which interfaced with and supported (1), 3) two upper hatch section prototypes, and 4) novel insert designs for mechanical fastener attachments. Advanced manufacturing approaches were utilized in the fabrication of the components. The structural components were developed using current commercial aircraft constructions as a baseline (for both the flooring components and their associated mechanical fastener inserts). The structural sections utilized honeycomb sandwich panels. The core consisted of 1/8-inch-cell Nomex at 9 lb/cu ft, 0.66 inches thick. The facesheets had 3 plies each, with a thickness of 0.010 inches per ply, made from woven E-glass with epoxy reinforcement. Analysis activities consisted of analytical models as well as initial closed-form calculations. Testing was conducted to help verify analysis model inputs and to facilitate correlation between testing and analysis. Test activities consisted of both 4-point bending tests and compressive core crush sequences. This paper presents an overview of this activity and discusses issues encountered during the various phases of the applied research effort and its relevance to future space-based habitats.

  19. Transport in the Subtropical Lowermost Stratosphere during CRYSTAL-FACE

    NASA Technical Reports Server (NTRS)

    Pittman, Jasna V.; Weinstock, Elliot M.; Oglesby, Robert J.; Sayres, David S.; Smith, Jessica B.; Anderson, James G.; Cooper, Owen R.; Wofsy, Steven C.; Xueref, Irene; Gerbig, Cristoph

    2007-01-01

    We use in situ measurements of water vapor (H2O), ozone (O3), carbon dioxide (CO2), carbon monoxide (CO), nitric oxide (NO), and total reactive nitrogen (NO(y)) obtained during the CRYSTAL-FACE campaign in July 2002 to study summertime transport in the subtropical lowermost stratosphere. We use an objective methodology to distinguish the latitudinal origin of the sampled air masses despite the influence of convection, and we calculate backward trajectories to elucidate their recent geographical history. The methodology consists of exploring the statistical behavior of the data by performing multivariate clustering and agglomerative hierarchical clustering calculations, and projecting cluster groups onto principal component space to identify air masses of like composition and hence presumed origin. The statistically derived cluster groups are then examined in physical space using tracer-tracer correlation plots. Interpretation of the principal component analysis suggests that the variability in the data is accounted for primarily by the mean age of air in the stratosphere, followed by the age of the convective influence, and lastly by the extent of convective influence, potentially related to the latitude of convective injection [Dessler and Sherwood, 2004]. We find that high-latitude stratospheric air is the dominant source region during the beginning of the campaign while tropical air is the dominant source region during the rest of the campaign. Influence of convection from both local and non-local events is frequently observed. The identification of air mass origin is confirmed with backward trajectories, and the behavior of the trajectories is associated with the North American monsoon circulation.

  20. Online monitoring of coffee roasting by proton transfer reaction time-of-flight mass spectrometry (PTR-ToF-MS): towards a real-time process control for a consistent roast profile.

    PubMed

    Wieland, Flurin; Gloess, Alexia N; Keller, Marco; Wetzel, Andreas; Schenker, Stefan; Yeretzian, Chahan

    2012-03-01

    A real-time automated process control tool for coffee roasting is presented to consistently and accurately achieve a targeted roast degree. It is based on online monitoring of volatile organic compounds (VOCs) in the off-gas of a drum roaster by proton transfer reaction time-of-flight mass spectrometry at high time resolution (1 Hz), high mass resolution (5,500 m/Δm at full width at half-maximum), and high sensitivity (better than parts per billion by volume). Forty-two roasting experiments were performed with the drum roaster operated at a low, medium or high hot-air inlet temperature (i.e., energy input) and the coffee (Arabica from Antigua, Guatemala) roasted to a low, medium or dark roast degree. A principal component analysis (PCA) discriminated, for each of the three hot-air inlet temperatures, the roast degree with a resolution better than ±1 Colorette. The 3D space of the first three principal components was defined from 23 mass spectral profiles of VOCs and their roast degree at the end point of roasting. This provided a very detailed picture of the evolution of the roasting process and allowed establishment of a predictive model that projects the online-monitored VOC profile of the roaster off-gas in real time onto the PCA space defined by the calibration process and, ultimately, controls the coffee roasting process so as to achieve a target roast degree and a consistent roast.
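The predictive step, projecting a newly monitored VOC profile onto a PC space fixed by calibration, reduces to storing the calibration mean and the leading principal axes. A minimal sketch with synthetic stand-in intensities (the real system uses 23 selected PTR-ToF-MS ion traces):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration set: 42 roasts x 23 VOC ion intensities at end of roast
calib = rng.lognormal(mean=0.0, sigma=0.3, size=(42, 23))

# Define the 3D PC space from the calibration data
mu = calib.mean(axis=0)
U, s, Vt = np.linalg.svd(calib - mu, full_matrices=False)
basis = Vt[:3]                       # first three principal axes

# Project a newly monitored off-gas VOC profile into the calibrated PC space
new_profile = rng.lognormal(0.0, 0.3, size=23)
coords = basis @ (new_profile - mu)  # 3D coordinates used for roast-degree prediction
```

In an online setting, `basis` and `mu` are fixed once by calibration and applied to each 1 Hz spectrum as it arrives.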

  1. Transport in the Subtropical Lowermost Stratosphere during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers-Florida Area Cirrus Experiment

    NASA Technical Reports Server (NTRS)

    Pittman, Jasna V.; Weinstock, Elliot M.; Oglesby, Robert J.; Sayres, David S.; Smith, Jessica B.; Anderson, James G.; Cooper, Owen R.; Wofsy, Steven C.; Xueref, Irene; Gerbig, Cristoph

    2007-01-01

    We use in situ measurements of water vapor (H2O), ozone (O3), carbon dioxide (CO2), carbon monoxide (CO), nitric oxide (NO), and total reactive nitrogen (NOy) obtained during the CRYSTAL-FACE campaign in July 2002 to study summertime transport in the subtropical lowermost stratosphere. We use an objective methodology to distinguish the latitudinal origin of the sampled air masses despite the influence of convection, and we calculate backward trajectories to elucidate their recent geographical history. The methodology consists of exploring the statistical behavior of the data by performing multivariate clustering and agglomerative hierarchical clustering calculations and projecting cluster groups onto principal component space to identify air masses of like composition and hence presumed origin. The statistically derived cluster groups are then examined in physical space using tracer-tracer correlation plots. Interpretation of the principal component analysis suggests that the variability in the data is accounted for primarily by the mean age of air in the stratosphere, followed by the age of the convective influence, and last by the extent of convective influence, potentially related to the latitude of convective injection (Dessler and Sherwood, 2004). We find that high-latitude stratospheric air is the dominant source region during the beginning of the campaign while tropical air is the dominant source region during the rest of the campaign. Influence of convection from both local and nonlocal events is frequently observed. The identification of air mass origin is confirmed with backward trajectories, and the behavior of the trajectories is associated with the North American monsoon circulation.
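The statistical core of the methodology, clustering and then projecting cluster groups onto principal component space, can be illustrated with a toy NumPy sketch. The tracer values below are invented stand-ins for two air-mass origins, and a minimal single-linkage agglomerative clustering stands in for the paper's multivariate clustering calculations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tracer measurements (rows = air samples; cols = O3, CO2, CO, NOy)
tropical = rng.normal([300.0, 371.0, 40.0, 1.0], 0.5, size=(20, 4))
high_lat = rng.normal([600.0, 368.0, 20.0, 3.0], 0.5, size=(20, 4))
X = np.vstack([tropical, high_lat])

# Standardize, then project onto the two leading principal components
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pc = Z @ Vt[:2].T

# Minimal single-linkage agglomerative clustering down to two groups
clusters = [[i] for i in range(len(pc))]
while len(clusters) > 2:
    best, pair = np.inf, None
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            d = min(np.linalg.norm(pc[i] - pc[j])
                    for i in clusters[a] for j in clusters[b])
            if d < best:
                best, pair = d, (a, b)
    a, b = pair
    clusters[a] += clusters.pop(b)
```

With well-separated compositions, the two recovered clusters correspond to the two presumed origins; real tracer data would of course be noisier and higher-dimensional.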

  2. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatically recognizing people through visual surveillance is necessary for public security. Human gait-based identification aims to recognize a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis and temporal-space analysis is proposed. Using background estimation and image subtraction, a binary image sequence is obtained from the surveillance video. By comparing each pair of adjacent images in the gait sequence, a sequence of binary difference images is obtained; each difference image indicates how the body moves while the person is walking. Temporal-space features are extracted from the difference image sequence as follows: projecting a difference image onto the Y axis or X axis yields two vectors, and projecting every difference image in the sequence yields two matrices that together characterize one walking style. Two-dimensional principal component analysis (2DPCA) is then used to transform these two matrices into two vectors while preserving maximum separability. Finally, the similarity of two human gait sequences is calculated as the Euclidean distance between the two vectors. The performance of the method is illustrated using the CASIA Gait Database.
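A simplified sketch of the projection and 2DPCA steps, assuming random binary frames in place of real CASIA silhouettes. The `gait_signature` helper is hypothetical and compresses a sequence into one feature vector; the actual method's details differ:

```python
import numpy as np

rng = np.random.default_rng(3)

def gait_signature(frames):
    """Project each inter-frame difference image onto the X and Y axes,
    stack the projections over time, and reduce each matrix to a vector
    with a 2DPCA-style projection onto its leading eigenvector."""
    diffs = np.abs(np.diff(frames, axis=0))      # binary difference images
    proj_y = diffs.sum(axis=2)                   # (frames-1) x height
    proj_x = diffs.sum(axis=1)                   # (frames-1) x width
    feats = []
    for M in (proj_y, proj_x):
        Mc = M - M.mean(0)
        C = Mc.T @ Mc                            # image covariance matrix
        w, V = np.linalg.eigh(C)
        feats.append(M @ V[:, -1])               # project onto the top axis
    return np.concatenate(feats)

# Hypothetical binary silhouette sequences (20 frames of 32x24 pixels)
seq_a = (rng.random((20, 32, 24)) > 0.5).astype(float)
seq_b = (rng.random((20, 32, 24)) > 0.5).astype(float)

# Similarity = Euclidean distance between gait signature vectors
dist = np.linalg.norm(gait_signature(seq_a) - gait_signature(seq_b))
```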

  3. Pepper seed variety identification based on visible/near-infrared spectral technology

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Meng, Zhijun; Fan, Pengfei; Cai, Jichen

    2016-11-01

    Pepper is an important fruit vegetable, and with the expansion of hybrid pepper planting areas, detection of pepper seed purity is especially important. This research used visible/near-infrared (VIS/NIR) spectral technology to identify the variety of single pepper seeds, using the hybrid pepper seeds "Zhuo Jiao NO.3", "Zhuo Jiao NO.4" and "Zhuo Jiao NO.5" as research samples. VIS/NIR spectral data of 80 seeds of each variety were collected, and the original spectral data were pretreated with standard normal variate (SNV) transform, first derivative (FD), and Savitzky-Golay (SG) convolution smoothing methods. Principal component analysis (PCA) was adopted to reduce the dimensionality of the spectral data and extract principal components. According to the pairwise distributions of the first principal component (PC1), the second principal component (PC2), and the third principal component (PC3) in two-dimensional planes, distribution areas of the three pepper seed varieties were delineated in each plane, and the discriminant accuracy of PCA was tested by observing the distribution areas of the validation samples' principal components. The study then combined PCA and linear discriminant analysis (LDA) to identify single pepper seed varieties; results showed that with the FD preprocessing method, the discriminant accuracy for the validation set was 98%, indicating that VIS/NIR spectral technology is feasible for identification of single pepper seed varieties.
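The PCA-plus-LDA chain can be sketched as follows. The "spectra" are synthetic curves standing in for real VIS/NIR measurements, but the steps, SNV pretreatment, projection onto the first principal components, and Fisher discriminant classification, mirror the abstract:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in spectra: 3 varieties x 80 seeds x 100 wavelengths,
# differing in spectral shape (real VIS/NIR data would replace this).
grid = np.linspace(0, 3, 100)
X = np.vstack([np.sin(grid * (1.0 + 0.2 * k)) + rng.normal(0, 0.05, (80, 100))
               for k in range(3)])
y = np.repeat([0, 1, 2], 80)

# SNV pretreatment: center and scale each spectrum individually
X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)

# PCA: project onto the first three principal components
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:3].T

# Fisher LDA on the PC scores
means = np.array([T[y == k].mean(0) for k in range(3)])
Sw = sum((T[y == k] - means[k]).T @ (T[y == k] - means[k]) for k in range(3))
Sb = sum(80 * np.outer(m - T.mean(0), m - T.mean(0)) for m in means)
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
W = np.real(evecs[:, np.argsort(-np.real(evals))[:2]])

# Classify by nearest class mean in the discriminant space
D = T @ W
dmeans = np.array([D[y == k].mean(0) for k in range(3)])
pred = np.argmin(((D[:, None, :] - dmeans[None, :, :])**2).sum(-1), axis=1)
acc = (pred == y).mean()
```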

  4. Analysis of environmental variation in a Great Plains reservoir using principal components analysis and geographic information systems

    USGS Publications Warehouse

    Long, J.M.; Fisher, W.L.

    2006-01-01

    We present a method for spatial interpretation of environmental variation in a reservoir that integrates principal components analysis (PCA) of environmental data with geographic information systems (GIS). To illustrate our method, we used data from a Great Plains reservoir (Skiatook Lake, Oklahoma) with longitudinal variation in physicochemical conditions. We measured 18 physicochemical features, mapped them using GIS, and then calculated and interpreted four principal components. Principal component 1 (PC1) was readily interpreted as longitudinal variation in water chemistry, but the other principal components (PC2-4) were difficult to interpret. Site scores for PC1-4 were calculated in GIS by summing weighted overlays of the 18 measured environmental variables, with the factor loadings from the PCA as the weights. PC1-4 were then ordered into a landscape hierarchy, an emergent property of this technique, which enabled their interpretation. PC1 was interpreted as a reservoir scale change in water chemistry, PC2 was a microhabitat variable of rip-rap substrate, PC3 identified coves/embayments and PC4 consisted of shoreline microhabitats related to slope. The use of GIS improved our ability to interpret the more obscure principal components (PC2-4), which made the spatial variability of the reservoir environment more apparent. This method is applicable to a variety of aquatic systems, can be accomplished using commercially available software programs, and allows for improved interpretation of the geographic environmental variability of a system compared to using typical PCA plots. © Copyright by the North American Lake Management Society 2006.
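The weighted-overlay computation of PC site scores is a loading-weighted sum of the variable layers, which is algebraically identical to computing PCA scores on the cell-by-variable matrix. A sketch with invented rasters and loadings:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical rasters: 18 standardized environmental variables on a 50x40 grid
n_vars, h, w = 18, 50, 40
layers = rng.normal(size=(n_vars, h, w))

# Loadings from a PCA of the same 18 variables (here: random stand-ins)
loadings = rng.normal(size=n_vars)

# GIS-style weighted overlay: each cell's PC score is the loading-weighted
# sum of the variable layers, as in the paper's summing of weighted overlays
pc_raster = np.tensordot(loadings, layers, axes=1)

# Equivalent to scoring the flattened cell-by-variable matrix directly
scores = layers.reshape(n_vars, -1).T @ loadings
```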

  5. Principal Investigator Microgravity Services Role in ISS Acceleration Data Distribution

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin

    1999-01-01

    Measurement of the microgravity acceleration environment on the International Space Station will be accomplished by two accelerometer systems. The Microgravity Acceleration Measurement System will record the quasi-steady microgravity environment, including the influences of aerodynamic drag, vehicle rotation, and venting effects. Measurement of the vibratory/transient regime comprised of vehicle, crew, and equipment disturbances will be accomplished by the Space Acceleration Measurement System-II. Due to the dynamic nature of the microgravity environment and its potential to influence sensitive experiments, Principal Investigators require distribution of microgravity acceleration data in a timely and straightforward fashion. In addition to this timely distribution of the data, long term access to International Space Station microgravity environment acceleration data is required. The NASA Glenn Research Center's Principal Investigator Microgravity Services project will provide the means for real-time and post-experiment distribution of microgravity acceleration data to microgravity science Principal Investigators. Real-time distribution of microgravity environment acceleration data will be accomplished via the World Wide Web. Data packets from the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System-II will be routed from onboard the International Space Station to the NASA Glenn Research Center's Telescience Support Center. Principal Investigator Microgravity Services' ground support equipment located at the Telescience Support Center will be capable of generating a standard suite of acceleration data displays, including various time domain and frequency domain options. These data displays will be updated in real-time and will periodically update images available via the Principal Investigator Microgravity Services web page.

  6. Architectural measures of the cancellous bone of the mandibular condyle identified by principal components analysis.

    PubMed

    Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J

    2003-09-01

    As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.

  7. Factors associated with successful transition among children with disabilities in eight European countries

    PubMed Central

    2017-01-01

    Introduction This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements that have undergone a transition between school environments from 8 European Union member states. Methods Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire and consisted of 41 questions. Information was collected on: parental involvement in their child’s transition, child involvement in transition, child autonomy, school ethos, professionals’ involvement in transition and integrated working, such as, joint assessment, cooperation and coordination between agencies. Survey questions that were designed on a Likert-scale were included in the Principal Components Analysis (PCA), additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Results Four principal components were identified accounting for 48.86% of the variability in the data. Principal component 1 (PC1), ‘child inclusive ethos,’ contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed to 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed to 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as whether other factors that may have influenced transition. 
All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43–7.18, p<0.0001). Discussion To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their families which will provide a holistic approach and remove barriers for learning. PMID:28636649

  8. Factors associated with successful transition among children with disabilities in eight European countries.

    PubMed

    Ravenscroft, John; Wazny, Kerri; Davis, John M

    2017-01-01

    This research paper aims to assess factors reported by parents associated with the successful transition of children with complex additional support requirements that have undergone a transition between school environments from 8 European Union member states. Quantitative data were collected from 306 parents within education systems from 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire and consisted of 41 questions. Information was collected on: parental involvement in their child's transition, child involvement in transition, child autonomy, school ethos, professionals' involvement in transition and integrated working, such as, joint assessment, cooperation and coordination between agencies. Survey questions that were designed on a Likert-scale were included in the Principal Components Analysis (PCA), additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Four principal components were identified accounting for 48.86% of the variability in the data. Principal component 1 (PC1), 'child inclusive ethos,' contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed to 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed to 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as whether other factors may have influenced transition. All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43-7.18, p<0.0001).
To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their families which will provide a holistic approach and remove barriers for learning.

  9. Patient phenotypes associated with outcomes after aneurysmal subarachnoid hemorrhage: a principal component analysis.

    PubMed

    Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch

    2014-03-01

    Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.
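A sketch of the analysis skeleton, PCA with an eigenvalue-greater-than-one retention rule followed by logistic regression on component scores, using invented data with a single latent severity factor (the study's mixed-type correlations and 46 clinical variables are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical stand-in for the cohort: 120 patients, 10 baseline variables,
# four of which load on a shared latent "severity" factor that also drives
# the binary poor-outcome indicator.
n, p = 120, 10
f = rng.normal(size=n)
X = rng.normal(size=(n, p))
X[:, :4] += f[:, None]
outcome = (f + rng.normal(0, 0.5, n) > 0).astype(float)

# PCA on the correlation matrix; keep components with eigenvalue > 1
Z = (X - X.mean(0)) / X.std(0)
evals, evecs = np.linalg.eigh(Z.T @ Z / (n - 1))
scores = Z @ evecs[:, evals > 1.0]

# Logistic regression of outcome on the retained component scores
A = np.column_stack([np.ones(n), scores])
beta = np.zeros(A.shape[1])
for _ in range(3000):                      # plain gradient ascent
    prob = 1 / (1 + np.exp(-A @ beta))
    beta += 0.5 * A.T @ (outcome - prob) / n
acc = ((A @ beta > 0) == (outcome > 0.5)).mean()
```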

  10. Principal components of wrist circumduction from electromagnetic surgical tracking.

    PubMed

    Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E

    2017-02-01

    An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.

  11. A Wideband Autonomous Cognitive Radio Development and Prototyping System

    DTIC Science & Technology

    2017-11-14

    Gain, High Frequency, Circularly Polarized Planar Antenna Arrays for Space Applications”, NASA. 3. C. G. Christodoulou (Co-Principal Investigator...Investigator), “Cognitive Communications for SATCOM”, Space Vehicles (RV) University Grants Program, 04/26/16-04/25/17 ($150K), Air Force Research...Aerospace (Prime Contractor). 2. S. K. Jayaweera (Principal Investigator), “Cognitive Communications for SATCOM”, Space Vehicles (RV) University Grants

  12. Improving the hospital 'soundscape': a framework to measure individual perceptual response to hospital sounds.

    PubMed

    Mackrill, J B; Jennings, P A; Cain, R

    2013-01-01

    Work on the perception of urban soundscapes has generated a number of perceptual models which are proposed as tools to test and evaluate soundscape interventions. However, despite the excessive sound levels and noise within hospital environments, perceptual models have not been developed for these spaces. To address this, a two-stage approach was developed by the authors to create such a model. First, semantics were obtained from listening evaluations which captured the feelings of individuals from hearing hospital sounds. Then, 30 participants rated a range of sound clips representative of a ward soundscape based on these semantics. Principal component analysis extracted a two-dimensional space representing an emotional-cognitive response. The framework enables soundscape interventions to be tested which may improve the perception of these hospital environments.

  13. Direct reconstruction of dark energy.

    PubMed

    Clarkson, Chris; Zunckel, Caroline

    2010-05-28

    An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot be based only on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z≲1 using just SNAP-quality data.
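A toy illustration of the component-counting idea, letting an information criterion decide how many fitted modes the data support. Polynomial modes and a hypothetical linear w(z) stand in for the paper's PCA basis and survey data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Noisy stand-in data for w(z) on a redshift grid
z = np.linspace(0, 1, 40)
w_true = -1 + 0.3 * z                 # hypothetical equation-of-state behavior
data = w_true + rng.normal(0, 0.05, z.size)

best_aic, best_fit, best_k = np.inf, None, 0
for k in range(1, 8):                 # k = number of modes (polynomial terms here)
    B = np.vander(z, k, increasing=True)
    coef, *_ = np.linalg.lstsq(B, data, rcond=None)
    resid = data - B @ coef
    chi2 = np.sum((resid / 0.05) ** 2)
    aic = chi2 + 2 * k                # AIC penalizes extra modes
    if aic < best_aic:
        best_aic, best_fit, best_k = aic, B @ coef, k
```

The criterion keeps only the modes the data genuinely constrain, smoothing over noise rather than fitting it.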

  14. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
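Monte Carlo Filtering itself can be sketched independently of the PCA extension: sample the inputs, split runs by whether the output falls in the subset of interest, and rank inputs by how much their conditional distributions differ. All names and thresholds below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo Filtering sketch: sample inputs, flag "failure" runs, and ask
# which input's distribution differs between behavioral and failing runs.
n = 2000
X = rng.uniform(0, 1, size=(n, 3))               # three controlled inputs
y = X[:, 0] + 0.1 * rng.normal(size=n)           # output driven mostly by input 0
fail = y > 0.8                                   # output subset of interest

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (maximum CDF gap)."""
    grid = np.sort(np.concatenate([a, b]))
    ca = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(ca - cb))

sensitivity = [ks_stat(X[fail, i], X[~fail, i]) for i in range(3)]
most_influential = int(np.argmax(sensitivity))
```

Inputs whose conditional distributions separate strongly are the ones most likely to drive the outputs of interest.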

  15. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  16. A Guide to the Application of Probability Risk Assessment Methodology and Hazard Risk Frequency Criteria as a Hazard Control for the Use of the Mobile Servicing System on the International Space Station

    NASA Astrophysics Data System (ADS)

    D'silva, Oneil; Kerrison, Roger

    2013-09-01

    A key goal in the increased utilization of space robotics is to automate extravehicular manned space activities and thus significantly reduce the potential for catastrophic hazards while simultaneously minimizing the overall costs associated with manned spaceflight. The principal scope of the paper is to evaluate the use of industry-accepted probability risk/safety assessment (PRA/PSA) methodologies and hazard risk frequency criteria as a hazard control. The paper illustrates the applicability of combining the selected probability risk assessment methodology and hazard risk frequency criteria in order to apply the safety controls necessary to allow increased use of the Mobile Servicing System (MSS) robotic system on the International Space Station. It considers factors such as component failure rate reliability, software reliability, periods of operation and dormancy, and fault tree analyses, and their effects on the probability risk assessments. The paper concludes with suggestions for incorporating existing industry risk/safety plans to create an applicable safety process for future activities and programs.

  17. Exploring space-time structure of human mobility in urban space

    NASA Astrophysics Data System (ADS)

    Sun, J. B.; Yuan, J.; Wang, Y.; Si, H. B.; Shan, X. M.

    2011-03-01

    Understanding of human mobility in urban space benefits the planning and provision of municipal facilities and services. Due to the high penetration of cell phones, mobile cellular networks provide information on urban dynamics with a large spatial extent and continuous temporal coverage in comparison with traditional approaches. The original data investigated in this paper were collected by cellular networks in a southern city of China, recording the population distribution by dividing the city into thousands of pixels. The space-time structure of urban dynamics is explored by applying Principal Component Analysis (PCA) to the original data from temporal and spatial perspectives, between which there is a dual relation. Based on the results of the analysis, we have discovered four underlying rules of urban dynamics: low intrinsic dimensionality, three categories of common patterns, dominance of periodic trends, and temporal stability. This implies that the space-time structure can be captured well by remarkably few predictable periodic patterns, temporal or spatial, and that the structure unearthed by PCA evolves stably over time. All these features play a critical role in applications such as forecasting and anomaly detection.
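
    The low-intrinsic-dimensionality finding can be illustrated on a synthetic pixels-by-hours matrix in which every pixel mixes a few shared periodic patterns; PCA then concentrates almost all of the variance in as many components as there are patterns. Everything below (sizes, patterns, noise level) is an illustrative assumption, not the paper's cellular data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_hours = 500, 24 * 7                 # one week of hourly "traffic"

# Two shared temporal patterns: a diurnal cycle and a weekly cycle.
hours = np.arange(n_hours)
diurnal = np.sin(2 * np.pi * hours / 24)
weekly = np.sin(2 * np.pi * hours / (24 * 7))

# Each pixel mixes the patterns with its own weights, plus measurement noise,
# so the pixels-by-hours matrix is approximately rank 2.
W = rng.standard_normal((n_pixels, 2))
X = W @ np.vstack([diurnal, weekly]) + 0.1 * rng.standard_normal((n_pixels, n_hours))

# PCA via SVD of the centered matrix: variance explained per component.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
var_ratio = s ** 2 / np.sum(s ** 2)
top2 = var_ratio[:2].sum()                      # variance captured by 2 components
```

    With only two underlying patterns, `top2` comes out close to 1, which is the kind of low intrinsic dimensionality the paper reports for real cellular data.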

  18. The nature of the mineral component of bone and the mechanism of calcification.

    PubMed

    Glimcher, M J

    1987-01-01

    From the physical chemical standpoint, the formation of a solid phase of Ca-P in bone represents a phase transformation, a process exemplified by the formation of ice from water. Considering the structural complexity and abundance of highly organized macromolecules in the cells and extracellular tissue spaces of mineralized tissues generally, and in bone particularly, it is inconceivable that this phase transformation occurs by homogeneous nucleation, i.e., without the active participation of an organic component acting as a nucleator. This is almost surely true of biologic mineralization in general. Electron micrographs and low-angle neutron and X-ray diffraction studies clearly show that calcification of collagen fibrils occurs in an extremely intimate and highly organized fashion: crystal formation is initiated within the collagen fibrils in the hole zone region, with the long axes (c-axes) of the crystals aligned roughly parallel to the long axis of the fibril within which they are located. Crystals are initially formed in hole zone regions within individual fibrils separated by unmineralized regions. Calcification is initiated in spatially distinct nucleation sites, indicating that such regions within a single, unidirectional fibril represent independent sites for heterogeneous nucleation. Clearly, sites where mineralization is initiated in adjacent collagen fibrils are even further separated, emphasizing even more clearly that progressive calcification of the collagen fibrils, and therefore of the tissue, is characterized principally by increasing numbers of independent nucleation sites within additional hole zone regions of the collagen fibrils. The increase in the mass of Ca-P apatite accrues principally through the formation of more crystals, mostly by secondary nucleation from the crystals initially deposited in the hole zone region. Very little additional growth of the crystals occurs with time; the additional increase in mineral mass is principally the result of an increase in the number of crystals (multiplication), not the size of the crystals (crystal growth). The crystals within the collagen fibers grow in number, and possibly in size, to extend into the overlap zone of the collagen fibrils ("pores"), so that all of the available space within the fibrils, which has possibly expanded in volume from its uncalcified level, is eventually occupied by the mineral crystals. It must be recognized that the calcification of separate tissue components and compartments (collagen, mitochondria, matrix vesicles) must be an independent physical chemical event. (ABSTRACT TRUNCATED AT 400 WORDS)

  19. Chemical imaging of molecular changes in a hydrated single cell by dynamic secondary ion mass spectrometry and super-resolution microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, Xin; Szymanski, Craig; Wang, Zhaoying

    2016-01-01

    Chemical imaging of single cells is important in capturing biological dynamics. Single cell correlative imaging is realized between structured illumination microscopy (SIM) and time-of-flight secondary ion mass spectrometry (ToF-SIMS) using the System for Analysis at the Liquid Vacuum Interface (SALVI), a multimodal microreactor. SIM characterized cells and guided subsequent ToF-SIMS analysis. Dynamic ToF-SIMS provided time- and space-resolved cell molecular mapping. Lipid fragments were identified in the hydrated cell membrane. Principal component analysis was used to elucidate chemical component differences among mouse lung cells that took up zinc oxide nanoparticles. Our results provided submicron chemical spatial mapping for investigations of cell dynamics at the molecular level.

  20. A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series

    NASA Astrophysics Data System (ADS)

    Rovira, F.; Palau, J. L.; Millán, M.

    2009-09-01

    Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. Annual, seasonal and diurnal time variability can be regarded as trivial persistence in the variability of some meteorological magnitudes (e.g., global radiation, air temperature above the surface, etc.). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other, non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology for analysing non-steady principal harmonics in noisy meteorological time series. The methodology combines the continuous wavelet transform with the development of a parametric model that includes the time evolution of the principal and most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components. Once the level of significance of the different waves composing the parametric model has been assured, the statistically significant principal harmonics (in time) of the original time series can be obtained by using the Fourier transform of the modelled signal.

    Acknowledgements: The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (València, Spain). This study has been partially funded by the European Commission (FP VI, Integrated Project CIRCE - No. 036961) and by the Ministerio de Ciencia e Innovación, research projects "TRANSREG" (CGL2007-65359/CLI) and "GRACCIE" (CSD2007-00067, Program CONSOLIDER-INGENIO 2010).
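
    The reconstruction step, keeping only significant waves and summing them from their amplitude, frequency and phase, can be sketched with a plain FFT; the paper's Student's t significance test is replaced here by a crude amplitude cutoff, and the data are synthetic, so this is only an illustration of the idea:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)

# Synthetic "measurement": two harmonics buried in noise.
clean = 2.0 * np.sin(2 * np.pi * 5 * t / n + 0.3) \
      + 1.0 * np.sin(2 * np.pi * 17 * t / n - 1.1)
noisy = clean + 0.5 * rng.standard_normal(n)

# Amplitude spectrum; retain only waves above a crude cutoff
# (the paper uses a Student's t confidence test on the amplitudes instead).
spec = np.fft.rfft(noisy)
amp = 2.0 * np.abs(spec) / n
keep = amp > 0.5

# Rebuild the signal as a finite sum of the retained waves, each defined by
# its amplitude, frequency and phase.
model = np.zeros(n)
for k in np.flatnonzero(keep):
    model += amp[k] * np.cos(2 * np.pi * k * t / n + np.angle(spec[k]))

rms_err = np.sqrt(np.mean((model - clean) ** 2))
```

    Here `keep` retains exactly the two true harmonics, and `model` matches the clean signal to within the noise leaking into those two bins.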

  1. Introduction to uses and interpretation of principal component analyses in forest biology.

    Treesearch

    J. G. Isebrands; Thomas R. Crow

    1975-01-01

    The application of principal component analysis for interpretation of multivariate data sets is reviewed with emphasis on (1) reduction of the number of variables, (2) ordination of variables, and (3) applications in conjunction with multiple regression.

  2. Principal component analysis of phenolic acid spectra

    USDA-ARS?s Scientific Manuscript database

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  3. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern formation is based on the decomposition of an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and optimal estimates of the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.

  4. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied to the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented as principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, while the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 maps of the tumor-bearing mice were in good agreement with the actual tumor locations. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from the nearby fluorescence noise of the liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.

  5. Geochemical differentiation processes for arc magma of the Sengan volcanic cluster, Northeastern Japan, constrained from principal component analysis

    NASA Astrophysics Data System (ADS)

    Ueki, Kenta; Iwamori, Hikaru

    2017-10-01

    In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks sampled from 17 different volcanoes in the cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These represented 59%, 20%, and 6%, respectively, of the variance in the entire compositional range, indicating that magma mixing accounted for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.

  6. Assessment of Supportive, Conflicted, and Controlling Dimensions of Family Functioning: A Principal Components Analysis of Family Environment Scale Subscales in a College Sample.

    ERIC Educational Resources Information Center

    Kronenberger, William G.; Thompson, Robert J., Jr.; Morrow, Catherine

    1997-01-01

    A principal components analysis of the Family Environment Scale (FES) (R. Moos and B. Moos, 1994) was performed using 113 undergraduates. Research supported 3 broad components encompassing the 10 FES subscales. These results supported previous research and the generalization of the FES to college samples. (SLD)

  7. Time series analysis of collective motions in proteins

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.

    2004-01-01

    The dynamics of the α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm-1 range and correlate well with the principal component indices, as well as with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions at two successive sampling times, showing the mode's tendency to stay close to a minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, driven by a random shock term considerably smaller than that of the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate, "semiconstrained" modes are explained by asserting that their random walk behavior is not completely free but confined between energy barriers.
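
    The intra-minimum model described above, a damped oscillation driven by random shocks, is the classic autoregressive picture: the frequency and damping factor fall out of the fitted AR coefficients. A numpy sketch with made-up parameters (not the paper's fitted values, and without the moving average terms):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate one "principal mode" inside a minimum as an AR(2) process,
# i.e. a damped oscillation driven by random shocks:
#   x[t] = a1*x[t-1] + a2*x[t-2] + shock
# Poles at r*exp(+/- i*w) with r = 0.9 (damping) and w = 0.3 rad (frequency).
a1, a2 = 2 * 0.9 * np.cos(0.3), -0.9 ** 2
n = 20000
x = np.zeros(n)
for ti in range(2, n):
    x[ti] = a1 * x[ti - 1] + a2 * x[ti - 2] + rng.standard_normal()

# Yule-Walker: solve for (a1, a2) from the sample autocovariances.
def autocov(x, lag):
    xc = x - x.mean()
    return np.dot(xc[: len(x) - lag], xc[lag:]) / len(x)

r0, r1, r2 = autocov(x, 0), autocov(x, 1), autocov(x, 2)
a_hat = np.linalg.solve(np.array([[r0, r1], [r1, r0]]), np.array([r1, r2]))

# Damping factor and angular frequency recovered from the fitted poles.
poles = np.roots([1.0, -a_hat[0], -a_hat[1]])
r_hat, w_hat = np.abs(poles[0]), np.abs(np.angle(poles[0]))
```

    The pole radius `r_hat` plays the role of the damping factor and the pole angle `w_hat` the oscillation frequency; the paper's full model adds moving average terms and an inter-minima random walk on top of this.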

  8. Burst and Principal Components Analyses of MEA Data Separates Chemicals by Class

    EPA Science Inventory

    Microelectrode arrays (MEAs) detect drug and chemical induced changes in action potential "spikes" in neuronal networks and can be used to screen chemicals for neurotoxicity. Analytical "fingerprinting," using Principal Components Analysis (PCA) on spike trains recorded from prim...

  9. A comparison of linear approaches to filter out environmental effects in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Deraemaeker, A.; Worden, K.

    2018-05-01

    This paper discusses the possibility of using the Mahalanobis squared-distance to perform robust novelty detection in the presence of important environmental variability in a multivariate feature vector. By performing an eigenvalue decomposition of the covariance matrix used to compute that distance, it is shown that the Mahalanobis squared-distance can be written as the sum of independent terms which result from a transformation from the feature vector space to a space of independent variables. In general, especially when the feature vector is large, there are dominant eigenvalues and eigenvectors of the covariance matrix, so that a set of principal components can be defined. Because the associated eigenvalues are high, the contribution of these components to the Mahalanobis squared-distance is low, while the contribution of the remaining components is high due to their low eigenvalues. This analysis shows that the Mahalanobis distance naturally filters out the variability present in the training data. This property can be used to remove the effect of the environment in damage detection, in much the same way as two other established techniques, principal component analysis and factor analysis. The three techniques are compared here using real experimental data from a wooden bridge, for which the feature vector consists of eigenfrequencies and mode shapes collected under changing environmental conditions as well as damaged conditions simulated with an added mass. The results confirm the similarity between the three techniques and their ability to filter out environmental effects while keeping a high sensitivity to structural changes. The results also show that even after filtering out the environmental effects, the normality assumption cannot be made for the residual feature vector. An alternative based on extreme value statistics is demonstrated, which results in a much better threshold that avoids false positives in the training data while still allowing detection of all damaged cases.
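
    The eigenvalue decomposition argument above can be checked numerically: writing the Mahalanobis squared-distance as a sum of independent per-component terms shows that a shift along a high-variance (environmental) direction contributes far less than an equal shift along a low-variance one. A sketch on assumed toy data, not the bridge dataset:

```python
import numpy as np

rng = np.random.default_rng(4)

# Training features: one dominant "environmental" direction (std 10)
# and five low-variance directions (std 1).
n, p = 2000, 6
X = rng.standard_normal((n, p))
X[:, 0] *= 10.0

mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(cov)              # eigenvalues in ascending order

def mahalanobis_sq(x):
    d = x - mu
    return d @ np.linalg.solve(cov, d)

def component_terms(x):
    # Transform to independent variables z_i = v_i . (x - mu) / sqrt(lambda_i);
    # the squared distance is then exactly sum(z_i ** 2).
    z = evecs.T @ (x - mu) / np.sqrt(evals)
    return z ** 2

# An identical 20-unit shift along the high-variance (environmental) direction
# versus along a low-variance direction: the former is largely filtered out.
msd_env = mahalanobis_sq(mu + 20 * evecs[:, -1])   # roughly (20/10)^2 = 4
msd_dmg = mahalanobis_sq(mu + 20 * evecs[:, 0])    # roughly (20/1)^2  = 400
```

    The same physical shift thus barely moves the distance when it lies along the environmental direction, which is the filtering property the paper exploits.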

  10. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    NASA Astrophysics Data System (ADS)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technique; however, in some special cases PLSR produces considerable errors. To solve this problem, the traditional principal component regression (PCR) method was improved in this paper by using the principle of PLSR. The PCR and PLSR methods are the focus of this paper. First, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted using the principle of PLSR. Second, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is calculated by PLSR and by the improved PCR, and the two results are analyzed and compared. The experimental results show that the improved PCR and PLSR are similar for most data, but the improved PCR performs better than PLSR for data near the detection limit. Both methods can therefore be used in ultraviolet spectral analysis of water quality.
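
    Baseline principal component regression, the method the paper improves, amounts to ordinary least squares on the first few PCA scores of the spectra. A minimal numpy sketch on synthetic "spectra" (the paper's PLSR-guided component selection is not reproduced; all data and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic UV "spectra": 100 samples x 50 wavelengths, driven by a single
# latent concentration through a Gaussian-shaped absorption band, plus noise.
n, p = 100, 50
conc = rng.uniform(0.0, 1.0, n)
band = np.exp(-0.5 * ((np.arange(p) - 20) / 5.0) ** 2)
spectra = np.outer(conc, band) + 0.01 * rng.standard_normal((n, p))

# Step 1: PCA of the centered spectra; keep the first k score vectors.
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
scores = Xc @ Vt[:k].T

# Step 2: ordinary least squares of concentration on the scores (+ intercept).
A = np.c_[scores, np.ones(n)]
coef, *_ = np.linalg.lstsq(A, conc, rcond=None)

# Goodness of fit on the training data.
pred = A @ coef
r2 = 1.0 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
```

    Here `r2` comes out close to 1 because the synthetic data are nearly rank one; the paper's contribution lies in how the retained components are selected near the detection limit, which this baseline sketch does not attempt.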

  11. Short communication: Discrimination between retail bovine milks with different fat contents using chemometrics and fatty acid profiling.

    PubMed

    Vargas-Bello-Pérez, Einar; Toro-Mujica, Paula; Enriquez-Hidalgo, Daniel; Fellenberg, María Angélica; Gómez-Cortés, Pilar

    2017-06-01

    We used a multivariate chemometric approach to differentiate or associate retail bovine milks with different fat contents and non-dairy beverages, using fatty acid profiles and statistical analysis. We collected samples of bovine milk (whole, semi-skim, and skim; n = 62) and non-dairy beverages (n = 27), and we analyzed them using gas-liquid chromatography. Principal component analysis of the fatty acid data yielded 3 significant principal components, which accounted for 72% of the total variance in the data set. Principal component 1 was related to saturated fatty acids (C4:0, C6:0, C8:0, C12:0, C14:0, C17:0, and C18:0) and monounsaturated fatty acids (C14:1 cis-9, C16:1 cis-9, C17:1 cis-9, and C18:1 trans-11); whole milk samples were clearly differentiated from the rest using this principal component. Principal component 2 differentiated semi-skim milk samples by n-3 fatty acid content (C20:3n-3, C20:5n-3, and C22:6n-3). Principal component 3 was related to C18:2 trans-9,trans-12 and C20:4n-6, and its lower scores were observed in skim milk and non-dairy beverages. A cluster analysis yielded 3 groups: group 1 consisted of only whole milk samples, group 2 was represented mainly by semi-skim milks, and group 3 included skim milk and non-dairy beverages. Overall, the present study showed that a multivariate chemometric approach is a useful tool for differentiating or associating retail bovine milks and non-dairy beverages using their fatty acid profile. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Use of multivariate statistics to identify unreliable data obtained using CASA.

    PubMed

    Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón

    2013-06-01

    In order to identify unreliable data in a dataset of motility parameters from a pilot study acquired by a veterinarian experienced in boar semen handling, but not in the operation of a computer assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted, incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml, and analyzed in a CASA system. After standardization of the data, Chernoff faces were drawn for each measurement, and a principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of the principal components for each individual measurement were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects of treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained evaluations of the first two samples in each treatment, each from a different boar. With the exception of one individual measurement, all measurements in cluster 1 were the same as those observed as abnormal Chernoff faces. The unreliable data in cluster 1 are probably related to the operator's inexperience with the CASA system. These findings could be used to objectively evaluate the skill level of a CASA operator, which may be particularly useful in the quality control of semen analysis using CASA systems.
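
    The screening idea, projecting standardized measurements onto principal components and looking for a group with extreme scores, can be sketched as follows. The Chernoff faces and the hierarchical clustering of the paper are replaced by a simple score cutoff, and all data are synthetic, so this is only an illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic motility data: 60 reliable measurements plus 12 unreliable ones
# with a systematic shift in three of the eight CASA parameters.
good = rng.standard_normal((60, 8))
bad = rng.standard_normal((12, 8))
bad[:, :3] -= 5.0
data = np.vstack([good, bad])                   # rows 60..71 are unreliable

# Standardize, then project onto the first principal component.
z = (data - data.mean(axis=0)) / data.std(axis=0)
_, _, Vt = np.linalg.svd(z, full_matrices=False)
pc1 = z @ Vt[0]

# Flag measurements whose PC1 score sits far from the bulk. (The paper
# instead inspected Chernoff faces and hierarchically clustered the scores.)
dev = np.abs(pc1 - np.median(pc1))
flagged = np.flatnonzero(dev > 2.0)
```

    With this synthetic shift, `flagged` recovers the twelve planted rows; on real data, the cluster of measurements with low PC1 scores played the same role.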

  13. [Spatial distribution characteristics of the physical and chemical properties of water in the Kunes River after the supply of snowmelt during spring].

    PubMed

    Liu, Xiang; Guo, Ling-Peng; Zhang, Fei-Yun; Ma, Jie; Mu, Shu-Yong; Zhao, Xin; Li, Lan-Hai

    2015-02-01

    Eight physical and chemical indicators related to water quality were monitored at nineteen sampling sites along the Kunes River at the end of the snowmelt season in spring. To investigate the spatial distribution characteristics of the water's physical and chemical properties, cluster analysis (CA), discriminant analysis (DA) and principal component analysis (PCA) were employed. The cluster analysis showed that the Kunes River could be divided into three reaches according to the similarities of water physical and chemical properties among sampling sites, representing the upstream, midstream and downstream of the river, respectively. The discriminant analysis demonstrated that the reliability of this classification was high, and that DO, Cl- and BOD5 were the significant indexes leading to it. Three principal components were extracted by the principal component analysis, with a cumulative variance contribution of 86.90%. The principal component analysis also indicated that water physical and chemical properties were mostly affected by EC, ORP, NO3(-)-N, NH4(+)-N, Cl- and BOD5. The sorted principal component scores at each sampling site showed that the water quality was mainly influenced by DO in the upstream, by pH in the midstream, and by the remaining indicators in the downstream. The order of comprehensive scores for the principal components revealed that the water quality degraded from upstream to downstream, i.e., the upstream had the best water quality, followed by the midstream, while the water quality downstream was the worst. This result corresponded exactly to the three reaches classified using cluster analysis. Anthropogenic activity and the accumulation of pollutants along the river were probably the main reasons for this spatial difference.

  14. Evidence for age-associated disinhibition of the wake drive provided by scoring principal components of the resting EEG spectrum in sleep-provoking conditions.

    PubMed

    Putilov, Arcady A; Donskaya, Olga G

    2016-01-01

    Age-associated changes in different bandwidths of the human electroencephalographic (EEG) spectrum are well documented, but their functional significance is poorly understood. This spectrum seems to represent summation of simultaneous influences of several sleep-wake regulatory processes. Scoring of its orthogonal (uncorrelated) principal components can help in separation of the brain signatures of these processes. In particular, the opposite age-associated changes were documented for scores on the two largest (1st and 2nd) principal components of the sleep EEG spectrum. A decrease of the first score and an increase of the second score can reflect, respectively, the weakening of the sleep drive and disinhibition of the opposing wake drive with age. In order to support the suggestion of age-associated disinhibition of the wake drive from the antagonistic influence of the sleep drive, we analyzed principal component scores of the resting EEG spectra obtained in sleep deprivation experiments with 81 healthy young adults aged between 19 and 26 and 40 healthy older adults aged between 45 and 66 years. At the second day of the sleep deprivation experiments, frontal scores on the 1st principal component of the EEG spectrum demonstrated an age-associated reduction of response to eyes closed relaxation. Scores on the 2nd principal component were either initially increased during wakefulness or less responsive to such sleep-provoking conditions (frontal and occipital scores, respectively). These results are in line with the suggestion of disinhibition of the wake drive with age. They provide an explanation of why older adults are less vulnerable to sleep deprivation than young adults.

  15. Asteroid age distributions determined by space weathering and collisional evolution models

    NASA Astrophysics Data System (ADS)

    Willman, Mark; Jedicke, Robert

    2011-01-01

    We provide evidence of consistency between the dynamical evolution of main belt asteroids and their color evolution due to space weathering. The dynamical age of an asteroid's surface (Bottke, W.F., Durda, D.D., Nesvorný, D., Jedicke, R., Morbidelli, A., Vokrouhlický, D., Levison, H. [2005]. Icarus 175 (1), 111-140; Nesvorný, D., Jedicke, R., Whiteley, R.J., Ivezić, Ž. [2005]. Icarus 173, 132-152) is the time since its last catastrophic disruption event, which is a function of the object's diameter. The age of an S-complex asteroid's surface may also be determined from its color using a space weathering model (e.g. Willman, M., Jedicke, R., Moskovitz, N., Nesvorný, D., Vokrouhlický, D., Mothé-Diniz, T. [2010]. Icarus 208, 758-772; Jedicke, R., Nesvorný, D., Whiteley, R.J., Ivezić, Ž., Jurić, M. [2004]. Nature 429, 275-277; Willman, M., Jedicke, R., Nesvorny, D., Moskovitz, N., Ivezić, Ž., Fevig, R. [2008]. Icarus 195, 663-673). We used a sample of 95 S-complex asteroids from SMASS and obtained their absolute magnitudes and u, g, r, i, z filter magnitudes from SDSS. The absolute magnitudes yield a size-derived age distribution. The u, g, r, i, z filter magnitudes lead to the principal component color, which yields a color-derived age distribution by inverting our color-age relationship, an enhanced version of the 'dual τ' space weathering model of Willman et al. (2010). We fit the size-age distribution to the enhanced dual τ model and found characteristic weathering and gardening times of τw = 2050 ± 80 Myr and τg = 4400 (+700/-500) Myr respectively. The fit also suggests an initial principal component color of -0.05 ± 0.01 for fresh asteroid surface, with a maximum possible change of the probable color due to weathering of ΔPC = 1.34 ± 0.04. Our predicted color of fresh asteroid surface matches the color of fresh ordinary chondritic surface of PC1 = 0.17 ± 0.39.

  16. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    NASA Astrophysics Data System (ADS)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity, understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which recognize the environment in a holistic way. Principal component analysis may be considered one such method, as it makes it possible to distinguish the main factors determining landscape diversity on the one hand, and to discover regularities shaping the relationships between various elements of the environment under study on the other hand. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal component analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25000 and maps of forest habitats. Nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), the length of watercourses (km), the area of water reservoirs (m2), total forest area (ha), coniferous forest habitats area (ha), deciduous forest habitats area (ha), alder habitats area (ha). The values for individual factors were analysed for 358 grid cells of 1 km2. Based on the principal component analysis, four major factors affecting commune ecodiversity were distinguished: a hypsometric component (PC1), a deciduous forest habitats component (PC2), a river valleys and alder habitats component (PC3), and a lakes component (PC4). The distinguished factors characterise the natural qualities of the postglacial area and reflect well the role of the four most important groups of environment components in shaping the ecodiversity of the area under study. The map of ecodiversity of Debnica Kaszubska commune was created on the basis of the first four principal component scores, and five classes of diversity were then isolated: very low, low, average, high and very high. As a result of the assessment, five commune regions of very high ecodiversity were identified. These regions are also very attractive for tourists and valuable in terms of their rich nature, which includes protected areas such as the Slupia Valley Landscape Park. The suggested method of ecodiversity assessment with the use of principal component analysis may constitute an alternative methodological proposition to other research methods used so far. Literature Jedicke E., 2001. Biodiversität, Geodiversität, Ökodiversität. Kriterien zur Analyse der Landschaftsstruktur - ein konzeptioneller Diskussionsbeitrag. Naturschutz und Landschaftsplanung, 33(2/3), 59-68.
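    The procedure in steps a)-d) above can be sketched with a standard PCA. The data below are random stand-ins for the 358 grid cells and nine factors, and the variance-weighted combination of the first four component scores plus quantile classing are illustrative assumptions, not the paper's exact resultant-map procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the commune data: 358 grid cells x 9 factors
# (heights, watercourse length, water area, forest habitat areas, ...).
X = rng.random((358, 9))

# Standardize each factor, since the units differ (m, km, ha, m^2).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD of the standardized matrix; rows of Vt are the components.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s                       # per-cell scores ("factor maps")
explained = s**2 / np.sum(s**2)      # variance fraction per component

# Resultant map: a variance-weighted sum of the first four component
# scores per cell, split into five ecodiversity classes by quantile.
resultant = scores[:, :4] @ explained[:4]
classes = np.digitize(resultant, np.quantile(resultant, [0.2, 0.4, 0.6, 0.8]))
```

In a real run, `classes` would be mapped back onto the 1 km2 grid to draw the five-class ecodiversity map.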

  17. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...

  18. Rosacea assessment by erythema index and principal component analysis segmentation maps

    NASA Astrophysics Data System (ADS)

    Kuzmina, Ilona; Rubins, Uldis; Saknite, Inga; Spigulis, Janis

    2017-12-01

    RGB images of rosacea were analyzed using segmentation maps of principal component analysis (PCA) and the erythema index (EI). Areas of segmented clusters were compared to Clinician's Erythema Assessment (CEA) values given by two dermatologists. The results show that visible blood vessels are segmented more precisely on maps of the erythema index and the third principal component (PC3). In many cases, the distributions of clusters on EI and PC3 maps are very similar. Mean values of the clusters' areas on these maps show a decrease in the area of blood vessels and erythema and an increase in lighter skin area after therapy for patients with CEA = 2 at the first visit and CEA = 1 at the second visit. This study shows that EI and PC3 maps are more useful than maps of the first (PC1) and second (PC2) principal components for indicating vascular structures and erythema on the skin of rosacea patients and for therapy monitoring.
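    A minimal sketch of the two map types on a synthetic image: the EI formula below (log10 of the red/green ratio) is one common definition and an assumption here, since the study's exact formula is not given, and the quantile segmentation is a crude stand-in for the paper's clustering step.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for an RGB skin image, values in (0, 1].
img = rng.random((64, 64, 3)) * 0.99 + 0.01

# Erythema index map; log10(R/G) is one common definition (assumed).
EI = np.log10(img[..., 0] / img[..., 1])

# PCA across the three colour channels, treating pixels as observations.
pixels = img.reshape(-1, 3)
centered = pixels - pixels.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc_maps = (centered @ Vt.T).reshape(64, 64, 3)   # PC1..PC3 score maps

# Segment the PC3 map into clusters by quantile thresholds.
pc3 = pc_maps[..., 2]
labels = np.digitize(pc3, np.quantile(pc3, [0.25, 0.5, 0.75]))
```

Cluster areas (pixel counts per label) on the EI and PC3 maps could then be compared across visits, as in the study.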

  19. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line difference. In this paper, flight-line difference is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight-line difference data instead of directly to the original AEM data. Pseudo tie lines are distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. The levelling errors of the original AEM data are then obtained through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. Its effectiveness is demonstrated on survey data by comparison with the results of tie-line levelling and flight-line correlation levelling.
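    The difference / low-order-reconstruction / inverse-difference chain can be sketched on a toy survey. This is a simplified illustration under stated assumptions (a shared geology profile, one constant offset per line, no spatial interpolation step), not the paper's full algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_lines, n_samples = 40, 200
# Hypothetical AEM survey: a smooth geology signal shared by all flight
# lines, plus a per-line levelling offset and measurement noise.
geology = np.cumsum(rng.normal(0, 0.05, n_samples))
offsets = rng.normal(0, 1.0, n_lines)
data = (geology[None, :] + offsets[:, None]
        + rng.normal(0, 0.02, (n_lines, n_samples)))

# Flight-line difference: geology (slowly varying across lines) cancels,
# enhancing the levelling error.
diff = np.diff(data, axis=0)                      # (n_lines-1, n_samples)

# The levelling error is highly correlated along each differenced line,
# so a low-order (here rank-1) reconstruction captures it.
U, s, Vt = np.linalg.svd(diff, full_matrices=False)
level_err_diff = (U[:, :1] * s[:1]) @ Vt[:1, :]

# Inverse difference (cumulative sum) recovers per-line corrections,
# taking the first line as the reference.
corrections = np.vstack([np.zeros((1, n_samples)),
                         np.cumsum(level_err_diff, axis=0)])
levelled = data - corrections
```

After levelling, the line-to-line scatter should collapse to the noise level while the geology profile is preserved.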

  20. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: (1) estimate the covariance operators; (2) estimate the functional principal component scores; (3) predict the underlying curves. Through simulations, the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  1. [Content of mineral elements of Gastrodia elata by principal components analysis].

    PubMed

    Li, Jin-ling; Zhao, Zhi; Liu, Hong-chang; Luo, Chun-li; Huang, Ming-jin; Luo, Fu-lai; Wang, Hua-lei

    2015-03-01

    To study the content of mineral elements and the principal components in Gastrodia elata, mineral elements were determined by ICP and the data were analyzed by SPSS. K had the highest content, with an average of 15.31 g x kg(-1); N ranked second, with an average content of 8.99 g x kg(-1). The coefficients of variation of K and N were small, while that of Mn was the largest, at 51.39%. A highly significant positive correlation was found among N, P and K. Three principal components were selected by principal components analysis to evaluate the quality of G. elata. P, B, N, K, Cu, Mn, Fe and Mg were the characteristic elements of G. elata. The content of K and N was higher and relatively stable, while the variation of Mn content was the largest. From the perspective of mineral elements, the quality of G. elata from Guizhou and Yunnan was better.

  2. Visualizing Hyolaryngeal Mechanics in Swallowing Using Dynamic MRI

    PubMed Central

    Pearson, William G.; Zumwalt, Ann C.

    2013-01-01

    Introduction Coordinates of anatomical landmarks are captured using dynamic MRI to explore whether a proposed two-sling mechanism underlies hyolaryngeal elevation in pharyngeal swallowing. A principal components analysis (PCA) is applied to the coordinates to determine the covariant function of the proposed mechanism. Methods Dynamic MRI (dMRI) data were acquired from eleven healthy subjects during a repeated swallows task. Coordinates mapping the proposed mechanism were collected from each dynamic (frame) of a dMRI swallowing series of a randomly selected subject in order to demonstrate shape changes in a single subject. Coordinates representing minimum and maximum hyolaryngeal elevation of all 11 subjects were also mapped to demonstrate shape changes of the system among all subjects. MorphoJ software was used to perform the PCA and determine vectors of shape change (eigenvectors) for elements of the two-sling mechanism of hyolaryngeal elevation. Results For both the single-subject and group PCAs, hyolaryngeal elevation accounted for the first principal component of variation. For the single-subject PCA, the first principal component accounted for 81.5% of the variance. For the between-subjects PCA, the first principal component accounted for 58.5% of the variance. Eigenvectors and shape changes associated with this first principal component are reported. Discussion Eigenvectors indicate that two muscle slings and associated skeletal elements function as components of a covariant mechanism to elevate the hyolaryngeal complex. Morphological analysis is useful to model shape changes in the two-sling mechanism of hyolaryngeal elevation. PMID:25090608
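    The landmark-based PCA used here (geometric morphometrics) can be sketched as follows; the landmark count, elevation direction, and motion model below are hypothetical, chosen so that a single coherent mode of motion dominates, as in the study's first principal component.

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_landmarks = 30, 8
base = rng.random((n_landmarks, 2))        # resting landmark positions
elevation = np.array([0.0, 1.0])           # assumed elevation direction

# Synthetic swallow: all landmarks translate along the elevation
# direction by a time-varying amount, plus small independent jitter.
t = np.sin(np.linspace(0, np.pi, n_frames))[:, None, None]
frames = (base[None] + t * elevation[None, None, :] * 0.3
          + rng.normal(0, 0.01, (n_frames, n_landmarks, 2)))

# PCA on flattened landmark configurations (rows = frames/dynamics).
X = frames.reshape(n_frames, -1)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first eigenvector, reshaped, is a displacement field per landmark,
# i.e. the "vectors of shape change" reported in the study.
pc1_field = Vt[0].reshape(n_landmarks, 2)
```

Because the simulated motion is coherent, PC1 captures most of the variance and its displacement field points along the elevation direction.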

  3. Microgravity

    NASA Image and Video Library

    2000-04-14

    Representatives of NASA materials science experiments supported the NASA exhibit at the Rensselaer Polytechnic Institute's Space Week activities, April 5 through 11, 1999. From left to right are: Angie Jackman, project manager at NASA's Marshall Space Flight Center for dendritic growth experiments; Dr. Martin Glicksman of Rensselaer Polytechnic Institute, Troy, NY, principal investigator on the Isothermal Dendritic Growth Experiment (IDGE) that flew three times on the Space Shuttle; and Dr. Matthew Koss of College of the Holy Cross in Worcester, MA, a co-investigator on the IDGE and now principal investigator on the Transient Dendritic Solidification Experiment being developed for the International Space Station (ISS). The image at far left is a dendrite grown in Glicksman's IDGE tests aboard the Shuttle. Glicksman is also principal investigator for the Evolution of Local Microstructures: Spatial Instabilities of Coarsening Clusters.

  4. Obesity, metabolic syndrome, impaired fasting glucose, and microvascular dysfunction: a principal component analysis approach.

    PubMed

    Panazzolo, Diogo G; Sicuro, Fernando L; Clapauch, Ruth; Maranhão, Priscila A; Bouskela, Eliete; Kraemer-Aguiar, Luiz G

    2012-11-13

    We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Data from 189 female subjects (34.0 ± 15.5 years, 30.5 ± 7.1 kg/m2), who were non-smokers, non-regular drug users, without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool because it highlights common variation between variables, allowing inferences about the possible biological meaning of their associations without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and at peak after 1 min of arterial occlusion (RBCV(max)), and the time taken to reach RBCV(max) (TRBCV(max)). A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements shared similar variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCV(max), all varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCV(max), but in the opposite direction. Principal component 3 was associated only with microvascular variables varying in the same way (functional capillary density, RBCV and RBCV(max)). Fasting plasma glucose was related to principal component 4 and did not show any association with microvascular reactivity. In non-diabetic female subjects, this multivariate scenario of associations between classic clinical variables strictly related to obesity and metabolic syndrome suggests a significant relationship between these diseases and microvascular reactivity.
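    How "varying in the same way" versus "in the opposite way" shows up in PC1 loadings can be sketched on simulated data; the latent-factor model and variable names below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 189
# Hypothetical stand-in: one latent "adiposity" factor drives BMI, waist,
# BP and TG positively and HDL-c negatively.
latent = rng.normal(size=n)

def obs(sign, noise):
    return sign * latent + rng.normal(0, noise, n)

data = np.column_stack([obs(+1, 0.5), obs(+1, 0.5), obs(+1, 0.7),
                        obs(+1, 0.7), obs(-1, 0.6)])  # BMI, waist, BP, TG, HDL-c

# Standardize, then PCA; rows of Vt hold the variable loadings.
Z = (data - data.mean(axis=0)) / data.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
loadings = Vt[0]                 # loadings on principal component 1

# Variables "varying in the same way" share a loading sign on PC1,
# while HDL-c loads with the opposite sign.
same_sign = np.sign(loadings[:4])
```

The overall sign of a principal component is arbitrary; only the relative signs of the loadings carry meaning.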

  5. The factorial reliability of the Middlesex Hospital Questionnaire in normal subjects.

    PubMed

    Bagley, C

    1980-03-01

    The internal reliability of the Middlesex Hospital Questionnaire and its component subscales has been checked by means of principal components analyses of data on 256 normal subjects. The subscales (with the possible exception of Hysteria) were found to contribute to the general underlying factor of psychoneurosis. In general, the principal components analysis points to the reliability of the subscales, despite some item overlap.

  6. The Derivation of Job Compensation Index Values from the Position Analysis Questionnaire (PAQ). Report No. 6.

    ERIC Educational Resources Information Center

    McCormick, Ernest J.; And Others

    The study deals with the job component method of establishing compensation rates. The basic job analysis questionnaire used in the study was the Position Analysis Questionnaire (PAQ) (Form B). On the basis of a principal components analysis of PAQ data for a large sample (2,688) of jobs, a number of principal components (job dimensions) were…

  7. Perceptions of the Principal Evaluation Process and Performance Criteria: A Qualitative Study of the Challenge of Principal Evaluation

    ERIC Educational Resources Information Center

    Faginski-Stark, Erica; Casavant, Christopher; Collins, William; McCandless, Jason; Tencza, Marilyn

    2012-01-01

    Recent federal and state mandates have tasked school systems to move beyond principal evaluation as a bureaucratic function and to re-imagine it as a critical component to improve principal performance and compel school renewal. This qualitative study investigated the district leaders' and principals' perceptions of the performance evaluation…

  8. 2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.

    PubMed

    Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen

    2017-09-19

    A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides with respect to their functions or their potential to become useful drugs. One level deals with the physicochemical properties of drug molecules, while the other deals with their structural fragments. The predictor has self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for the timely provision of useful clues during the process of drug development.

  9. Hubble Space Telescope STIS Observations of the Wolf-Rayet Star HD 5980 in the Small Magellanic Cloud. II. The Interstellar Medium Components

    NASA Astrophysics Data System (ADS)

    Koenigsberger, Gloria; Georgiev, Leonid; Peimbert, Manuel; Walborn, Nolan R.; Barbá, Rodolfo; Niemela, Virpi S.; Morrell, Nidia; Tsvetanov, Zlatan; Schulte-Ladbeck, Regina

    2001-01-01

    Observations of the interstellar and circumstellar absorption components obtained with the Hubble Space Telescope Space Telescope Imaging Spectrograph (STIS) along the line of sight toward the Wolf-Rayet-luminous blue variable (LBV) system HD 5980 in the Small Magellanic Cloud are analyzed. Velocity components from C I, C I*, C II, C II*, C IV, N I, N V, O I, Mg II, Al II, Si II, Si II*, Si III, Si IV, S II, S III, Fe II, Ni II, Be I, Cl I, and CO are identified, and column densities estimated. The principal velocity systems in our data are (1) interstellar medium (ISM) components in the Galactic disk and halo (Vhel=1.1+/-3, 9+/-2 km s-1); (2) ISM components in the SMC (Vhel=+87+/-6, +110+/-6, +132+/-6, +158+/-8, +203+/-15 km s-1); (3) SMC supernova remnant SNR 0057-7226 components (Vhel=+312+/-3, +343+/-3, +33, +64 km s-1); (4) circumstellar (CS) velocity systems (Vhel=-1020, -840, -630, -530, -300 km s-1); and (5) a possible system at -53+/-5 km s-1 (seen only in some of the Si II lines and marginally in Fe II) of uncertain origin. The supernova remnant SNR 0057-7226 has a systemic velocity of +188 km s-1, suggesting that its progenitor was a member of the NGC 346 cluster. Our data allow estimates to be made of Te~40,000 K, ne~100 cm-3, N(H)~(4-12)×10^18 cm-2 and a total mass between 400 and 1000 Msolar for the supernova remnant (SNR) shell. We detect C I absorption lines primarily in the +132 and +158 km s-1 SMC velocity systems. As a result of the LBV-type eruptions in HD 5980, a fast-wind/slow-wind circumstellar interaction region has appeared, constituting the earliest formation stages of a windblown H II bubble surrounding this system. Variations over a timescale of 1 year in this circumstellar structure are detected. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  10. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of principal components, but it also has deficiencies, since PCA does not perform well at solving the so-called Blind Source Separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is a fundamental task in order to attach a physical meaning to them. PCA fails at the BSS problem because it seeks a new Euclidean space in which the projected data are uncorrelated; uncorrelatedness is usually too weak a condition, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is another popular technique adopted to approach this problem, and it can be used in all fields where PCA is applied. An ICA approach makes it possible to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we introduce the vbICA technique and present its application to synthetic data that simulate a GPS network recording ground deformation in a tectonically active region, with synthetic time series containing interseismic, coseismic, and postseismic deformation, plus seasonal deformation, and white and coloured noise. We study the ability of the algorithm to recover the original (known) sources of deformation, and then apply it to a real scenario: the Emilia seismic sequence (2012, northern Italy), an example of a seismic sequence that occurred in a slowly converging tectonic setting, characterized by several local to regional anthropogenic and natural sources of deformation, mainly subsidence due to fluid withdrawal and sediment compaction. We apply both PCA and vbICA to displacement time series recorded by continuous GPS and InSAR (Pezzo et al., EGU2015-8950).
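    The PCA-versus-ICA contrast can be illustrated on a toy BSS problem. The sketch below uses a plain fixed-point FastICA iteration (whitening followed by a tanh-nonlinearity update), which is a standard ICA variant and not the paper's vbICA; the two sources and mixing matrix are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
# Two independent, non-Gaussian sources (the BSS ground truth).
S = np.column_stack([np.sign(np.sin(np.linspace(0, 40, n))),  # square-ish
                     rng.uniform(-1, 1, n)])                  # uniform
A = np.array([[1.0, 0.6], [0.4, 1.0]])                        # mixing matrix
X = S @ A.T                                                   # observations

# Whitening (the PCA step): uncorrelated, unit-variance coordinates.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / n
d, E = np.linalg.eigh(cov)
Xw = Xc @ E @ np.diag(d**-0.5) @ E.T

# Minimal symmetric FastICA with a tanh nonlinearity: uncorrelatedness
# alone (whitening) does not separate the sources; independence does.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(Xw @ W.T)
    W_new = (G.T @ Xw) / n - np.diag((1 - G**2).mean(axis=0)) @ W
    u, _, vt = np.linalg.svd(W_new)   # symmetric decorrelation
    W = u @ vt
S_est = Xw @ W.T                      # recovered sources (up to sign/order)
```

The recovered components match the true sources up to permutation, sign, and scale, which is the intrinsic ambiguity of BSS.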

  11. Effect of noise in principal component analysis with an application to ozone pollution

    NASA Astrophysics Data System (ADS)

    Tsakiri, Katerina G.

    This thesis analyzes the effect of independent noise on the principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can differ essentially from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results about the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field for the decomposition of components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor of ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global term component, which describes the long-term trend and seasonal variations, and a synoptic scale component, which describes short-term variations. Using Canonical Correlation Analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the global and synoptic scale components of ozone. The global term components are modeled by a linear regression model, while the synoptic scale components are modeled by a vector autoregressive model and the Kalman filter. The coefficient of determination, R2, for the prediction of the synoptic scale ozone component was found to be highest when we consider the synoptic scale components of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
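    The eigenvalue-gap condition can be demonstrated numerically: with well-separated eigenvalues, adding independent noise barely rotates the leading eigenvector. The variances and noise level below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000

def leading_pc(X):
    """Leading principal component (unit eigenvector) of a sample."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

# Two variables with well-separated variances (eigenvalues ~9 and ~1).
X = rng.normal(size=(n, 2)) * np.array([3.0, 1.0])
pc_clean = leading_pc(X)
pc_noisy = leading_pc(X + rng.normal(0, 0.5, (n, 2)))  # independent noise
alignment_far = abs(pc_clean @ pc_noisy)   # |cos(angle)|, near 1

# Nearly equal eigenvalues (~1.1 and ~1.0): the same noise can rotate the
# leading eigenvector substantially, so the alignment varies widely.
Y = rng.normal(size=(n, 2)) * np.array([1.05, 1.0])
alignment_near = abs(leading_pc(Y) @ leading_pc(Y + rng.normal(0, 0.5, (n, 2))))
```

Only the well-separated case gives a stable eigenvector; the near-degenerate alignment is essentially random from run to run.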

  12. Experimental Researches on the Durability Indicators and the Physiological Comfort of Fabrics using the Principal Component Analysis (PCA) Method

    NASA Astrophysics Data System (ADS)

    Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.

    2017-06-01

    The work examined the classification of combed wool fabrics destined for the manufacture of outer garments in terms of the values of their durability and physiological comfort indices, using the mathematical model of Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method of multivariate/multi-dimensional data analysis, and aims to reduce, in a controlled way, the number of variables (columns) of the data matrix to two or three. Thus, based on the information about each group/assortment of fabrics, the aim is to replace nine inter-correlated variables with only two or three new variables called components. The target of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.
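    The reduction of nine inter-correlated indices to two or three components can be sketched with the cumulative explained variance; the data below are a random stand-in for the fabric indices, generated from two latent properties so that two components dominate by construction.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 120
# Hypothetical fabric data: nine indices driven by two latent properties
# (say, durability and comfort), plus measurement noise.
latent = rng.normal(size=(n, 2))
mix = rng.normal(size=(2, 9))
X = latent @ mix + rng.normal(0, 0.3, (n, 9))

# Standardize and run PCA on the correlation structure.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
cumulative = np.cumsum(explained)

# Smallest number of components recovering 90% of the total variance.
n_components = int(np.searchsorted(cumulative, 0.90) + 1)
```

With genuinely inter-correlated variables, `n_components` comes out far below nine, which is the whole point of the reduction.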

  13. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates a multivariate pixel value, scaled and quantized into a gray-level vector, with each pixel location; the covariance between two component images measures the extent to which they are correlated. The PCT decorrelates a multiimage, reducing its dimensionality and revealing intercomponent dependencies when some off-diagonal elements of the covariance matrix are not small; for display purposes, the principal component images must be postprocessed back into multiimage format. Principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
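    A minimal PCT sketch on a synthetic multiimage: the four correlated bands, image size, and gray-level rescaling used for display are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical 4-band multiimage: the bands share one underlying scene,
# so they are strongly correlated.
base = rng.random((32, 32))
bands = np.stack([base + rng.normal(0, 0.05, (32, 32)) for _ in range(4)],
                 axis=-1)

# PCT: eigendecomposition of the band covariance matrix.
X = bands.reshape(-1, 4)
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = eigvals.argsort()[::-1]           # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pct = (Xc @ eigvecs).reshape(32, 32, 4)   # decorrelated component images
share = eigvals / eigvals.sum()           # intrinsic dimensionality hint

# For display, rescale each principal component image back to gray levels.
span = np.ptp(pct, axis=(0, 1))
display = ((pct - pct.min(axis=(0, 1))) / span * 255).astype(np.uint8)
```

Because the bands share a single scene, nearly all the variance lands in the first component image, indicating an intrinsic dimensionality of one.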

  14. Psychometric evaluation of the Persian version of the Templer's Death Anxiety Scale in cancer patients.

    PubMed

    Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid

    2016-10-01

    In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.

  15. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    ERIC Educational Resources Information Center

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education. Many universities have made effective achievements in evaluating teaching quality. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  16. Analysis of the principal component algorithm in phase-shifting interferometry.

    PubMed

    Vargas, J; Quiroga, J Antonio; Belenguer, T

    2011-06-15

    We recently presented a new asynchronous demodulation method for phase-sampling interferometry. The method is based on the principal component analysis (PCA) technique. In that work, the PCA method was derived heuristically. In this work, we present an in-depth analysis of the PCA demodulation method.
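    The core of the PCA demodulation idea can be sketched on synthetic fringes: after removing the temporal mean of the frame stack, the first two principal components form a quadrature (cosine/sine) pair, and the phase is their arctangent, recovered without knowing the phase steps. The fringe pattern, step values, and image size below are arbitrary choices for the sketch.

```python
import numpy as np

# Synthetic phase-shifted interferograms with steps unknown to the method.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
phi = 6 * (x**2 + y**2)                        # true phase map
shifts = np.array([0.0, 1.3, 2.5, 3.8, 5.0])   # not used by the algorithm
frames = np.stack([1 + 0.8 * np.cos(phi + d) for d in shifts])

# One row per interferogram; subtracting the temporal mean removes the
# background term.
M = frames.reshape(len(shifts), -1)
Mc = M - M.mean(axis=0)

# The first two principal components span the quadrature pair.
U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
pc1 = Vt[0].reshape(64, 64)
pc2 = Vt[1].reshape(64, 64)

phase = np.arctan2(pc2, pc1)   # demodulated phase, up to sign and offset
```

The recovered phase matches the true phase up to a global sign and a constant offset, the usual ambiguities of asynchronous demodulation.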

  17. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks capable of computing a principal component extraction has been observed. Despite this interest, the…

  18. Burst and Principal Components Analyses of MEA Data for 16 Chemicals Describe at Least Three Effects Classes.

    EPA Science Inventory

    Microelectrode arrays (MEAs) detect drug- and chemical-induced changes in neuronal network function and have been used for neurotoxicity screening. As a proof-of-concept, the current study assessed the utility of analytical "fingerprinting" using Principal Components Analysis (P...

  19. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  20. Tolerance and Nature of Residual Refraction in Symmetric Power Space as Principal Lens Powers and Meridians Change

    PubMed Central

    2014-01-01

    Unacceptable principal powers in well-centred lenses may require a toric over-refraction which differs in nature from the one where correct powers have misplaced meridians. This paper calculates residual (over) refractions and their natures. The magnitude of the power of the over-refraction serves as a general, reliable, real scalar criterion for acceptance or tolerance of lenses whose surface relative curvatures change or whose meridians are rotated and cause powers to differ. Principal powers and meridians of lenses are analogous to eigenvalues and eigenvectors of symmetric matrices, which facilitates the calculation of powers and their residuals. Geometric paths in symmetric power space link intended refractive correction and these carefully chosen, undue refractive corrections. Principal meridians alone vary along an arc of a circle centred at the origin and corresponding powers vary autonomously along select diameters of that circle in symmetric power space. Depending on the path of the power change, residual lenses different from their prescription in principal powers and meridians are pure cross-cylindrical or spherocylindrical in nature. The location of residual power in symmetric dioptric power space and its optical cross-representation characterize the lens that must be added to the compensation to attain the power in the prescription. PMID:25478004

  1. Tolerance and nature of residual refraction in symmetric power space as principal lens powers and meridians change.

    PubMed

    Abelman, Herven; Abelman, Shirley

    2014-01-01

    Unacceptable principal powers in well-centred lenses may require a toric over-refraction which differs in nature from the one where correct powers have misplaced meridians. This paper calculates residual (over) refractions and their natures. The magnitude of the power of the over-refraction serves as a general, reliable, real scalar criterion for acceptance or tolerance of lenses whose surface relative curvatures change or whose meridians are rotated and cause powers to differ. Principal powers and meridians of lenses are analogous to eigenvalues and eigenvectors of symmetric matrices, which facilitates the calculation of powers and their residuals. Geometric paths in symmetric power space link intended refractive correction and these carefully chosen, undue refractive corrections. Principal meridians alone vary along an arc of a circle centred at the origin and corresponding powers vary autonomously along select diameters of that circle in symmetric power space. Depending on the path of the power change, residual lenses different from their prescription in principal powers and meridians are pure cross-cylindrical or spherocylindrical in nature. The location of residual power in symmetric dioptric power space and its optical cross-representation characterize the lens that must be added to the compensation to attain the power in the prescription.
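
    The eigenvalue/eigenvector analogy above can be made concrete with a closed-form 2x2 eigendecomposition, assuming the standard sphero-cylindrical power-matrix convention; the prescription +2.00/-1.00 x 30 is chosen purely as an example.

```python
# Principal powers and meridians of a thin lens from its symmetric
# dioptric power matrix, mirroring the eigenvalue/eigenvector analogy.
import math

def power_matrix(sphere, cyl, axis_deg):
    """Dioptric power matrix of a sphero-cylinder S / C x axis."""
    t = math.radians(axis_deg)
    s, c = math.sin(t), math.cos(t)
    return (sphere + cyl * s * s,      # f11
            -cyl * s * c,              # f12 (= f21, matrix is symmetric)
            sphere + cyl * c * c)      # f22

def principal_powers(f11, f12, f22):
    """Eigenvalues (principal powers) and the meridian of the first one."""
    mean = (f11 + f22) / 2.0
    radius = math.hypot((f11 - f22) / 2.0, f12)
    p1, p2 = mean + radius, mean - radius
    meridian = math.degrees(math.atan2(p1 - f11, f12)) % 180.0
    return p1, p2, meridian

f11, f12, f22 = power_matrix(2.00, -1.00, 30.0)   # +2.00 / -1.00 x 30
p1, p2, meridian = principal_powers(f11, f12, f22)
print(round(p1, 2), round(p2, 2), round(meridian, 1))  # 2.0 1.0 30.0
```

    The recovered powers are S and S + C, with the meridian of power S at the cylinder axis, which is the eigenstructure the abstract exploits.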

  2. Terahertz-dependent identification of simulated hole shapes in oil-gas reservoirs

    NASA Astrophysics Data System (ADS)

    Bao, Ri-Ma; Zhan, Hong-Lei; Miao, Xin-Yang; Zhao, Kun; Feng, Cheng-Jing; Dong, Chen; Li, Yi-Zhang; Xiao, Li-Zhi

    2016-10-01

    Detecting holes in oil-gas reservoirs is vital to the evaluation of reservoir potential. The main objective of this study is to demonstrate the feasibility of identifying general micro-hole shapes, including triangular, circular, and square shapes, in oil-gas reservoirs by adopting terahertz time-domain spectroscopy (THz-TDS). We evaluate the THz absorption responses of punched silicon (Si) wafers having micro-holes with sizes of 20 μm-500 μm. Principal component analysis (PCA) is used to establish a model between THz absorbance and hole shapes. The positions of samples in three-dimensional spaces for three principal components are used to determine the differences among diverse hole shapes and the homogeneity of similar shapes. In addition, a new Si wafer with the unknown hole shapes, including triangular, circular, and square, can be qualitatively identified by combining THz-TDS and PCA. Therefore, the combination of THz-TDS with mathematical statistical methods can serve as an effective approach to the rapid identification of micro-hole shapes in oil-gas reservoirs. Project supported by the National Natural Science Foundation of China (Grant No. 61405259), the National Basic Research Program of China (Grant No. 2014CB744302), and the Specially Founded Program on National Key Scientific Instruments and Equipment Development, China (Grant No. 2012YQ140005).

  3. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition.

    PubMed

    Caggiano, Alessandra

    2018-03-09

    Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interfaces, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA made it possible to identify a small number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space of reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.
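
    The linear projection described above (d features onto k = 2 principal component scores) can be sketched in plain Python via the covariance matrix and power iteration with deflation; the toy rows below stand in for the real sensor features and are not the paper's data.

```python
def pca_top_k(rows, k=2):
    """Project rows (n x d) onto the top-k principal components."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - mean[j] for j in range(d)] for r in rows]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    comps = []
    for _ in range(k):
        v = [1.0] * d
        for _ in range(500):                              # power iteration
            w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
            s = sum(c * c for c in w) ** 0.5
            v = [c / s for c in w]
        lam = sum(v[a] * sum(cov[a][b] * v[b] for b in range(d))
                  for a in range(d))
        comps.append(v)
        # Deflate so the next pass finds the next component.
        cov = [[cov[a][b] - lam * v[a] * v[b] for b in range(d)]
               for a in range(d)]
    scores = [[sum(x[i][j] * c[j] for j in range(d)) for c in comps]
              for i in range(n)]
    return comps, scores

# Toy stand-in for the d sensor features (two correlated, one noisy).
rows = [[2.0, 4.1, 1.0], [4.0, 8.0, 0.9], [6.0, 12.2, 1.1], [8.0, 15.9, 1.0]]
comps, scores = pca_top_k(rows)
```

    The 2-column `scores` are what would feed the neural network; almost all of the variance ends up in the first column.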

  4. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition

    PubMed Central

    2018-01-01

    Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interfaces, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA made it possible to identify a small number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space of reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values. PMID:29522443

  5. Detection and Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of GPS Time Series

    NASA Astrophysics Data System (ADS)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2014-12-01

    A critical point in the analysis of ground displacement time series is the development of data-driven methods that make it possible to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also has some deficiencies. Indeed, PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelatedness condition is not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we present the application of the vbICA technique to GPS position time series.
First, we use vbICA on synthetic data that simulate a seismic cycle (interseismic + coseismic + postseismic + seasonal + noise), and study the ability of the algorithm to recover the original (known) sources of deformation. Secondly, we apply vbICA to different tectonically active scenarios, such as earthquakes in central and northern Italy, as well as the study of slow slip events in Cascadia.
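
    The PCA-versus-ICA point above can be illustrated with a toy blind source separation: whitening (the PCA/decorrelation step) leaves a rotation undetermined, which is then fixed by maximising non-Gaussianity. This grid-search-over-kurtosis sketch is a deliberately simple stand-in for vbICA; the mixing coefficients and source distributions are invented.

```python
import math, random

random.seed(0)
n = 3000
s1 = [random.uniform(-1.0, 1.0) for _ in range(n)]      # sub-Gaussian source
s2 = [random.choice([-1, 1]) * random.expovariate(1.0)  # super-Gaussian source
      for _ in range(n)]
x1 = [0.7 * a + 0.3 * b for a, b in zip(s1, s2)]        # observed mixtures
x2 = [0.4 * a - 0.8 * b for a, b in zip(s1, s2)]

def centre(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

def cov(a, b):
    return sum(p * q for p, q in zip(a, b)) / len(a)

x1, x2 = centre(x1), centre(x2)

# Whitening = the PCA step: rotate to decorrelate, then rescale to unit variance.
c11, c12, c22 = cov(x1, x1), cov(x1, x2), cov(x2, x2)
phi = 0.5 * math.atan2(2.0 * c12, c11 - c22)
cp, sp = math.cos(phi), math.sin(phi)
e1 = [cp * a + sp * b for a, b in zip(x1, x2)]
e2 = [-sp * a + cp * b for a, b in zip(x1, x2)]
z1 = [a / math.sqrt(cov(e1, e1)) for a in e1]
z2 = [a / math.sqrt(cov(e2, e2)) for a in e2]

def excess_kurtosis(v):
    return sum(a ** 4 for a in v) / len(v) / cov(v, v) ** 2 - 3.0

# ICA step: the rotation left free by whitening is fixed by non-Gaussianity.
def rotated(t):
    return [math.cos(t) * a + math.sin(t) * b for a, b in zip(z1, z2)]

angles = [k * math.pi / 180.0 for k in range(180)]
theta = max(angles, key=lambda t: abs(excess_kurtosis(rotated(t))))
y = rotated(theta)

def corr(a, b):
    ca, cb = centre(a), centre(b)
    return cov(ca, cb) / math.sqrt(cov(ca, ca) * cov(cb, cb))

corr_recovered = abs(corr(y, s2))   # recovered component vs true source
```

    Whitening alone cannot pick out `s2` (any rotation of `z1, z2` stays uncorrelated); the kurtosis criterion does, which is the gap between PCA and ICA that the abstract describes.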

  6. Fusion Schemes for Ensembles of Hyperspectral Anomaly Detection Algorithms

    DTIC Science & Technology

    2011-03-01

    ...radiance to reflectance or vice versa is complicated and requires some knowledge of the atmospheric conditions and viewing geometry at the time of... each component. The data is projected into this new principal component space where it is whitened. The number of dimensions to be retained is... controlled directly by the user, the threshold setting. However, one of the complications of this method often is calculating confidence intervals

  7. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, then, one may accelerate the convergence of the method. To this end, we present a formulation of Weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrent with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.
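
    A minimal kernel PCA sketch of the reduced-basis idea: build an RBF kernel matrix over a handful of parameter realisations, double-centre it, and extract the leading component by power iteration. The sample points, the kernel width `gamma`, and the two-cluster layout are assumptions for illustration; the paper's adjoint coupling and observation weighting are omitted.

```python
import math

samples = [[0.00, 0.10], [0.10, 0.00], [0.05, 0.05],   # cluster one
           [1.00, 0.90], [0.90, 1.00], [0.95, 0.95]]   # cluster two
n = len(samples)

def rbf(a, b, gamma=2.0):
    return math.exp(-gamma * sum((p - q) ** 2 for p, q in zip(a, b)))

K = [[rbf(a, b) for b in samples] for a in samples]

# Double-centre the kernel matrix (centring in feature space).
row = [sum(K[i]) / n for i in range(n)]
tot = sum(row) / n
Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]

# Leading eigenvector of the centred kernel matrix by power iteration.
v = [1.0, -1.0, 0.5, -0.5, 0.25, -0.25]
for _ in range(300):
    w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
    norm = math.sqrt(sum(c * c for c in w))
    v = [c / norm for c in w]

scores = v   # first kernel-PC coordinate of each sample (up to scale/sign)
```

    The sign of the leading kernel-PC score separates the two clusters, which is the kind of low-dimensional feature coordinate over which the optimization would then run.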

  8. Geometric morphometric evaluation of cervical vertebrae shape and its relationship to skeletal maturation.

    PubMed

    Chatzigianni, Athina; Halazonetis, Demetrios J

    2009-10-01

    Cervical vertebrae shape has been proposed as a diagnostic factor for assessing skeletal maturation in orthodontic patients. However, evaluation of vertebral shape is mainly based on qualitative criteria. Comprehensive quantitative measurements of shape and assessments of its predictive power have not been reported. Our aims were to measure vertebral shape by using the tools of geometric morphometrics and to evaluate the correlation and predictive power of vertebral shape on skeletal maturation. Pretreatment lateral cephalograms and corresponding hand-wrist radiographs of 98 patients (40 boys, 58 girls; ages, 8.1-17.7 years) were used. Skeletal age was estimated from the hand-wrist radiographs. The first 4 vertebrae were traced, and 187 landmarks (34 fixed and 153 sliding semilandmarks) were used. Sliding semilandmarks were adjusted to minimize bending energy against the average of the sample. Principal components analysis in shape and form spaces was used for evaluating shape patterns. Shape measures, alone and combined with centroid size and age, were assessed as predictors of skeletal maturation. Shape alone could not predict skeletal maturation better than chronologic age. The best prediction was achieved with the combination of form space principal components and age, giving 90% prediction intervals of approximately 200 maturation units in the girls and 300 units in the boys. Similar predictive power could be obtained by using centroid size and age. Vertebrae C2, C3, and C4 gave similar results when examined individually or combined. C1 showed lower correlations, signifying lower integration with hand-wrist maturation. Vertebral shape is strongly correlated to skeletal age but does not offer better predictive value than chronologic age.

  9. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  10. A principal components model of soundscape perception.

    PubMed

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.

  11. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.

  12. Coordinating space telescope operations in an integrated planning and scheduling architecture

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.; Cesta, Amedeo; D'Aloisi, Daniela

    1992-01-01

    The Heuristic Scheduling Testbed System (HSTS), a software architecture for integrated planning and scheduling, is discussed. The architecture has been applied to the problem of generating observation schedules for the Hubble Space Telescope. This problem is representative of the class of problems that can be addressed: their complexity lies in the interaction of resource allocation and auxiliary task expansion. The architecture deals with this interaction by viewing planning and scheduling as two complementary aspects of the more general process of constructing behaviors of a dynamical system. The principal components of the software architecture are described, indicating how to model the structure and dynamics of a system, how to represent schedules at multiple levels of abstraction in the temporal database, and how the problem solving machinery operates. A scheduler for the detailed management of Hubble Space Telescope operations that has been developed within HSTS is described. Experimental performance results are given that indicate the utility and practicality of the approach.

  13. Spatial and temporal characterizations of water quality in Kuwait Bay.

    PubMed

    Al-Mutairi, N; Abahussain, A; El-Battay, A

    2014-06-15

    The spatial and temporal patterns of water quality in Kuwait Bay have been investigated using data from six stations between 2009 and 2011. The results showed that most water quality parameters, such as phosphorus (PO4), nitrate (NO3), dissolved oxygen (DO), and Total Suspended Solids (TSS), fluctuated over time and space. Based on Water Quality Index (WQI) data, the six stations were significantly clustered into two main classes using cluster analysis, one located on the western side of the Bay and the other on the eastern side. Three principal components are responsible for water quality variations in the Bay. The first component included DO and pH. The second included PO4, TSS and NO3, and the last component contained seawater temperature and turbidity. The spatial and temporal patterns of water quality in Kuwait Bay are mainly controlled by seasonal variations and discharges from point sources of pollution along Kuwait Bay's coast as well as from the Shatt Al-Arab River. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Application of principal component analysis in protein unfolding: an all-atom molecular dynamics simulation study.

    PubMed

    Das, Atanu; Mukhopadhyay, Chaitali

    2007-10-28

    We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide: ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, principal component analysis method was applied in order to give a new insight to protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.

  15. Application of principal component analysis in protein unfolding: An all-atom molecular dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Das, Atanu; Mukhopadhyay, Chaitali

    2007-10-01

    We have performed molecular dynamics (MD) simulation of the thermal denaturation of one protein and one peptide—ubiquitin and melittin. To identify the correlation in dynamics among various secondary structural fragments and also the individual contribution of different residues towards thermal unfolding, principal component analysis method was applied in order to give a new insight to protein dynamics by analyzing the contribution of coefficients of principal components. The cross-correlation matrix obtained from MD simulation trajectory provided important information regarding the anisotropy of backbone dynamics that leads to unfolding. Unfolding of ubiquitin was found to be a three-state process, while that of melittin, though smaller and mostly helical, is more complicated.

  16. SAS program for quantitative stratigraphic correlation by principal components

    USGS Publications Warehouse

    Hohn, M.E.

    1985-01-01

    A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections and the observational units are range limits of taxa. The program standardizes data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from scores of events on the first principal component. Provided is an option of several types of diagnostic plots; these help one to determine conservative range limits or unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allow one to evaluate goodness of fit between the composite and measured data. The program is extended easily to the creation of a rank-order composite. ?? 1985.
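
    The workflow above (sections as variables, taxon range limits as observations, composite ordering from first-principal-component scores) can be sketched as follows; the five events and three sections are invented, and power iteration stands in for SAS's eigenvector routine.

```python
events = ["A", "B", "C", "D", "E"]           # taxon range limits (invented)
depths = [[10, 12,  9],                       # rows: events, cols: sections
          [20, 25, 19],
          [30, 33, 31],
          [40, 44, 42],
          [50, 55, 48]]

n, d = len(depths), len(depths[0])

# Standardise within each section (column), as the program does.
zcols = []
for j in range(d):
    col = [row[j] for row in depths]
    m = sum(col) / n
    sd = (sum((v - m) ** 2 for v in col) / (n - 1)) ** 0.5
    zcols.append([(v - m) / sd for v in col])
z = [[zcols[j][i] for j in range(d)] for i in range(n)]   # back to event rows

# First principal component of the section correlation matrix.
cor = [[sum(z[i][a] * z[i][b] for i in range(n)) / (n - 1)
        for b in range(d)] for a in range(d)]
v = [1.0] * d
for _ in range(300):                          # power iteration
    w = [sum(cor[a][b] * v[b] for b in range(d)) for a in range(d)]
    s = sum(c * c for c in w) ** 0.5
    v = [c / s for c in w]
if sum(v) < 0:                                # fix the sign convention
    v = [-c for c in v]

# Composite section: events ordered by their first-PC scores.
scores = [sum(z[i][j] * v[j] for j in range(d)) for i in range(n)]
composite = [e for _, e in sorted(zip(scores, events))]
print(composite)  # ['A', 'B', 'C', 'D', 'E']
```

    Because the sections are positively correlated, the first-PC score is close to an average of standardised depths, and the composite reproduces the shared stratigraphic order.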

  17. Implementation of an integrating sphere for the enhancement of noninvasive glucose detection using quantum cascade laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.

    2018-05-01

    An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy levels. Using a smaller TE-cooled detector reduces form factor, creating a mobile sensor. Principal component analysis of spectra taken from human subjects has yielded principal components that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.

  18. A novel principal component analysis for spatially misaligned multivariate air pollution data.

    PubMed

    Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A

    2017-01-01

    We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.

  19. Euclidean chemical spaces from molecular fingerprints: Hamming distance and Hempel's ravens.

    PubMed

    Martin, Eric; Cao, Eddie

    2015-05-01

    Molecules are often characterized by sparse binary fingerprints, where 1s represent the presence of substructures and 0s represent their absence. Fingerprints are especially useful for similarity calculations, such as database searching or clustering, generally measuring similarity as the Tanimoto coefficient. In other cases, such as visualization, design of experiments, or latent variable regression, a low-dimensional Euclidean "chemical space" is more useful, where proximity between points reflects chemical similarity. A temptation is to apply principal components analysis (PCA) directly to these fingerprints to obtain a low-dimensional continuous chemical space. However, Gower has shown that distances from PCA on bit vectors are proportional to the square root of Hamming distance. Unlike Tanimoto similarity, Hamming similarity (HS) gives equal weight to shared 0s as to shared 1s; that is, HS gives as much weight to substructures that neither molecule contains as to substructures that both molecules contain. Illustrative examples show that proximity in the corresponding chemical space reflects mainly similar size and complexity rather than shared chemical substructures. These spaces are ill-suited for visualizing and optimizing coverage of chemical space, or as latent variables for regression. A more suitable alternative is shown to be multi-dimensional scaling on the Tanimoto distance matrix, which produces a space where proximity does reflect structural similarity.
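
    The contrast above in a nutshell: Hamming similarity rewards shared absent substructures (0/0 matches), while the Tanimoto coefficient counts only shared present ones. The toy fingerprints are invented for illustration.

```python
def hamming_sim(a, b):
    """Fraction of positions that agree, counting shared 0s as agreement."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def tanimoto(a, b):
    """Shared 1s over the union of 1s; shared 0s are ignored."""
    both = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return both / union

# Two small molecules whose only set bits differ ...
m1 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
m2 = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
# ... and two larger molecules sharing most of their substructures.
m3 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
m4 = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

# Hamming ties both pairs at 0.8, driven by shared 0s; Tanimoto does not.
print(hamming_sim(m1, m2), tanimoto(m1, m2))   # 0.8 0.0
print(hamming_sim(m3, m4), tanimoto(m3, m4))
```

    A PCA embedding of such fingerprints inherits the Hamming behaviour (both pairs look equally similar), whereas an embedding built from Tanimoto distances keeps the structurally related pair close.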

  20. Principals' Perceptions of Collegial Support as a Component of Administrative Inservice.

    ERIC Educational Resources Information Center

    Daresh, John C.

    To address the problem of increasing professional isolation of building administrators, the Principals' Inservice Project helps establish principals' collegial support groups across the nation. The groups are typically composed of 6 to 10 principals who meet at least once each month over a 2-year period. One collegial support group of seven…

  1. Training the Trainers: Learning to Be a Principal Supervisor

    ERIC Educational Resources Information Center

    Saltzman, Amy

    2017-01-01

    While most principal supervisors are former principals themselves, few come to the role with specific training in how to do the job effectively. For this reason, both the Washington, D.C., and Tulsa, Oklahoma, principal supervisor programs include a strong professional development component. In this article, the author takes a look inside these…

  2. The Great Observatories All-Sky LIRG Survey: Herschel Image Atlas and Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Chu, Jason K.; Sanders, D. B.; Larson, K. L.; Mazzarella, J. M.; Howell, J. H.; Díaz-Santos, T.; Xu, K. C.; Paladini, R.; Schulz, B.; Shupe, D.; Appleton, P.; Armus, L.; Billot, N.; Chan, B. H. P.; Evans, A. S.; Fadda, D.; Frayer, D. T.; Haan, S.; Ishida, C. M.; Iwasawa, K.; Kim, D.-C.; Lord, S.; Murphy, E.; Petric, A.; Privon, G. C.; Surace, J. A.; Treister, E.

    2017-04-01

    Far-infrared images and photometry are presented for 201 Luminous and Ultraluminous Infrared Galaxies [LIRGs: log(L_IR/L_⊙) = 11.00–11.99; ULIRGs: log(L_IR/L_⊙) = 12.00–12.99] in the Great Observatories All-Sky LIRG Survey (GOALS), based on observations with the Herschel Space Observatory Photodetector Array Camera and Spectrometer (PACS) and the Spectral and Photometric Imaging Receiver (SPIRE) instruments. The image atlas displays each GOALS target in the three PACS bands (70, 100, and 160 μm) and the three SPIRE bands (250, 350, and 500 μm), optimized to reveal structures at both high and low surface brightness levels, with images scaled to simplify comparison of structures in the same physical areas of ~100 × 100 kpc². Flux densities of companion galaxies in merging systems are provided where possible, depending on their angular separation and the spatial resolution in each passband, along with integrated system fluxes (sum of components). This data set constitutes the imaging and photometric component of the GOALS Herschel OT1 observing program, and is complementary to atlases presented for the Hubble Space Telescope, Spitzer Space Telescope, and Chandra X-ray Observatory. Collectively, these data will enable a wide range of detailed studies of active galactic nucleus and starburst activity within the most luminous infrared galaxies in the local universe. Based on Herschel Space Observatory observations. Herschel is an ESA space observatory with science instruments provided by the European-led Principal Investigator consortia, and important participation from NASA.

  3. Use of Geochemistry Data Collected by the Mars Exploration Rover Spirit in Gusev Crater to Teach Geomorphic Zonation through Principal Components Analysis

    ERIC Educational Resources Information Center

    Rodrigue, Christine M.

    2011-01-01

    This paper presents a laboratory exercise used to teach principal components analysis (PCA) as a means of surface zonation. The lab was built around abundance data for 16 oxides and elements collected by the Mars Exploration Rover Spirit in Gusev Crater between Sol 14 and Sol 470. Students used PCA to reduce 15 of these into 3 components, which,…

  4. A Principal Components Analysis and Validation of the Coping with the College Environment Scale (CWCES)

    ERIC Educational Resources Information Center

    Ackermann, Margot Elise; Morrow, Jennifer Ann

    2008-01-01

    The present study describes the development and initial validation of the Coping with the College Environment Scale (CWCES). Participants included 433 college students who took an online survey. Principal Components Analysis (PCA) revealed six coping strategies: planning and self-management, seeking support from institutional resources, escaping…

  5. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    Results of a comparison of different mother wavelets used for de-noising model and experimental data, represented by profiles of absorption spectra of exhaled air, are presented. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.

  6. Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.

    2013-06-01

    Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450- 950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.

  7. Stability of Nonlinear Principal Components Analysis: An Empirical Study Using the Balanced Bootstrap

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Kooij, Anita J.

    2007-01-01

    Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate…

  8. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  9. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  10. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  11. 40 CFR 60.1580 - What are the principal components of the model rule?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the model rule? 60.1580 Section 60.1580 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines..., 1999 Use of Model Rule § 60.1580 What are the principal components of the model rule? The model rule...

  12. 40 CFR 60.2998 - What are the principal components of the model rule?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the model rule? 60.2998 Section 60.2998 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines... December 9, 2004 Model Rule-Use of Model Rule § 60.2998 What are the principal components of the model rule...

  13. Students' Perceptions of Teaching and Learning Practices: A Principal Component Approach

    ERIC Educational Resources Information Center

    Mukorera, Sophia; Nyatanga, Phocenah

    2017-01-01

    Students' attendance and engagement with teaching and learning practices are perceived as critical elements for academic performance. Even with stipulated attendance policies, students still choose not to engage. The study employed a principal component analysis to analyze first- and second-year students' perceptions of the importance of the 12…

  14. Principal Perspectives about Policy Components and Practices for Reducing Cyberbullying in Urban Schools

    ERIC Educational Resources Information Center

    Hunley-Jenkins, Keisha Janine

    2012-01-01

    This qualitative study explores the perspectives of principals of large, urban, Midwestern schools about cyberbullying and the policy components and practices that they have found effective and ineffective at reducing its occurrence and/or its negative effect on their schools' learning environments. More specifically, the researcher was interested in learning more…

  15. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    ERIC Educational Resources Information Center

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…

  16. Learning Principal Component Analysis by Using Data from Air Quality Networks

    ERIC Educational Resources Information Center

    Perez-Arribas, Luis Vicente; Leon-González, María Eugenia; Rosales-Conrado, Noelia

    2017-01-01

    With the goal of introducing computational and chemometric tools into chemistry studies, this paper presents the methodology and interpretation of Principal Component Analysis (PCA) using pollution data from different cities. It describes how students can obtain data on air quality and process such data for additional information…

  17. Applications of Nonlinear Principal Components Analysis to Behavioral Data.

    ERIC Educational Resources Information Center

    Hicks, Marilyn Maginley

    1981-01-01

    An empirical investigation of the statistical procedure entitled nonlinear principal components analysis was conducted on a known equation and on measurement data in order to demonstrate the procedure and examine its potential usefulness. This method was suggested by R. Gnanadesikan and based on an early paper of Karl Pearson. (Author/AL)

  18. Relationships between Association of Research Libraries (ARL) Statistics and Bibliometric Indicators: A Principal Components Analysis

    ERIC Educational Resources Information Center

    Hendrix, Dean

    2010-01-01

    This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…

  19. Principal component analysis for protein folding dynamics.

    PubMed

    Maisuradze, Gia G; Liwo, Adam; Scheraga, Harold A

    2009-01-09

    Protein folding is considered here by studying the dynamics of the folding of the triple beta-strand WW domain from the Formin-binding protein 28. Starting from the unfolded state and ending either in the native or nonnative conformational states, trajectories are generated with the coarse-grained united residue (UNRES) force field. The effectiveness of principal components analysis (PCA), an already established mathematical technique for finding global, correlated motions in atomic simulations of proteins, is evaluated here for coarse-grained trajectories. The problems related to PCA and their solutions are discussed. The folding and nonfolding of proteins are examined with free-energy landscapes. Detailed analyses of many folding and nonfolding trajectories at different temperatures show that PCA is very efficient for characterizing the general folding and nonfolding features of proteins. It is shown that the first principal component captures and describes in detail the dynamics of a system. Anomalous diffusion in the folding/nonfolding dynamics is examined by the mean-square displacement (MSD) and the fractional diffusion and fractional kinetic equations. The collisionless (or ballistic) behavior of a polypeptide undergoing Brownian motion along the first few principal components is accounted for.
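    The pipeline this abstract describes — projecting a trajectory onto its principal components and probing anomalous diffusion via the mean-square displacement (MSD) along the first component — can be sketched in a few lines of numpy. The toy trajectory below is synthetic and purely illustrative; it is not the UNRES data or the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trajectory": 500 frames of a 10-coordinate system whose dominant
# motion lies along one direction (a stand-in for a folding coordinate).
frames = 500
direction = rng.normal(size=10)
direction /= np.linalg.norm(direction)
amplitude = np.cumsum(rng.normal(size=frames))  # slow, diffusive mode
trajectory = np.outer(amplitude, direction) + 0.1 * rng.normal(size=(frames, 10))

# PCA: diagonalize the covariance of the mean-free trajectory.
centered = trajectory - trajectory.mean(axis=0)
cov = centered.T @ centered / (frames - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Time series of the projection onto the first principal component.
pc1 = centered @ eigvecs[:, 0]

# Time-averaged mean-square displacement of PC1 as a function of lag.
def msd(x, max_lag):
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in range(1, max_lag)])

lags = np.arange(1, 50)
m = msd(pc1, 50)
# MSD ~ t^2 signals ballistic motion, MSD ~ t normal diffusion; the fitted
# log-log slope (anomalous-diffusion exponent) distinguishes the regimes.
alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]
print(eigvals[0] / eigvals.sum(), alpha)
```

    For this diffusive toy mode the first PC captures nearly all of the variance and the fitted exponent is close to 1, the normal-diffusion value.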

  20. Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters.

    PubMed

    Tao, Dapeng; Lin, Xu; Jin, Lianwen; Li, Xuelong

    2016-03-01

    Chinese character font recognition (CCFR) has received increasing attention as intelligent applications based on optical character recognition become popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing in detail the basic strokes of Chinese characters, we propose that font recognition on a single Chinese character is a sequence classification problem, which can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with 2-D long short-term memory (2DLSTM) and develop the principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component convolution layer helps remove noise and obtain coherent, complete font information and 2) simultaneously, 2DLSTM handles long-range contextual processing along the scan directions, which helps capture the contrast between character trajectory and background. Experiments using the frequently used CCFR dataset demonstrate the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.

  1. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were detected in N. roborowskii, while V was below the detection limit. Na, K and Ca were present at high concentrations. Ti showed the largest variance in content and K the smallest. Four principal components were extracted from the original data, with a cumulative variance contribution of 81.542%; the first principal component contributed 44.997% of the variance, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics of N. roborowskii fruits are related to their geographical origins, as clearly revealed by PCA. These results provide a sound basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.

  2. [Applications of three-dimensional fluorescence spectrum of dissolved organic matter to identification of red tide algae].

    PubMed

    Lü, Gui-Cai; Zhao, Wei-Hong; Wang, Jiang-Tao

    2011-01-01

    Identification techniques for 10 species of red tide algae often found in the coastal areas of China were developed by combining the three-dimensional fluorescence spectra of fluorescent dissolved organic matter (FDOM) from cultured red tide algae with principal component analysis. Based on the results of the principal component analysis, the first principal component loading spectrum of the three-dimensional fluorescence spectrum was chosen as the identification characteristic spectrum for red tide algae, and the phytoplankton fluorescence characteristic spectrum band was established. The 10 algal species were then tested using Bayesian discriminant analysis, with a correct identification rate of more than 92% for Pyrrophyta at the species level and more than 75% for Bacillariophyta at the genus level, within which the correct identification rates were more than 90% for Phaeodactylum and Chaetoceros. The results showed that identification techniques based on the three-dimensional fluorescence spectra of FDOM from cultured red tide algae combined with principal component analysis work well.

  3. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
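    The core reduction step of two-directional two-dimensional PCA — operating on matrices rather than on flattened vectors — can be illustrated with a minimal numpy sketch. The dimensions and data below are toy values, not the EMG recordings, and the stationary-wavelet front end from the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 40 samples, each an 8x6 matrix (e.g. a multi-scale coefficient
# matrix), instead of a flattened 48-dimensional vector.
samples = rng.normal(size=(40, 8, 6))
centered = samples - samples.mean(axis=0)

# Two-directional 2DPCA: one scatter matrix over columns, one over rows.
g_col = np.mean([a.T @ a for a in centered], axis=0)  # 6x6, right projection
g_row = np.mean([a @ a.T for a in centered], axis=0)  # 8x8, left projection

def top_eigvecs(g, k):
    """Return the k eigenvectors with the largest eigenvalues."""
    w, v = np.linalg.eigh(g)
    return v[:, np.argsort(w)[::-1][:k]]

x = top_eigvecs(g_col, 2)  # column-direction projection matrix, 6x2
z = top_eigvecs(g_row, 3)  # row-direction projection matrix, 8x3

# Each 8x6 sample matrix is reduced to a 3x2 feature matrix: Z^T A X.
features = np.array([z.T @ a @ x for a in centered])
print(features.shape)  # (40, 3, 2)
```

    Keeping the matrix structure avoids the very high-dimensional vectors (and the small-sample-size problem) that the abstract attributes to conventional vectorized PCA.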

  4. Hyperspectral optical imaging of human iris in vivo: characteristics of reflectance spectra

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Pereira, Luís M.; Correia, Hélder T.; Nascimento, Sérgio M. C.

    2011-07-01

    We report a hyperspectral imaging system to measure the reflectance spectra of real human irises with high spatial resolution. A set of ocular prostheses was used as the control condition. Reflectance data were decorrelated by principal component analysis. The main conclusion is that the spectral complexity of the human iris is considerable: between 9 and 11 principal components are necessary to account for 99% of the cumulative variance in human irises. Correcting image misalignments associated with spontaneous ocular movements did not influence this result. The data also suggest a correlation between the first principal component and the different levels of melanin present in the irises. It was also found that although the spectral characteristics of the first five principal components were not affected by the radial and angular position of the selected iridal areas, the higher-order components were affected, suggesting a possible influence of iris texture. The results show that hyperspectral imaging of the iris, together with adequate spectroscopic analyses, provides more information than conventional colorimetric methods, making it suitable for the characterization of melanin and the noninvasive diagnosis of ocular diseases and iris color.
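    The criterion used above — counting how many principal components are needed to reach 99% cumulative variance — is straightforward to sketch in numpy. The synthetic "spectra" below are illustrative only, not the iris measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "reflectance spectra": 200 pixels x 31 wavelength bands, built from
# a handful of latent spectral basis functions plus measurement noise.
bands, pixels, latent = 31, 200, 10
basis = rng.normal(size=(latent, bands))
spectra = rng.normal(size=(pixels, latent)) @ basis \
    + 0.05 * rng.normal(size=(pixels, bands))

# Eigenvalues of the sample covariance, largest first.
centered = spectra - spectra.mean(axis=0)
eigvals = np.linalg.eigvalsh(centered.T @ centered / (pixels - 1))[::-1]
cumvar = np.cumsum(eigvals) / eigvals.sum()

# Smallest number of components reaching 99% cumulative variance.
n99 = int(np.searchsorted(cumvar, 0.99) + 1)
print(n99)
```

    With 10 latent basis spectra and low noise, the count lands near the latent dimensionality, which is exactly the kind of conclusion the abstract draws for the iris data.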

  5. Seeing wholes: The concept of systems thinking and its implementation in school leadership

    NASA Astrophysics Data System (ADS)

    Shaked, Haim; Schechter, Chen

    2013-12-01

    Systems thinking (ST) is an approach advocating thinking about any given issue as a whole, emphasising the interrelationships between its components rather than the components themselves. This article aims to link ST and school leadership, claiming that ST may enable school principals to develop highly performing schools that can cope successfully with current challenges, which are more complex than ever before in today's era of accountability and high expectations. The article presents the concept of ST - its definition, components, history and applications. Thereafter, its connection to education and its contribution to school management are described. The article concludes by discussing practical processes including screening for ST-skilled principal candidates and developing ST skills among prospective and currently performing school principals, pinpointing three opportunities for skills acquisition: during preparatory programmes; during their first years on the job, supported by veteran school principals as mentors; and throughout their entire career. Such opportunities may not only provide school principals with ST skills but also improve their functioning throughout the aforementioned stages of professional development.

  6. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
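    The preprocessing this abstract describes — an isometric log-ratio (ilr) transform of the compositional concentrations followed by a principal component transformation — can be sketched as below. This is a minimal classical-PCA illustration on synthetic data; the paper's robust PCA variant and the mixture-model clustering step are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy compositional data: 100 samples of 5 element concentrations that
# sum to 1 (closure), as in regional geochemical surveys.
raw = rng.lognormal(size=(100, 5))
comp = raw / raw.sum(axis=1, keepdims=True)

def ilr(x):
    """Isometric log-ratio transform via pivot coordinates (D -> D-1)."""
    d = x.shape[1]
    out = np.empty((x.shape[0], d - 1))
    for j in range(1, d):
        gm = np.exp(np.log(x[:, :j]).mean(axis=1))  # geometric mean of first j parts
        out[:, j - 1] = np.sqrt(j / (j + 1.0)) * np.log(gm / x[:, j])
    return out

z = ilr(comp)  # 100 x 4, unconstrained Euclidean coordinates
centered = z - z.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(centered.T @ centered / (len(z) - 1))
order = np.argsort(eigvals)[::-1]
scores = centered @ eigvecs[:, order[:2]]  # leading PCs, fed to clustering
print(scores.shape)
```

    The ilr step matters because raw concentrations are constrained to a simplex; transforming first, as the modified procedure does, makes the subsequent PCA (and clustering) operate in an ordinary Euclidean space.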

  7. Temporal evolution of financial-market correlations.

    PubMed

    Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S

    2011-08-01

    We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
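    The random-matrix test described here — comparing the eigenvalues of an empirical correlation matrix with the Marchenko-Pastur upper edge expected for uncorrelated price changes — can be sketched in numpy. The one-factor returns below are synthetic and illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)

n_assets, n_obs = 20, 1000

# Returns with one common "market" factor plus idiosyncratic noise.
market = rng.normal(size=n_obs)
returns = 0.5 * market[:, None] + rng.normal(size=(n_obs, n_assets))

corr = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # largest first

# Marchenko-Pastur upper edge for a purely random correlation matrix.
q = n_assets / n_obs
lambda_max = (1 + np.sqrt(q)) ** 2

# Eigenvalues above the random-matrix bound carry genuine structure.
n_signal = int(np.sum(eigvals > lambda_max))
share = eigvals[0] / eigvals.sum()  # variance explained by the first PC
print(n_signal, share)
```

    The single planted factor produces one eigenvalue well above the bound, mirroring the paper's finding that a small number of components accounts for a large share of market variability.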

  8. Temporal evolution of financial-market correlations

    NASA Astrophysics Data System (ADS)

    Fenn, Daniel J.; Porter, Mason A.; Williams, Stacy; McDonald, Mark; Johnson, Neil F.; Jones, Nick S.

    2011-08-01

    We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.

  9. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H. W.; Kurth, R. E.

    1991-01-01

    This report describes the work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods. Three methods were implemented in the engine system influence model; RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed: accuracy comparable to the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, whether a real flight or a test, has three phases: the engine start transient, the steady state, and the engine cutoff transient. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient-phase calculations. In addition, a few generic probabilistic load models were developed for specific conditions, including the fixed transient spike model, the Poisson-arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads under specific conditions. For the SSME, four components were selected for application of the CLS program: turbine blades, transfer ducts, the LOX post, and the high-pressure oxidizer turbopump (HPOTP) discharge duct. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blades, thermal loads (temperatures) for all four components, and structural vibration loads for the ducts and LOX post.

  10. Baryonic effects in cosmic shear tomography: PCA parametrization and importance of extreme baryonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Irshad; Gnedin, Nickolay Y.

    Baryonic effects are among the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future cosmological surveys such as LSST and Euclid. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we carry the investigation further and address two critical aspects. First, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted thoroughly with an RMS of $\sim 0.0011$. Second, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively poor fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct application of this method to the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models, such that reality is included inside the parameter domain sampled by the training set. The baryonic effects can then be parameterized as the coefficients of these principal components and marginalized over the cosmological parameter space.

  11. Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP

    NASA Astrophysics Data System (ADS)

    Russo, A.; Trigo, R. M.

    2003-04-01

    A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. NLPCA allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991); the method is described and details of its implementation are addressed. NLPCA is first applied to a data set sampled from the Lorenz attractor (1963). It is found that the NLPCA approximations are more representative of the data than the corresponding PCA approximations. The same methodology was applied to the less well-known Lorenz attractor (1984); however, the results obtained were not as good as those attained with the famous 'butterfly' attractor. Further work with this model is under way to assess whether NLPCA techniques can be more representative of the data characteristics than the corresponding PCA approximations. The application of NLPCA to relatively 'simple' dynamical systems, such as those proposed by Lorenz, is well understood; the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the associated explained variance. Finally, directions for future work are presented.

  12. Evaluating filterability of different types of sludge by statistical analysis: The role of key organic compounds in extracellular polymeric substances.

    PubMed

    Xiao, Keke; Chen, Yun; Jiang, Xie; Zhou, Yan

    2017-03-01

    An investigation was conducted for 20 different types of sludge in order to identify the key organic compounds in extracellular polymeric substances (EPS) that are important in assessing variations of sludge filterability. The different types of sludge varied in initial total solids (TS) content, organic composition and pre-treatment methods; for instance, some of the sludges were pre-treated by acid, ultrasonic, thermal, alkaline, or advanced oxidation techniques. The Pearson's correlation results showed significant correlations between sludge filterability and zeta potential, pH, dissolved organic carbon, protein and polysaccharide in soluble EPS (SB EPS), loosely bound EPS (LB EPS) and tightly bound EPS (TB EPS). The principal component analysis (PCA) method was used to further explore correlations between variables and similarities among EPS fractions of different types of sludge. Two principal components were extracted: principal component 1 accounted for 59.24% of total EPS variations, while principal component 2 accounted for 25.46%. Dissolved organic carbon, protein and polysaccharide in LB EPS showed higher eigenvector projection values on principal component 1 than the corresponding compounds in SB EPS and TB EPS. Further characterization of the fractionated key organic compounds in LB EPS was conducted with size-exclusion chromatography-organic carbon detection-organic nitrogen detection (LC-OCD-OND). A numerical multiple linear regression model was established to describe the relationship between organic compounds in LB EPS and sludge filterability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. QSAR modeling of flotation collectors using principal components extracted from topological indices.

    PubMed

    Natarajan, R; Nirdosh, Inderjit; Basak, Subhash C; Mills, Denise R

    2002-01-01

    Several topological indices were calculated for substituted cupferrons that were tested as collectors for the froth flotation of uranium. Principal component analysis (PCA) was used for data reduction; seven principal components (PCs) were found to account for 98.6% of the variance among the computed indices. The principal components thus extracted were used in stepwise regression analyses to construct regression models for the prediction of separation efficiencies (Es) of the collectors. A two-parameter model with a correlation coefficient of 0.889 and a three-parameter model with a correlation coefficient of 0.913 were obtained. PCs were found to be better than the partition coefficient for forming regression equations, and inclusion of an electronic parameter such as the Hammett sigma or quantum-mechanically derived electronic charges on the chelating atoms did not improve the correlation coefficient significantly. The method was extended to model the separation efficiencies of mercaptobenzothiazoles (MBT) and aminothiophenols (ATP) used in the flotation of lead and zinc ores, respectively. Five principal components were found to explain 99% of the data variability in each series. A three-parameter equation with a correlation coefficient of 0.985 and a two-parameter equation with a correlation coefficient of 0.926 were obtained for MBT and ATP, respectively. The amenability of separation efficiencies of chelating collectors to QSAR modeling using PCs based on topological indices might lead to the selection of collectors for synthesis and testing from a virtual database.
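    The two-step procedure described here — PCA for data reduction followed by regression of the response on the leading components — amounts to principal component regression. Below is a minimal numpy sketch on synthetic descriptors; the descriptor matrix and response are invented stand-ins, not the actual topological indices or flotation data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setup: 30 "collectors" described by 8 correlated descriptors, with a
# response (separation efficiency) driven by two latent directions.
n, p = 30, 8
latent = rng.normal(size=(n, 2))
mixing = rng.normal(size=(2, p))
indices = latent @ mixing + 0.1 * rng.normal(size=(n, p))
es = 3.0 * latent[:, 0] - 1.5 * latent[:, 1] + 0.1 * rng.normal(size=n)

# Step 1: PCA of the standardized descriptor matrix.
x = (indices - indices.mean(axis=0)) / indices.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(x, rowvar=False))
order = np.argsort(eigvals)[::-1]
pcs = x @ eigvecs[:, order[:2]]  # scores on the two leading PCs

# Step 2: ordinary least squares of the response on the PC scores.
design = np.column_stack([np.ones(n), pcs])
coef, *_ = np.linalg.lstsq(design, es, rcond=None)
pred = design @ coef
r = np.corrcoef(pred, es)[0, 1]  # multiple correlation coefficient
print(r)
```

    Because the response here is driven by the same latent directions that dominate the descriptor variance, a two-PC model fits well — the analogue of the two- and three-parameter models reported in the abstract.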

  14. Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM

    NASA Technical Reports Server (NTRS)

    Crane, Robert G.; Hewitson, Bruce

    1990-01-01

    Model simulations of global climate change are seen as an essential component of any program aimed at understanding human impact on the global environment. A major weakness of current general circulation models (GCMs), however, is their inability to predict reliably the regional consequences of a global-scale change, and it is these regional-scale predictions that are necessary for studies of human/environmental response. This research is directed toward the development of a methodology for the validation of the synoptic-scale climatology of GCMs. It is developed with regard to the Goddard Institute for Space Studies (GISS) GCM Model 2, with the specific objective of using the synoptic circulation from a doubled-CO2 simulation to estimate regional climate change over North America, south of Hudson Bay. This progress report is specifically concerned with validating the synoptic climatology of the GISS GCM and developing the transfer function to derive grid-point temperatures from the synoptic circulation. Principal Components Analysis is used to characterize the primary modes of spatial and temporal variability in the observed and simulated climate, and the model validation is based on correlations between component loadings and power spectral analysis of the component scores. The results show that the high-resolution GISS model does an excellent job of simulating the synoptic circulation over the U.S., and that grid-point temperatures can be predicted with reasonable accuracy from the circulation patterns.

  15. 14 CFR 221.101 - Inspection at stations, offices, or locations other than principal or general office.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Inspection at stations, offices, or locations other than principal or general office. 221.101 Section 221.101 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Availability of Tariff Publications for Public...

  16. Pattern Analysis of Dynamic Susceptibility Contrast-enhanced MR Imaging Demonstrates Peritumoral Tissue Heterogeneity

    PubMed Central

    Akbari, Hamed; Macyszyn, Luke; Da, Xiao; Wolf, Ronald L.; Bilello, Michel; Verma, Ragini; O’Rourke, Donald M.

    2014-01-01

    Purpose To augment the analysis of dynamic susceptibility contrast material–enhanced magnetic resonance (MR) images to uncover unique tissue characteristics that could potentially facilitate treatment planning through a better understanding of the peritumoral region in patients with glioblastoma. Materials and Methods Institutional review board approval was obtained for this study, with waiver of informed consent for retrospective review of medical records. Dynamic susceptibility contrast-enhanced MR imaging data were obtained for 79 patients, and principal component analysis was applied to the perfusion signal intensity. The first six principal components were sufficient to characterize more than 99% of variance in the temporal dynamics of blood perfusion in all regions of interest. The principal components were subsequently used in conjunction with a support vector machine classifier to create a map of heterogeneity within the peritumoral region, and the variance of this map served as the heterogeneity score. Results The calculated principal components allowed near-perfect separation of tissue that was likely highly infiltrated with tumor from tissue that was unlikely to be infiltrated with tumor. The heterogeneity map created by using the principal components showed a clear relationship between voxels judged by the support vector machine to be highly infiltrated and subsequent recurrence. The results demonstrated a significant correlation (r = 0.46, P < .0001) between the heterogeneity score and patient survival. The hazard ratio was 2.23 (95% confidence interval: 1.4, 3.6; P < .01) between patients with high and low heterogeneity scores on the basis of the median heterogeneity score. Conclusion Analysis of dynamic susceptibility contrast-enhanced MR imaging data by using principal component analysis can help identify imaging variables that can be subsequently used to evaluate the peritumoral region in glioblastoma.
These variables are potentially indicative of tumor infiltration and may become useful tools in guiding therapy, as well as individualized prognostication. © RSNA, 2014 PMID:24955928
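    A minimal sketch of the pipeline described above, with synthetic perfusion curves and a nearest-centroid classifier standing in for the paper's support vector machine (the curve shapes, noise levels, and class mixture are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 40  # time points per perfusion signal

def curve(kind, n):
    """Synthetic perfusion time-courses: 'tumor'-like voxels get a deeper
    contrast dip than 'normal'-appearing voxels (shapes are invented)."""
    t = np.linspace(0, 1, T)
    depth = 2.0 if kind == "tumor" else 0.8
    base = 1 - depth * np.exp(-((t - 0.3) ** 2) / 0.01)
    return base + 0.05 * rng.standard_normal((n, T))

train = np.vstack([curve("tumor", 100), curve("normal", 100)])
labels = np.array([1] * 100 + [0] * 100)

# PCA: first six components of the pooled perfusion signals
mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
P = Vt[:6].T                      # (T x 6) principal component loads
c1 = ((train - mu) @ P)[labels == 1].mean(axis=0)
c0 = ((train - mu) @ P)[labels == 0].mean(axis=0)

def infiltration_score(x):
    """Nearest-centroid score in PC space (stand-in for the paper's SVM):
    positive means closer to the infiltrated-tissue centroid."""
    z = (x - mu) @ P
    return np.linalg.norm(z - c0, axis=1) - np.linalg.norm(z - c1, axis=1)

# Peritumoral "map": a mixed voxel population; the variance of its scores
# plays the role of the heterogeneity score.
peritumoral = np.vstack([curve("tumor", 30), curve("normal", 70)])
heterogeneity = infiltration_score(peritumoral).var()
```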

  17. Signal-to-noise contribution of principal component loads in reconstructed near-infrared Raman tissue spectra.

    PubMed

    Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R

    2010-01-01

    The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and the short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR_msr) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads.
The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
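    The core measurement can be sketched as follows: rebuild spectra from an increasing number of principal component loads and track the mean SNR over the spectral range. With synthetic spectra the clean signal is known, so the SNR can be computed against ground truth, which is not possible for real tissue data:

```python
import numpy as np

rng = np.random.default_rng(2)
wn = np.linspace(800, 1800, 500)  # wavenumber axis (illustrative)

def peaks(a, b):
    """Two-band synthetic 'Raman' spectrum with peak amplitudes a and b."""
    return a * np.exp(-((wn - 1004) ** 2) / 50) + b * np.exp(-((wn - 1450) ** 2) / 200)

clean = np.array([peaks(1 + 0.2 * rng.random(), 0.5 + 0.2 * rng.random())
                  for _ in range(60)])
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

mean_spec = noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(noisy - mean_spec, full_matrices=False)

def reconstruct(k):
    """Spectra rebuilt from the first k principal component loads."""
    scores = (noisy - mean_spec) @ Vt[:k].T
    return mean_spec + scores @ Vt[:k]

def mean_snr(spectra):
    """Mean SNR over the whole spectral range, against the known clean
    spectra (only possible because the data are synthetic)."""
    return clean.std() / (spectra - clean).std()

snr_by_k = [mean_snr(reconstruct(k)) for k in (1, 2, 5, 20, 59)]
```

    In this toy setup the SNR tends to peak once the signal-bearing components are included and then falls as later components reinject noise, which is the rationale for choosing a cutoff.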

  18. Principal component reconstruction (PCR) for cine CBCT with motion learning from 2D fluoroscopy.

    PubMed

    Gao, Hao; Zhang, Yawei; Ren, Lei; Yin, Fang-Fang

    2018-01-01

    This work aims to generate cine CT images (i.e., 4D images with high-temporal resolution) based on a novel principal component reconstruction (PCR) technique with motion learning from 2D fluoroscopic training images. In the proposed PCR method, matrix factorization is utilized as an explicit low-rank regularization of 4D images, which are represented as a product of spatial principal components and temporal motion coefficients. The key hypothesis of PCR is that temporal coefficients from 4D images can be reasonably approximated by temporal coefficients learned from 2D fluoroscopic training projections. For this purpose, we can acquire fluoroscopic training projections for a few breathing periods at fixed gantry angles that are free from geometric distortion due to gantry rotation, that is, fluoroscopy-based motion learning. Such training projections can provide an effective characterization of the breathing motion. The temporal coefficients can be extracted from these training projections and used as priors for PCR, even though principal components from training projections are certainly not the same as for the 4D images to be reconstructed. For this purpose, training data are synchronized with reconstruction data using identical real-time breathing position intervals for projection binning. In terms of image reconstruction, with a priori temporal coefficients, the data fidelity for PCR changes from nonlinear to linear, and consequently, the PCR method is robust and can be solved efficiently. PCR is formulated as a convex optimization problem with the sum of linear data fidelity with respect to spatial principal components and spatiotemporal total variation regularization imposed on 4D image phases. The solution algorithm of PCR is developed based on the alternating direction method of multipliers. The implementation is fully parallelized on GPU with the NVIDIA CUDA toolbox, and each reconstruction takes a few minutes.
The proposed PCR method is validated and compared with a state-of-the-art method, PICCS, using both simulation and experimental data with the on-board cone-beam CT setting. The results demonstrated the feasibility of PCR for cine CBCT and its significantly improved reconstruction quality relative to PICCS. With a priori estimated temporal motion coefficients using fluoroscopic training projections, the PCR method can accurately reconstruct spatial principal components, and then generate cine CT images as a product of temporal motion coefficients and spatial principal components. © 2017 American Association of Physicists in Medicine.
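    The key computational point, that fixing the temporal coefficients turns the fit for the spatial components into a linear least-squares problem, can be shown in miniature (random low-rank "images", no projection operator or TV regularization, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_phase, K = 300, 10, 2

# Ground-truth spatial principal components and temporal motion coefficients
U_true = rng.standard_normal((n_vox, K))
C = np.vstack([np.sin(np.linspace(0, 2 * np.pi, n_phase)),
               np.cos(np.linspace(0, 2 * np.pi, n_phase))])  # (K x phases)

frames = U_true @ C + 0.01 * rng.standard_normal((n_vox, n_phase))

# With the temporal coefficients C fixed (learned from fluoroscopy in the
# paper), fitting the spatial components U is ordinary linear least squares:
# minimize ||U C - frames||_F over U.
U_hat = np.linalg.lstsq(C.T, frames.T, rcond=None)[0].T

recon = U_hat @ C
rel_err = np.linalg.norm(recon - frames) / np.linalg.norm(frames)
```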

  19. KSC-2012-4550

    NASA Image and Video Library

    2012-08-20

    CAPE CANAVERAL, Fla. - During a mission science briefing for the Radiation Belt Storm Probes, or RBSP, mission at NASA Kennedy Space Center’s Press Site in Florida, Craig Kletzing, a principal investigator from the University of Iowa, answers questions and displays a scale model of the twin probes. To the left is Harlan Spence, principal investigator with the University of New Hampshire. To the right is Lou Lanzerotti, principal investigator with the New Jersey Institute of Technology. NASA’s RBSP mission will help us understand the sun’s influence on Earth and near-Earth space by studying the Earth’s radiation belts on various scales of space and time. RBSP will begin its mission of exploration of Earth’s Van Allen radiation belts and the extremes of space weather after its launch aboard an Atlas V rocket. Launch is targeted for Aug. 24. For more information, visit http://www.nasa.gov/rbsp. Photo credit: NASA/Glenn Benson

  20. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique that reconstructs face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, and extensive experiments produce high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  1. Cultivating an Environment that Contributes to Teaching and Learning in Schools: High School Principals' Actions

    ERIC Educational Resources Information Center

    Lin, Mind-Dih

    2012-01-01

    Improving principal leadership is a vital component to the success of educational reform initiatives that seek to improve whole-school performance, as principal leadership often exercises positive but indirect effects on student learning. Because of the importance of principals within the field of school improvement, this article focuses on…

  2. Measuring Principals' Effectiveness: Results from New Jersey's First Year of Statewide Principal Evaluation. REL 2016-156

    ERIC Educational Resources Information Center

    Herrmann, Mariesa; Ross, Christine

    2016-01-01

    States and districts across the country are implementing new principal evaluation systems that include measures of the quality of principals' school leadership practices and measures of student achievement growth. Because these evaluation systems will be used for high-stakes decisions, it is important that the component measures of the evaluation…

  3. The Views of Novice and Late Career Principals Concerning Instructional and Organizational Leadership within Their Evaluation

    ERIC Educational Resources Information Center

    Hvidston, David J.; Range, Bret G.; McKim, Courtney Ann; Mette, Ian M.

    2015-01-01

    This study examined the perspectives of novice and late career principals concerning instructional and organizational leadership within their performance evaluations. An online survey was sent to 251 principals with a return rate of 49%. Instructional leadership components of the evaluation that were most important to all principals were:…

  4. KSC-2015-1007

    NASA Image and Video Library

    2015-01-05

    CAPE CANAVERAL, Fla. -- In the Kennedy Space Center’s Press Site auditorium, agency and industry leaders spoke to members of the news media on International Space Station research and technology developments. From left are: Mike Curie of NASA Public Affairs; Julie Robinson, ISS Program chief scientist at NASA’s Johnson Space Center; Kenneth Shields, director of operations and education for the Center for the Advancement of Science in Space; Cheryl Nickerson of Arizona State University, principal investigator for the Micro-5 experiment; and Samuel Durrance of the Florida Institute of Technology, principal investigator for the NR-SABOL experiment. Photo credit: NASA/Kim Shiflett

  5. Checking Dimensionality in Item Response Models with Principal Component Analysis on Standardized Residuals

    ERIC Educational Resources Information Center

    Chou, Yeh-Tai; Wang, Wen-Chung

    2010-01-01

    Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that an eigenvalue greater than 1.5 for the first eigenvalue signifies a violation of unidimensionality when there…
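    The residual-PCA check described above can be sketched as follows; the "standardized residuals" here are simulated noise with an injected nuisance dimension rather than residuals from a fitted Rasch model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_persons, n_items = 1000, 20

# Simulated stand-in for Rasch standardized residuals: under a unidimensional
# model they behave like independent noise across items.
resid_uni = rng.standard_normal((n_persons, n_items))

# Inject a nuisance dimension into half the items to mimic multidimensionality
nuisance = rng.standard_normal(n_persons)
resid_multi = resid_uni.copy()
resid_multi[:, :10] += 0.7 * nuisance[:, None]
resid_multi /= resid_multi.std(axis=0)  # re-standardize each item

def first_eigenvalue(R):
    """Largest eigenvalue of the inter-item correlation matrix of residuals."""
    return np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[-1]

e_uni = first_eigenvalue(resid_uni)      # near the random-data baseline
e_multi = first_eigenvalue(resid_multi)  # flags the extra dimension
```

    Under unidimensionality the first eigenvalue stays near the random-data baseline, while the contaminated residuals push it well past the 1.5 rule of thumb.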

  6. Variable Neighborhood Search Heuristics for Selecting a Subset of Variables in Principal Component Analysis

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Singh, Renu; Steinley, Douglas

    2009-01-01

    The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…

  7. Relaxation mode analysis of a peptide system: comparison with principal component analysis.

    PubMed

    Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi

    2011-10-28

    This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, the principal component analysis is a well-known method for analyzing the static properties of fluctuations of structures obtained by a simulation and classifying the structures into some groups. On the other hand, the relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.

  8. Fast principal component analysis for stacking seismic data

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
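    A toy version of PCA stacking: the rank-one SVD of the gather yields both a waveform estimate (first right singular vector) and per-trace weights (first left singular vector). The gather, wavelet, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
nt, ntr = 400, 30  # time samples per trace, traces in the gather

t = np.linspace(0, 1, nt)
wavelet = np.exp(-((t - 0.5) ** 2) / 0.001) * np.cos(80 * t)  # shared reflection
gains = 0.5 + rng.random(ntr)                # per-trace amplitude variation
gather = np.outer(gains, wavelet) + 0.2 * rng.standard_normal((ntr, nt))

mean_stack = gather.mean(axis=0)             # traditional average-based stack

# PCA stack: the first right singular vector of the gather is the waveform
# estimate; the first left singular vector supplies per-trace weights.
U, s, Vt = np.linalg.svd(gather, full_matrices=False)
pca_stack = np.sign(U[:, 0].sum()) * Vt[0]   # fix the arbitrary SVD sign

def corr_with_signal(stack):
    """Correlation of a stack with the true (known, synthetic) wavelet."""
    return np.corrcoef(stack, wavelet)[0, 1]
```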

  9. Multivariate analyses of salt stress and metabolite sensing in auto- and heterotroph Chenopodium cell suspensions.

    PubMed

    Wongchai, C; Chaidee, A; Pfeiffer, W

    2012-01-01

    Global warming increases plant salt stress via evaporation after irrigation, but how plant cells sense salt stress remains unknown. Here, we searched for correlation-based targets of salt stress sensing in Chenopodium rubrum cell suspension cultures. We proposed a linkage between the sensing of salt stress and the sensing of distinct metabolites. Consequently, we analysed various extracellular pH signals in autotroph and heterotroph cell suspensions. Our search included signals after 52 treatments: salt and osmotic stress, ion channel inhibitors (amiloride, quinidine), salt-sensing modulators (proline), amino acids, carboxylic acids and regulators (salicylic acid, 2,4-dichlorophenoxyacetic acid). Multivariate analyses revealed hierarchical clusters of signals and five principal components of extracellular proton flux. The principal component correlated with salt stress was an antagonism of γ-aminobutyric and salicylic acid, confirming involvement of acid-sensing ion channels (ASICs) in salt stress sensing. Proline, short non-substituted mono-carboxylic acids (C2-C6), lactic acid and amiloride characterised the four uncorrelated principal components of proton flux. The proline-associated principal component included an antagonism of 2,4-dichlorophenoxyacetic acid and a set of amino acids (hydrophobic, polar, acidic, basic). The five principal components captured 100% of the variance of extracellular proton flux. Thus, a bias-free, functional high-throughput screening was established to extract new clusters of response elements and potential signalling pathways, and to serve as a core for quantitative meta-analysis in plant biology. The eigenvectors reorient research, associating proline with development instead of salt stress, and the proof of existence of multiple components of proton flux can help to resolve controversy about the acid growth theory. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.

  10. Visual Exploration of Semantic Relationships in Neural Word Embeddings

    DOE PAGES

    Liu, Shusen; Bremer, Peer-Timo; Thiagarajan, Jayaraman J.; ...

    2017-08-29

    Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). But, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. Particularly, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. We introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.
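    The PCA-to-2D view and the linear analogy test mentioned above can be sketched with toy vectors in which the king : queen analogy is planted by construction (no real language model is involved):

```python
import numpy as np

rng = np.random.default_rng(6)
dim = 50

# Toy "word vectors" with a planted linear analogy:
# vec(queen) ~ vec(king) - vec(man) + vec(woman).
base = {w: rng.standard_normal(dim) for w in ["man", "woman", "king"]}
base["queen"] = (base["king"] + (base["woman"] - base["man"])
                 + 0.01 * rng.standard_normal(dim))

words = list(base)
X = np.array([base[w] for w in words])

# PCA to 2D, the common view for exploring linear relationships
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
xy = Xc @ Vt[:2].T  # 2D coordinates, one row per word

# Analogy test in the original space: nearest neighbor of king - man + woman
target = base["king"] - base["man"] + base["woman"]

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

best = max(words, key=lambda w: cos(base[w], target))
```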

  11. Visual Exploration of Semantic Relationships in Neural Word Embeddings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Shusen; Bremer, Peer-Timo; Thiagarajan, Jayaraman J.

    Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). But, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. Particularly, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. We introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.

  12. What do we mean by accuracy in geomagnetic measurements?

    USGS Publications Warehouse

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. © 1990.

  13. Estimation of human emotions using thermal facial information

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has attracted many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulties in handling transparent glasses in the thermal infrared spectrum. As a result, when using infrared imagery for the analysis of human facial information, the regions of eyeglasses are dark and the eyes' thermal information is not given. We propose a temperature space method to correct the effect of eyeglasses using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and the PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show the improved accuracy rate in estimating human emotions.

  14. Generative Topographic Mapping (GTM): Universal Tool for Data Visualization, Structure-Activity Modeling and Dataset Comparison.

    PubMed

    Kireeva, N; Baskin, I I; Gaspar, H A; Horvath, D; Marcou, G; Varnek, A

    2012-04-01

    Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling and database comparison is evaluated on subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches like Principal Component Analysis, Sammon Mapping or Self-Organizing Maps, the great advantage of GTMs is providing data probability distribution functions (PDF), both in the high-dimensional space defined by molecular descriptors and in 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method of global comparison of chemical libraries. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
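    For the dataset-overlap idea, here is a sketch of the Bhattacharyya coefficient between two latent-space distributions; the "libraries" are simple Gaussian point clouds and the PDFs are histogram estimates rather than GTM mixture responsibilities:

```python
import numpy as np

rng = np.random.default_rng(8)

def latent_pdf(points, grid=20):
    """Discretize a 2D point cloud into a probability distribution over a
    fixed latent grid (a crude stand-in for GTM responsibilities)."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=grid, range=[[-4, 4], [-4, 4]])
    H = H.ravel() + 1e-12
    return H / H.sum()

# Two "chemical libraries" as Gaussian point clouds in latent space
lib_a = rng.normal(loc=(-1.0, 0.0), scale=0.7, size=(2000, 2))
lib_b = rng.normal(loc=(1.0, 0.0), scale=0.7, size=(2000, 2))

def bhattacharyya(p, q):
    """Bhattacharyya coefficient: 1 for identical distributions, and it
    approaches 0 as the distributions become disjoint."""
    return np.sqrt(p * q).sum()

same = bhattacharyya(latent_pdf(lib_a), latent_pdf(lib_a))
diff = bhattacharyya(latent_pdf(lib_a), latent_pdf(lib_b))
```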

  15. Hunting for Active Galactic Nuclei in JWST/MIRI Imaging

    NASA Astrophysics Data System (ADS)

    Lin, Kenneth W.; Pope, Alexandra; Kirkpatrick, Allison

    2018-01-01

    The mid-infrared is uniquely sensitive to both star formation and active galactic nuclei (AGN) activity in galaxies. While spectra in this range can unambiguously identify these two processes, imaging data from the Spitzer Space Telescope found that the mid-infrared colors are also able to separate AGN from star forming galaxies. With the launch of the James Webb Space Telescope, our access to mid-infrared will be renewed; specifically, MIRI will provide imaging in 9 bands from 5.6-25.5 microns. While predictions show that color diagnostics will be useful with JWST/MIRI, this does not exploit the full dataset of MIRI imaging. In this poster, we discuss a Principal Component Analysis to identify the JWST filters that are most sensitive to the AGN contribution and demonstrate how to use it to identify large samples of AGN from planned MIRI imaging surveys.

  16. [The application of the multidimensional statistical methods in the evaluation of the influence of atmospheric pollution on the population's health].

    PubMed

    Surzhikov, V D; Surzhikov, D V

    2014-01-01

    The search for and measurement of causal relationships between exposure to air pollution and the health of the population is based on system analysis and risk assessment to improve the quality of research. For this purpose, modern statistical analysis is applied using criteria of independence, principal component analysis and discriminant function analysis. As a result of the analysis, four main components were separated from all atmospheric pollutants: for diseases of the circulatory system, the main principal component is associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component is closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and charcoal black. The discriminant function was shown to serve as a measure of the level of air pollution.

  17. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on regional potential by using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative analysis with principal-component reduction of secondary data. The method used is Principal Component Analysis (PCA) with the Minitab statistics software. The results indicate that the areas with the lowest scores are the priorities for construction of new VHS, with major programs matched to the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value of 416.92 in region 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value in region 3.

  18. Comparison of dimensionality reduction methods to predict genomic breeding values for carcass traits in pigs.

    PubMed

    Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S

    2015-10-09

    A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. Genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to propose an application of these dimensionality reduction methods to GWS for carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal and independent component methods and provided the most accurate genomic breeding estimates for most carcass traits in pigs.
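    A compact sketch of principal component regression in the p >> n setting; the genotype matrix is simulated with low-rank population structure so that the leading components carry signal (real SNP data and cross-validation are beyond this illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, k = 150, 1000, 20  # individuals, markers, retained components (p >> n)

# Genotype-like matrix with low-rank "population structure", so the leading
# principal components carry predictive signal (purely illustrative).
X = (rng.standard_normal((n, 15)) @ rng.standard_normal((15, p))
     + 0.5 * rng.standard_normal((n, p)))
beta = np.zeros(p)
beta[rng.choice(p, 10, replace=False)] = rng.standard_normal(10)
y = X @ beta + 0.5 * rng.standard_normal(n)

# Principal component regression: regress the trait on the first k component
# scores, sidestepping the p >> n singularity of ordinary least squares.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = U[:, :k] * s[:k]                              # component scores
theta = np.linalg.lstsq(Z, y - y.mean(), rcond=None)[0]

gebv = y.mean() + Z @ theta                       # in-sample value estimates
r = np.corrcoef(gebv, y)[0, 1]                    # fit accuracy
```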

  19. Magnetic hyperbolic optical metamaterials

    DOE PAGES

    Kruk, Sergey S.; Wong, Zi Jing; Pshenay-Severin, Ekaterina; ...

    2016-04-13

    Strongly anisotropic media where the principal components of electric permittivity or magnetic permeability tensors have opposite signs are termed as hyperbolic media. Such media support propagating electromagnetic waves with extremely large wave vectors exhibiting unique optical properties. However, in all artificial and natural optical materials studied to date, the hyperbolic dispersion originates solely from the electric response. This then restricts material functionality to one polarization of light and inhibits free-space impedance matching. Such restrictions can be overcome in media having components of opposite signs for both electric and magnetic tensors. Here we present the experimental demonstration of the magnetic hyperbolic dispersion in three-dimensional metamaterials. We also measure metamaterial isofrequency contours and reveal the topological phase transition between the elliptic and hyperbolic dispersion. In the hyperbolic regime, we demonstrate the strong enhancement of thermal emission, which becomes directional, coherent and polarized. These findings show the possibilities for realizing efficient impedance-matched hyperbolic media for unpolarized light.

  20. Reducing the Read Noise of the James Webb Space Telescope Near Infrared Spectrograph Detector Subsystem

    NASA Technical Reports Server (NTRS)

    Rauscher, Bernard; Arendt, Richard G.; Fixsen, D. J.; Lindler, Don; Loose, Markus; Moseley, S. H.; Wilson, D. V.

    2012-01-01

    We describe a Wiener optimal approach to using the reference output and reference pixels that are built into Teledyne's HAWAII-2RG detector arrays. In this way, we reduce the total noise per approximately 1000-second, 88-frame up-the-ramp dark integration from about 6.5 e- rms to roughly 5 e- rms. Using a principal components analysis formalism, we achieved these noise improvements without altering the hardware in any way. In addition to being lower, the noise is also cleaner, with much less visible correlation. For example, the faint horizontal banding that is often seen in HAWAII-2RG images is almost completely removed. Preliminary testing suggests that the relative gains are even higher when using non-flight-grade components. We believe that these techniques are applicable to most HAWAII-2RG based instruments.

  1. RSM 1.0 user's guide: A resupply scheduler using integer optimization

    NASA Technical Reports Server (NTRS)

    Viterna, Larry A.; Green, Robert D.; Reed, David M.

    1991-01-01

    The Resupply Scheduling Model (RSM) is a PC-based, fully menu-driven computer program. It uses integer programming techniques to determine an optimum schedule to replace components on or before a fixed replacement period, subject to user-defined constraints such as transportation mass and volume limits or available repair crew time. Principal input for RSM includes properties such as mass and volume and an assembly sequence. Resource constraints are entered for each period corresponding to the component properties. Though written to analyze the electrical power system on Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. Presented here is a step-by-step procedure for preparing the input, performing the analysis, and interpreting the results. Instructions for installing the program and information on the algorithms are given.
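    The kind of optimization RSM performs can be illustrated with a brute-force toy scheduler: pick a replacement period for each component no later than its deadline, respect a per-period mass limit, and minimize wasted component life. All component names, masses, and limits below are invented:

```python
from itertools import product

# Toy resupply problem in the spirit of RSM (illustrative data only)
components = {          # name: (mass, deadline period)
    "battery": (40, 1),
    "pump":    (25, 2),
    "filter":  (10, 2),
    "gyro":    (30, 3),
}
periods = [1, 2, 3]
mass_limit = {1: 50, 2: 30, 3: 50}
names = list(components)

def feasible(assign):
    """Check deadline and per-period transportation mass constraints."""
    load = dict.fromkeys(periods, 0)
    for name, p in zip(names, assign):
        mass, deadline = components[name]
        if p > deadline:
            return False        # must replace on or before the deadline
        load[p] += mass
    return all(load[p] <= mass_limit[p] for p in periods)

def penalty(assign):
    # replacing earlier than the deadline wastes remaining component life
    return sum(components[n][1] - p for n, p in zip(names, assign))

best = min((a for a in product(periods, repeat=len(names)) if feasible(a)),
           key=penalty)
schedule = dict(zip(names, best))
```

    An integer-programming solver replaces the brute-force search at realistic problem sizes; the model structure (deadline and capacity constraints, linear objective) stays the same.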

  2. Performance-Based Preparation of Principals: A Framework for Improvement. A Special Report of the NASSP Consortium for the Performance-Based Preparation of Principals.

    ERIC Educational Resources Information Center

    National Association of Secondary School Principals, Reston, VA.

    Preparation programs for principals should have excellent academic and performance-based components. In examining the nature of performance-based principal preparation, this report finds that school administration programs must bridge the gap between conceptual learning in the classroom and the requirements of professional practice. A number of…

  3. Principal component greenness transformation in multitemporal agricultural Landsat data

    NASA Technical Reports Server (NTRS)

    Abotteen, R. A.

    1978-01-01

    A data compression technique for multitemporal Landsat imagery which extracts phenological growth pattern information for agricultural crops is described. The principal component greenness transformation was applied to multitemporal agricultural Landsat data for information retrieval. The transformation was favorable for applications in agricultural Landsat data analysis because of its physical interpretability and its relation to the phenological growth of crops. It was also found that the first and second greenness eigenvector components define a temporal small-grain trajectory and nonsmall-grain trajectory, respectively.

  4. VANDENBERG AFB, CALIF. - In the NASA spacecraft processing facility on North Vandenberg Air Force Base, Dr. Francis Everitt, principal investigator, and Brad Parkinson, co-principal investigator, both from Stanford University, hold one of the small gyroscopes used in the Gravity Probe B spacecraft. The GP-B towers behind them. The Gravity Probe B mission is a relativity experiment developed by NASA’s Marshall Space Flight Center, Stanford University and Lockheed Martin. The spacecraft will test two extraordinary predictions of Albert Einstein’s general theory of relativity that he advanced in 1916: the geodetic effect (how space and time are warped by the presence of the Earth) and frame dragging (how Earth’s rotation drags space and time around with it). Gravity Probe B consists of four sophisticated gyroscopes that will provide an almost perfect space-time reference system. The mission will look in a precision manner for tiny changes in the direction of spin.

    NASA Image and Video Library

    2003-11-10


  5. Affordable Earth Observatories for Developing Countries

    NASA Astrophysics Data System (ADS)

    Meurer, R. H.

    Traditionally, high cost has been the principal impediment for developing nations desiring to pursue space programs. More particularly, the benefits derivable from a space system have been less than adequate to justify the investment required. Chief among the causes has been the inability of the system to produce results with sufficient direct economic value to the peoples of their countries. Over the past 15 years, however, "the Microspace Revolution" has resulted in dramatic reductions in the cost of space systems, while at the same time technology has improved to provide greater capabilities in the smallest micro- and nano-class satellites. Because of these advances, it behooves developing nations to reevaluate space as an option for their national development. This paper summarizes two new micro-satellite concepts, NanoObservatory™ and MicroObservatory™, that offer the promise of a dedicated Earth remote sensing capability at costs comparable to or less than simply buying data from the best-known large systems, Landsat and SPOT. Each system is defined both by its observation capabilities and by the technical parameters of the system's design. Moreover, the systems are characterized in terms of other potential benefits to developing economies, i.e., education of a technical workforce or applications of Earth imagery in solving national needs. Comparisons are provided with more traditional Earth observing satellites. NanoObservatory™ is principally intended to serve as a developmental system to build general technical expertise in space technology and Earth observation. MicroObservatory™ takes the next step by focusing on a more sophisticated optical imaging camera while keeping the spacecraft systems simple and affordable. For both programs, AeroAstro is working with nonprofit institutions to develop a corresponding program of technical participation with the nations that elect to pursue such programs. Dependent upon current capabilities, this might include the actual manufacture of selected components of the system. The status and development plans of both Observatories are discussed along with the established partnerships.

  6. Prediction of genomic breeding values for dairy traits in Italian Brown and Simmental bulls using a principal component approach.

    PubMed

    Pintus, M A; Gaspa, G; Nicolazzi, E L; Vicario, D; Rossoni, A; Ajmone-Marsan, P; Nardone, A; Dimauro, C; Macciotta, N P P

    2012-06-01

    The large number of markers available compared with phenotypes represents one of the main issues in genomic selection. In this work, principal component analysis was used to reduce the number of predictors for calculating genomic breeding values (GEBV). Bulls of 2 cattle breeds farmed in Italy (634 Brown and 469 Simmental) were genotyped with the 54K Illumina beadchip (Illumina Inc., San Diego, CA). After data editing, 37,254 and 40,179 single nucleotide polymorphisms (SNP) were retained for Brown and Simmental, respectively. Principal component analysis carried out on the SNP genotype matrix extracted 2,257 and 3,596 new variables in the 2 breeds, respectively. Bulls were sorted by birth year to create reference and prediction populations. The effect of principal components on deregressed proofs in reference animals was estimated with a BLUP model. Results were compared with those obtained by using SNP genotypes as predictors with either the BLUP or Bayes_A method. Traits considered were milk, fat, and protein yields, fat and protein percentages, and somatic cell score. The GEBV were obtained for prediction population by blending direct genomic prediction and pedigree indexes. No substantial differences were observed in squared correlations between GEBV and EBV in prediction animals between the 3 methods in the 2 breeds. The principal component analysis method allowed for a reduction of about 90% in the number of independent variables when predicting direct genomic values, with a substantial decrease in calculation time and without loss of accuracy. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
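
    The reduction step described above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic stand-in data (200 "bulls" and 1,000 "SNPs" rather than the study's hundreds of animals and ~40,000 markers), not the authors' BLUP/Bayes_A pipeline: the genotype matrix is compressed to principal component scores, and the phenotype is regressed on the scores instead of the raw markers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 animals x 1000 SNP genotypes coded 0/1/2.
n, p = 200, 1000
X = rng.integers(0, 3, size=(n, p)).astype(float)
beta_true = rng.normal(0, 1, size=50)
y = X[:, :50] @ beta_true + rng.normal(0, 5, size=n)  # synthetic phenotype

# Center genotypes, then extract principal components via SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep enough components to explain ~99% of genotype variance; at most
# n components exist, so the predictor count drops sharply below p.
var_explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_explained, 0.99)) + 1

scores = U[:, :k] * s[:k]            # n x k PC scores replace the n x p SNPs
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)

# Fitted genomic values from the reduced model.
gebv = scores @ coef + y.mean()
```

    The point mirrored here is the one in the abstract: the number of independent variables falls by an order of magnitude while the fitted values remain informative.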

  7. Identifying sources of emerging organic contaminants in a mixed use watershed using principal components analysis.

    PubMed

    Karpuzcu, M Ekrem; Fairbairn, David; Arnold, William A; Barber, Brian L; Kaufenberg, Elizabeth; Koskinen, William C; Novak, Paige J; Rice, Pamela J; Swackhamer, Deborah L

    2014-01-01

    Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in Southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (PC1) was attributed to urban wastewater-derived sources, including municipal wastewater and residential septic tank effluents, while Principal Component 2 (PC2) was attributed to agricultural sources. The variances of the concentrations of cotinine, DEET and the prescription drugs carbamazepine, erythromycin and sulfamethoxazole were best explained by PC1, while the variances of the concentrations of the agricultural pesticides atrazine, metolachlor and acetochlor were best explained by PC2. Mixed use compounds carbaryl, iprodione and daidzein did not specifically group with either PC1 or PC2. Furthermore, despite the fact that caffeine and acetaminophen have been historically associated with human use, they could not be attributed to a single dominant land use category (e.g., urban/residential or agricultural). Contributions from septic systems did not clarify the source for these two compounds, suggesting that additional sources, such as runoff from biosolid-amended soils, may exist. Based on these results, PCA may be a useful way to broadly categorize the sources of new and previously uncharacterized emerging contaminants or may help to clarify transport pathways in a given area. Acetaminophen and caffeine were not ideal markers for urban/residential contamination sources in the study area and may need to be reconsidered as such in other areas as well.

  8. Sparse modeling of spatial environmental variables associated with asthma

    PubMed Central

    Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.

    2014-01-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437
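
    The sparse dimension-reduction step can be illustrated with a simple soft-thresholded power method on synthetic data. This is only one way to compute a sparse principal component; the SASEA method, variable names, and dimensions here are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for block-group variables: 300 areas x 40 variables,
# where only the first 5 variables share a common latent factor.
n, p = 300, 40
latent = rng.normal(size=n)
X = rng.normal(size=(n, p))
X[:, :5] += 2.0 * latent[:, None]
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / n  # sample covariance matrix

def sparse_pc(C, lam, iters=200):
    """First sparse principal component via a soft-thresholded power method."""
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(iters):
        w = C @ v
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # soft-threshold
        norm = np.linalg.norm(w)
        if norm == 0:
            break
        v = w / norm
    return v

v = sparse_pc(C, lam=1.0)
support = np.flatnonzero(np.abs(v) > 1e-8)  # variables kept by the sparse PC
```

    The sparse component loads only on the handful of variables that actually share structure, which is what makes the downstream regression interpretable.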

  9. Sparse modeling of spatial environmental variables associated with asthma.

    PubMed

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Experimental Investigation of Principal Residual Stress and Fatigue Performance for Turned Nickel-Based Superalloy Inconel 718.

    PubMed

    Hua, Yang; Liu, Zhanqiang

    2018-05-24

    Residual stresses of the turned Inconel 718 surface along its axial and circumferential directions affect the fatigue performance of machined components. However, it has not been clear whether the axial and circumferential directions coincide with the principal residual stress directions. The direction of the maximum principal residual stress is crucial for the machined component's service life. The present work focuses on determining the direction and magnitude of the principal residual stress and investigating its influence on the fatigue performance of turned Inconel 718. The turning experiments show that the principal residual stress magnitude is much higher than the surface residual stress. In addition, both the principal residual stress and the surface residual stress increase significantly as the feed rate increases. In the fatigue tests, when the direction of the maximum principal residual stress increased by 7.4%, the fatigue life decreased by 39.4%; when the maximum principal residual stress magnitude diminished by 17.9%, the fatigue life increased by 83.6%. The maximum principal residual stress thus has a preponderant influence on fatigue performance compared with the surface residual stress, and can be considered a prime indicator for evaluating the influence of residual stress on the fatigue performance of turned Inconel 718.

  11. Principal component analysis for designed experiments.

    PubMed

    Konishi, Tomokazu

    2015-01-01

    Principal component analysis is used to summarize matrix data, such as those found in transcriptome, proteome, metabolome, or medical examination datasets, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components depend on the particular data set, the components are valid only within that data set. Second, the method is sensitive to experimental noise and bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight and independence to all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. The effects of these options were observed in microarray experiments and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of groups in the experiments. As was observed, the scaling of the components and the sharing of axes enabled comparisons of the components across experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. Together, these introduced options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models specifying each of the axes.
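
    The core idea of pre-arranged, shared axes can be sketched as follows (a rough illustration on synthetic data, not the author's implementation): the axes, the rotation center, and the score scaling are all fixed from a designed training set, and unknown samples are then projected onto those same axes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: two known groups separated along one variable.
p = 30
shift = np.zeros(p)
shift[0] = 4.0
group_a = rng.normal(size=(20, p))
group_b = rng.normal(size=(20, p)) + shift
train = np.vstack([group_a, group_b])

# Identify the principal axes and the rotation center from training data only.
center = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - center, full_matrices=False)
axes = Vt[:2]                                 # pre-arranged axes, reused later

def project(samples):
    """Score samples on the shared axes, scaled so training PCs have unit variance."""
    scores = (samples - center) @ axes.T
    return scores / (s[:2] / np.sqrt(len(train) - 1))

# "Unknown" samples drawn from group B score on group B's side of axis 1.
unknown = rng.normal(size=(5, p)) + shift
sb = project(group_b).mean(axis=0)
su = project(unknown).mean(axis=0)
```

    Because the axes and scaling never change between experiments, scores from different data sets land in the same coordinate system and can be compared directly.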

  12. Coping with Multicollinearity: An Example on Application of Principal Components Regression in Dendroecology

    Treesearch

    B. Desta Fekedulegn; J.J. Colbert; R.R., Jr. Hicks; Michael E. Schuckers

    2002-01-01

    The theory and application of principal components regression, a method for coping with multicollinearity among independent variables in analyzing ecological data, is exhibited in detail. A concrete example of the complex procedures that must be carried out in developing a diagnostic growth-climate model is provided. We use tree radial increment data taken from breast...
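
    The mechanics of principal components regression can be sketched on deliberately collinear synthetic predictors (invented data, not the paper's tree-ring series): the near-null component is dropped, the response is regressed on the remaining component scores, and the coefficients are mapped back to the original variables.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical collinear predictors: x2 is nearly a copy of x1.
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, size=n)   # severe multicollinearity
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = x1 + x2 + x3 + rng.normal(0, 0.1, size=n)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
cond = s[0] / s[-1]                      # large condition number flags collinearity

# Regress on the leading components only, then map back to predictor scale.
k = 2                                    # drop the near-null third component
scores = U[:, :k] * s[:k]
gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
beta_pcr = Vt[:k].T @ gamma              # stable coefficients in original variables
```

    Ordinary least squares on the raw predictors would split the shared signal between x1 and x2 almost arbitrarily; dropping the tiny component forces the two collinear variables to receive nearly equal, stable coefficients.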

  13. Application of Principal Component Analysis (PCA) to Reduce Multicollinearity Exchange Rate Currency of Some Countries in Asia Period 2004-2014

    ERIC Educational Resources Information Center

    Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad

    2017-01-01

    This study aims to apply principal component analysis to reduce multicollinearity among the currency exchange rates of eight Asian countries against the US dollar, including the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia), and Dollar (Singapore). It looks at yield…

  14. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension making both forward modeling and inversion algorithm more efficient.
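
    The compression idea can be illustrated with synthetic "radiances": principal components are fitted once, and subsequent work uses a handful of scores instead of thousands of channels. The dimensions and data below are invented for illustration; the actual model embeds the PC scores in a physical forward model and retrieval.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical hyperspectral radiances: 500 profiles x 2000 channels,
# generated from 5 latent spectral shapes plus small noise.
n_chan, n_latent, n_obs = 2000, 5, 500
shapes = rng.normal(size=(n_latent, n_chan))
weights = rng.normal(size=(n_obs, n_latent))
R = weights @ shapes + 0.01 * rng.normal(size=(n_obs, n_chan))

mean = R.mean(axis=0)
U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
k = 5
eof = Vt[:k]                         # basis spectra ("eigen-spectra")
scores = (R - mean) @ eof.T          # 2000 channels compressed to 5 scores

# Forward modeling and inversion operate in the small score space;
# reconstruction checks that little radiance information is lost.
R_hat = scores @ eof + mean
rel_err = np.linalg.norm(R - R_hat) / np.linalg.norm(R)
```

    When the spectra really are governed by a few latent shapes, the reconstruction error from a handful of scores stays tiny, which is what makes the score-space approach efficient.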

  15. Principal component analysis of Raman spectra for TiO2 nanoparticle characterization

    NASA Astrophysics Data System (ADS)

    Ilie, Alina Georgiana; Scarisoareanu, Monica; Morjan, Ion; Dutu, Elena; Badiceanu, Maria; Mihailescu, Ion

    2017-09-01

    The Raman spectra of anatase/rutile mixed-phase Sn-doped TiO2 nanoparticles and undoped TiO2 nanoparticles, synthesised by laser pyrolysis, with nanocrystallite dimensions varying from 8 to 28 nm, were processed with self-written software that applies Principal Component Analysis (PCA) to the measured spectra to verify the possibility of objective auto-characterization of nanoparticles from their vibrational modes. The photo-excited process of Raman scattering is very sensitive to the material characteristics, especially in the case of nanomaterials, where more properties become relevant to the vibrational behaviour. We used PCA, a statistical procedure that performs eigenvalue decomposition of the descriptive data covariance, to automatically analyse each sample's measured Raman spectrum and to infer the correlation between nanoparticle dimensions, tin and carbon concentration, and their principal component values (PCs). This type of application allows an approximation of the crystallite size, or tin concentration, only by measuring the Raman spectrum of the sample. The study of the loadings of the principal components provides information on how the vibrational modes are affected by the nanoparticle features and on the spectral regions relevant for the classification.

  16. Testing for Non-Random Mating: Evidence for Ancestry-Related Assortative Mating in the Framingham Heart Study

    PubMed Central

    Sebro, Ronnie; Hoffman, Thomas J.; Lange, Christoph; Rogus, John J.; Risch, Neil J.

    2013-01-01

    Population stratification leads to a predictable phenomenon—a reduction in the number of heterozygotes compared to that calculated assuming Hardy-Weinberg Equilibrium (HWE). We show that population stratification results in another phenomenon—an excess in the proportion of spouse-pairs with the same genotypes at all ancestrally informative markers, resulting in ancestrally related positive assortative mating. We use principal components analysis to show that there is evidence of population stratification within the Framingham Heart Study, and show that the first principal component correlates with a North-South European cline. We then show that the first principal component is highly correlated between spouses (r=0.58, p=0.0013), demonstrating that there is ancestrally related positive assortative mating among the Framingham Caucasian population. We also show that the single nucleotide polymorphisms loading most heavily on the first principal component show an excess of homozygotes within the spouses, consistent with similar ancestry-related assortative mating in the previous generation. This nonrandom mating likely affects genetic structure seen more generally in the North American population of European descent today, and decreases the rate of decay of linkage disequilibrium for ancestrally informative markers. PMID:20842694

  17. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    PubMed

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product is expected in the coming future with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production and to find out the attributes that govern much of variation in sensory scores of this product using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant (p < 0.05) difference in sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4 % of the variation in the sensory data. Factor scores of each of the four principal components which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring attributes of cham-cham that contribute most to its sensory acceptability.

  18. Statistical analysis of major ion and trace element geochemistry of water, 1986-2006, at seven wells transecting the freshwater/saline-water interface of the Edwards Aquifer, San Antonio, Texas

    USGS Publications Warehouse

    Mahler, Barbara J.

    2008-01-01

    The statistical analyses taken together indicate that the geochemistry at the freshwater-zone wells is more variable than that at the transition-zone wells. The geochemical variability at the freshwater-zone wells might result from dilution of ground water by meteoric water. This is indicated by relatively constant major ion molar ratios; a preponderance of positive correlations between SC, major ions, and trace elements; and a principal components analysis in which the major ions are strongly loaded on the first principal component. Much of the variability at three of the four transition-zone wells might result from the use of different laboratory analytical methods or reporting procedures during the period of sampling. This is reflected by a lack of correlation between SC and major ion concentrations at the transition-zone wells and by a principal components analysis in which the variability is fairly evenly distributed across several principal components. The statistical analyses further indicate that, although the transition-zone wells are less well connected to surficial hydrologic conditions than the freshwater-zone wells, there is some connection but the response time is longer. 

  19. Edge Principal Components and Squash Clustering: Using the Special Structure of Phylogenetic Placement Data for Sample Comparison

    PubMed Central

    Matsen IV, Frederick A.; Evans, Steven N.

    2013-01-01

    Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415

  20. Time Management Ideas for Assistant Principals.

    ERIC Educational Resources Information Center

    Cronk, Jerry

    1987-01-01

    Prioritizing the use of time, effective communication, delegating authority, having detailed job descriptions, and good secretarial assistance are important components of time management for assistant principals. (MD)

  1. The principal components model: a model for advancing spirituality and spiritual care within nursing and health care practice.

    PubMed

    McSherry, Wilfred

    2006-07-01

    The aim of this study was to generate a deeper understanding of the factors and forces that may inhibit or advance the concepts of spirituality and spiritual care within both nursing and health care. This manuscript presents a model that emerged from a qualitative study using grounded theory. Implementation and use of this model may assist all health care practitioners and organizations to advance the concepts of spirituality and spiritual care within their own sphere of practice. The model has been termed the principal components model because participants identified six components as being crucial to the advancement of spiritual health care. Grounded theory was used, meaning that data collection and analysis were concurrent. Theoretical sampling was used to develop the emerging theory. These processes, along with data analysis and open, axial and theoretical coding, led to the identification of a core category and the construction of the principal components model. Fifty-three participants (24 men and 29 women) were recruited and all consented to be interviewed. The sample included nurses (n=24), chaplains (n=7), a social worker (n=1), an occupational therapist (n=1), physiotherapists (n=2), patients (n=14) and the public (n=4). The investigation was conducted in three phases to substantiate the emerging theory and the development of the model. The principal components model contained six components: individuality, inclusivity, integrated, inter/intra-disciplinary, innate and institution. A great deal has been written on the concepts of spirituality and spiritual care. However, rhetoric alone will not remove some of the intrinsic and extrinsic barriers that are inhibiting the advancement of the spiritual dimension in terms of theory and practice.
An awareness of and adherence to the principal components model may assist nurses and health care professionals to engage with and overcome some of the structural, organizational, political and social variables that are impacting upon spiritual care.

  2. Tree-space statistics and approximations for large-scale analysis of anatomical trees.

    PubMed

    Feragen, Aasa; Owen, Megan; Petersen, Jens; Wille, Mathilde M W; Thomsen, Laura H; Dirksen, Asger; de Bruijne, Marleen

    2013-01-01

    Statistical analysis of anatomical trees is hard to perform due to differences in the topological structure of the trees. In this paper we define statistical properties of leaf-labeled anatomical trees with geometric edge attributes by considering the anatomical trees as points in the geometric space of leaf-labeled trees. This tree-space is a geodesic metric space where any two trees are connected by a unique shortest path, which corresponds to a tree deformation. However, tree-space is not a manifold, and the usual strategy of performing statistical analysis in a tangent space and projecting onto tree-space is not available. Using tree-space and its shortest paths, a variety of statistical properties, such as mean, principal component, hypothesis testing and linear discriminant analysis can be defined. For some of these properties it is still an open problem how to compute them; others (like the mean) can be computed, but efficient alternatives are helpful in speeding up algorithms that use means iteratively, like hypothesis testing. In this paper, we take advantage of a very large dataset (N = 8016) to obtain computable approximations, under the assumption that the data trees parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than healthy ones. Software is available from http://image.diku.dk/aasa/software.php.

  3. Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Bożek, Piotr

    2018-03-01

    The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a double differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb + Pb collisions at sqrt(s_NN) = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v_2^2 and v_4, of v_2 v_3 and v_5, or of v_2^3, v_3^3 and v_6. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.
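The principal component decomposition used here is, at its core, the eigendecomposition of a symmetric correlation matrix, with the dominant modes given by the leading eigenpairs. The following minimal numpy sketch (on a synthetic symmetric matrix, not the flow data) shows that step:

```python
import numpy as np

# Toy symmetric positive semi-definite "correlation matrix"; the physics
# content of the abstract is not reproduced, only the decomposition step.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
V = A @ A.T

# Principal component decomposition: eigenvectors ranked by eigenvalue.
eigvals, eigvecs = np.linalg.eigh(V)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The dominant mode is the leading eigenpair; summing
# lambda_i * v_i v_i^T over all modes recovers the matrix exactly.
reconstruction = sum(l * np.outer(v, v) for l, v in zip(eigvals, eigvecs.T))
```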

  4. Analysis and improvement measures of flight delay in China

    NASA Astrophysics Data System (ADS)

    Zang, Yuhang

    2017-03-01

    This paper first establishes a principal component regression model to analyze flight-delay data quantitatively, using principal component analysis to extract three principal component factors behind flight delays. The least squares method is then applied to these factors, and substitution yields the regression equation, which shows that the main cause of flight delays is the airlines themselves, followed by weather and traffic. To address the controllable aspect of the problem, traffic flow control, an adaptive genetic queuing model is established for the runway terminal area. An optimization method is then constructed for fifteen aircraft landing simultaneously on the three runways of Beijing Capital International Airport; comparing the results with the existing FCFS (first-come, first-served) algorithm demonstrates the superiority of the model.
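Principal component regression of the kind described, PCA on the predictors followed by least squares on the retained component scores, can be sketched as follows. The data are synthetic (five correlated predictors of rank three, so three components carry all the signal), and keeping three components simply mirrors the abstract; none of the variable names refer to the paper's actual delay factors.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the delay data: five correlated predictors
# built from three latent factors.
Z = rng.normal(size=(100, 3))
W = rng.normal(size=(3, 5))
X = Z @ W
y = X @ np.array([3.0, 1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.1, size=100)

# Step 1: PCA of the centered predictors via SVD.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                      # keep three principal component factors
T = Xc @ Vt[:k].T          # component scores

# Step 2: least squares on the scores, then map the coefficients
# back to the original predictors.
gamma, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
beta = Vt[:k].T @ gamma    # per-predictor regression coefficients
```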

  5. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrixes, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments carried out on a palmprint database show that this method is more robust against position and illumination changes in palmprint images and achieves a higher palmprint recognition rate.
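The final step, assigning the class whose training columns reconstruct the test sample with the smallest residual, can be illustrated with a deliberately simplified sketch. It substitutes plain least squares for the paper's subspace orthogonal matching pursuit and uses synthetic feature vectors; the function and data are illustrative only.

```python
import numpy as np

def classify_by_residual(dictionary, labels, test):
    """Residual-based classification sketch: reconstruct the test
    sample from each class's training columns (least squares here,
    not the paper's subspace orthogonal matching pursuit) and pick
    the class with the smallest reconstruction residual."""
    best, best_res = None, np.inf
    for c in set(labels):
        cols = [i for i, l in enumerate(labels) if l == c]
        Dc = dictionary[:, cols]
        coef, *_ = np.linalg.lstsq(Dc, test, rcond=None)
        res = np.linalg.norm(test - Dc @ coef)
        if res < best_res:
            best, best_res = c, res
    return best

rng = np.random.default_rng(2)
# Two hypothetical classes with clearly different feature locations.
D = np.hstack([rng.normal(loc=0.0, size=(20, 5)),
               rng.normal(loc=3.0, size=(20, 5))])
labels = [0] * 5 + [1] * 5
test = rng.normal(loc=3.0, size=20)   # drawn like class 1
pred = classify_by_residual(D, labels, test)
```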

  6. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, the chromatography analysis method is utilized to construct an evaluation index model of the low-voltage distribution network. Based on principal component analysis and the logarithmic distribution characteristic of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm decorrelates and reduces the dimensions of the evaluation model, and the resulting comprehensive score has a better degree of dispersion. Because the comprehensive scores of the courts are concentrated, a clustering method is adopted to analyse them, realizing a stratified evaluation of the courts. An example is given to verify the objectivity and scientificity of the evaluation method.
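The logarithmic centralization idea, taking logs of heavy-tailed index data before centering and applying PCA, might look like the following sketch; the log-normal data and array names are ours, not the paper's actual indices.

```python
import numpy as np

rng = np.random.default_rng(3)
# Index data with a roughly log-normal (heavy right-tailed) distribution,
# as described for the distribution-network indices (synthetic values).
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 4))

# Logarithmic centralization: log first, then center, so the heavy-tailed
# indices become roughly symmetric before PCA.
L = np.log(X)
Lc = L - L.mean(axis=0)

# Standard PCA on the log-centered data.
cov = np.cov(Lc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
scores = Lc @ eigvecs[:, ::-1]                  # components, decreasing variance
explained = eigvals[::-1] / eigvals.sum()       # explained-variance ratios
```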

  7. A comparative study of structural and conformational properties of casein kinase-1 isoforms: insights from molecular dynamics and principal component analysis.

    PubMed

    Singh, Surya Pratap; Gupta, Dwijendra K

    2015-04-21

    The Wnt signaling pathway regulates several developmental processes in humans; recently, however, this pathway has been associated with the development of different types of cancers. Casein kinase-1 (CK1) constitutes a family of serine-threonine protein kinases; various members of this family participate in the Wnt signal transduction pathway and serve as molecular switches for it. Of the six known isoforms of CK1 in humans, at least three (alpha, delta and epsilon) have been reported as oncogenic. The development of common therapeutics against these kinases is an arduous task unless we have detailed information on their tertiary structures and conformational properties. In the present work, the dynamical and conformational properties of each of the three isoforms of CK1 are explored through molecular dynamics (MD) simulations. The conformational space distribution of backbone atoms is evaluated using principal component analysis of the MD data, further validated on the basis of the potential energy surface. Based on these analyses, it is suggested that the conformational subspace shifts upon ligand binding and guides the kinase action of the CK1 isoforms. As a first effort to concurrently study all three isoforms of CK1, this paper provides a structural basis for the development of common anticancer therapeutics against them. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Plant Invasions in China – Challenges and Chances

    PubMed Central

    Axmacher, Jan C.; Sang, Weiguo

    2013-01-01

    Invasive species cause serious environmental and economic harm and threaten global biodiversity. We set out to investigate how quickly invasive plant species are currently spreading in China and how their resulting distribution patterns are linked to socio-economic and environmental conditions. A comparison of the invasive plant species density (log species/log area) reported in 2008 with current data shows that invasive species were originally highly concentrated in the wealthy, southeastern coastal provinces of China, but they are currently rapidly spreading inland. Linear regression models based on the species density and turnover of invasive plants as dependent parameters and principal components representing key socio-economic and environmental parameters as predictors indicate strong positive links between invasive plant density and the overall phytodiversity and associated climatic parameters. Principal components representing socio-economic factors and endemic plant density also show significant positive links with invasive plant density. Urgent control and eradication measures are needed in China's coastal provinces to counteract the rapid inland spread of invasive plants. Strict controls of imports through seaports need to be accompanied by similarly strict controls of the developing horticultural trade and underpinned by awareness campaigns for China's increasingly affluent population to limit the arrival of new invaders. Furthermore, China needs to fully utilize its substantial native phytodiversity, rather than relying on exotics, in current large-scale afforestation projects and in the creation of urban green spaces. PMID:23691164

  9. Initial proposition of kinematics model for selected karate actions analysis

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Koptyra, Katarzyna; Ogiela, Marek R.

    2017-03-01

    The motivation for this paper is to propose and initially evaluate two new kinematics models developed to describe motion capture (MoCap) data of karate techniques. We developed this novel proposition to create a model capable of describing actions recorded with both multimedia and professional MoCap hardware. For evaluation we used 25-joint recordings of karate techniques acquired with Kinect version 2, consisting of MoCap recordings of two professional sport (black belt) instructors and masters of Oyama Karate. The following actions were selected for initial analysis: left-handed furi-uchi punch, right-leg hiza-geri kick, right-leg yoko-geri kick and left-handed jodan-uke block. Based on our evaluation, both proposed kinematics models appear to be convenient methods for describing karate actions. Of the two proposed variable models, the global one seems more useful for further work, because for the punches considered its variables are less correlated and easier to interpret thanks to the single reference coordinate system. Principal component analysis also proved to be a reliable way to examine the quality of the kinematics models, and plotting the variables in principal component space clearly presents the dependencies between them.

  10. Construction and comparison of gene co-expression networks shows complex plant immune responses

    PubMed Central

    López, Camilo; López-Kleine, Liliana

    2014-01-01

    Gene co-expression networks (GCNs) are graphic representations that depict the coordinated transcription of genes in response to certain stimuli. GCNs provide functional annotations of genes whose function is unknown and are further used in studies of translational functional genomics among species. In this work, a methodology for the reconstruction and comparison of GCNs is presented. This approach was applied using gene expression data that were obtained from immunity experiments in Arabidopsis thaliana, rice, soybean, tomato and cassava. After the evaluation of diverse similarity metrics for the GCN reconstruction, we recommended the mutual information coefficient measurement and a clustering-coefficient-based method for similarity threshold selection. To compare GCNs, we proposed a multivariate approach based on Principal Component Analysis (PCA). Branches of plant immunity that were exemplified by each experiment were analyzed in conjunction with the PCA results, suggesting both the robustness and the dynamic nature of the cellular responses. The dynamics of molecular plant responses produced networks with different characteristics that are distinguishable using our methodology. The comparison of GCNs from plant pathosystems showed that, in response to similar pathogens, plants could activate conserved signaling pathways. The results confirmed that the closeness of GCNs projected onto the principal component space is indicative of similarity among GCNs. This can also be used to understand global patterns of events triggered during plant immune responses. PMID:25320678

  11. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted increasing attention. This paper investigates a face recognition system comprising face detection, feature extraction and face recognition, focusing on the theory and key technology of the various preprocessing methods used in face detection and on how different preprocessing choices affect recognition results under the kernel principal component analysis (KPCA) method. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with the erosion and dilation of the opening and closing operations and with an illumination compensation method, and then analyzed with the KPCA-based face recognition method; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of PCA makes the extracted features represent the original image information better, because it performs nonlinear feature extraction, and can thus achieve a higher recognition rate. In the image preprocessing stage, we found that different operations produce different results and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function affects the recognition result.
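Kernel PCA with a polynomial kernel, whose degree the abstract notes can affect recognition, can be sketched generically as follows. This is a minimal KPCA computation on synthetic vectors standing in for flattened face features; it is not the paper's pipeline, and the kernel form k(x, y) = (x·y + 1)^d is one common convention.

```python
import numpy as np

def kernel_pca(X, degree=2, n_components=2):
    """Minimal kernel PCA sketch with a polynomial kernel
    k(x, y) = (x . y + 1)^degree. The kernel matrix is centered in
    feature space and its leading eigenvectors give the projections."""
    K = (X @ X.T + 1.0) ** degree
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Scale eigenvectors by 1/sqrt(eigenvalue), as in standard KPCA.
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 8))     # stand-in for flattened face features
Z = kernel_pca(X, degree=3, n_components=2)
```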

  12. Magnetorheological Fluids-Earth Applications Video

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Principal investigator Alice Gast describes magnetorheological (MR) fluids and how they differ from other fluids, such as blood or milk. Gast is the principal investigator for Investigating the Structure of Paramagnetic Aggregates from Colloidal Emulsions (InSPACE), which was conducted by the Expedition 6 crew onboard the International Space Station (ISS). The goal of InSPACE is to determine the true three-dimensional (3-D) low-energy (equilibrium) structure of MR fluids in a periodically interrupted magnetic field. Applications for MR fluids could include electrical clutches, brakes, robotic devices, seat suspension systems, and shock absorbers.

  13. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy.

    PubMed

    Jesse, Stephen; Kalinin, Sergei V

    2009-02-25

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
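The de-noise-and-compress use of PCA described here amounts to truncating the SVD of the centered data to the first few components. A minimal sketch on a synthetic spectral-imaging stack (two underlying response components plus noise; all sizes and shapes are illustrative, not from the paper's microscopy data):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic stack: 500 pixels, 64-point spectra built from two
# underlying response components plus measurement noise.
t = np.linspace(0, 1, 64)
components = np.vstack([np.sin(2 * np.pi * t), np.exp(-3 * t)])
weights = rng.normal(size=(500, 2))
data = weights @ components + 0.05 * rng.normal(size=(500, 64))

# PCA via SVD of the centered data; truncating to the first k
# components de-noises and compresses the stack in one step.
mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 2
denoised = mean + (U[:, :k] * s[:k]) @ Vt[:k]

# Fraction of variance captured by the first two components.
var_ratio = (s[:k] ** 2).sum() / (s ** 2).sum()
```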

  14. The Artistic Nature of the High School Principal.

    ERIC Educational Resources Information Center

    Ritschel, Robert E.

    The role of high school principals can be compared to that of composers of music. For instance, composers put musical components together into a coherent whole; similarly, principals organize high schools by establishing class schedules, assigning roles to subordinates, and maintaining a safe and orderly learning environment. Second, composers…

  15. Collaborative Relationships between Principals and School Counselors: Facilitating a Model for Developing a Working Alliance

    ERIC Educational Resources Information Center

    Odegard-Koester, Melissa A.; Watkins, Paul

    2016-01-01

    The working relationship between principals and school counselors has received some attention in the literature; however, little empirical research exists that specifically examines the components that facilitate a collaborative working relationship between the principal and school counselor. This qualitative case study examined the unique…

  16. The Retention and Attrition of Catholic School Principals

    ERIC Educational Resources Information Center

    Durow, W. Patrick; Brock, Barbara L.

    2004-01-01

    This article reports the results of a study of the retention of principals in Catholic elementary and secondary schools in one Midwestern diocese. Findings revealed that personal needs, career advancement, support from employer, and clearly defined role expectations were key factors in principals' retention decisions. A profile of components of…

  17. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of the high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA), which operate in the high-dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. KSC-2012-4551

    NASA Image and Video Library

    2012-08-20

    CAPE CANAVERAL, Fla. - A mission science briefing was held at NASA Kennedy Space Center’s Press Site in Florida for the Radiation Belt Storm Probes, or RBSP, mission. From left, are George Diller, public affairs specialist and news conference moderator, Mona Kessel, RBSP program scientist from NASA Headquarters in Washington, Nicola Fox, RBSP deputy project scientist at Johns Hopkins Applied Physics Laboratory in Laurel, Md., Craig Kletzing, principal investigator from the University of Iowa, Harlan Spence, principal investigator from the University of New Hampshire, and Lou Lanzerotti, principal investigator from the New Jersey Institute of Technology. NASA’s RBSP mission will help us understand the sun’s influence on Earth and near-Earth space by studying the Earth’s radiation belts on various scales of space and time. RBSP will begin its mission of exploration of Earth’s Van Allen radiation belts and the extremes of space weather after its launch aboard an Atlas V rocket. Launch is targeted for Aug. 24. For more information, visit http://www.nasa.gov/rbsp. Photo credit: NASA/Glenn Benson

  19. KSC-2012-4548

    NASA Image and Video Library

    2012-08-20

    CAPE CANAVERAL, Fla. - A mission science briefing was held at NASA Kennedy Space Center’s Press Site in Florida for the Radiation Belt Storm Probes, or RBSP, mission. From left, are George Diller, public affairs specialist and news conference moderator, Mona Kessel, RBSP program scientist from NASA Headquarters in Washington, Nicola Fox, RBSP deputy project scientist at Johns Hopkins Applied Physics Laboratory in Laurel, Md., Craig Kletzing, principal investigator from the University of Iowa, Harlan Spence, principal investigator from the University of New Hampshire, and Lou Lanzerotti, principal investigator from the New Jersey Institute of Technology. NASA’s RBSP mission will help us understand the sun’s influence on Earth and near-Earth space by studying the Earth’s radiation belts on various scales of space and time. RBSP will begin its mission of exploration of Earth’s Van Allen radiation belts and the extremes of space weather after its launch aboard an Atlas V rocket. Launch is targeted for Aug. 24. For more information, visit http://www.nasa.gov/rbsp. Photo credit: NASA/Glenn Benson

  20. KSC-2012-4549

    NASA Image and Video Library

    2012-08-20

    CAPE CANAVERAL, Fla. - A mission science briefing was held at NASA Kennedy Space Center’s Press Site in Florida for the Radiation Belt Storm Probes, or RBSP, mission. From left, are George Diller, public affairs specialist and news conference moderator, Mona Kessel, RBSP program scientist from NASA Headquarters in Washington, Nicola Fox, RBSP deputy project scientist at Johns Hopkins Applied Physics Laboratory in Laurel, Md., Craig Kletzing, principal investigator from the University of Iowa, Harlan Spence, principal investigator from the University of New Hampshire, and Lou Lanzerotti, principal investigator from the New Jersey Institute of Technology. NASA’s RBSP mission will help us understand the sun’s influence on Earth and near-Earth space by studying the Earth’s radiation belts on various scales of space and time. RBSP will begin its mission of exploration of Earth’s Van Allen radiation belts and the extremes of space weather after its launch aboard an Atlas V rocket. Launch is targeted for Aug. 24. For more information, visit http://www.nasa.gov/rbsp. Photo credit: NASA/Glenn Benson

  1. The Psychometric Assessment of Children with Learning Disabilities: An Index Derived from a Principal Components Analysis of the WISC-R.

    ERIC Educational Resources Information Center

    Lawson, J. S.; Inglis, James

    1984-01-01

    A learning disability index (LDI) for the assessment of intellectual deficits on the Wechsler Intelligence Scale for Children-Revised (WISC-R) is described. The Factor II score coefficients derived from an unrotated principal components analysis of the WISC-R normative data, in combination with the individual's scaled scores, are used for this…

  2. International programs

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Brief summaries are given of NASA's participation in international space programs. This participation can be categorized in five principal areas: manned space flight, space sciences, space applications, ground support of space operations, and cooperative international aeronautics research. All projects are carried out on a cooperative or reimbursable basis.

  3. Perturbation analyses of intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Koyama, Yohei M.; Kobayashi, Tetsuya J.; Ueda, Hiroki R.

    2011-08-01

    Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because the intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. To compare the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and the PPII (polyproline II) + β states) for larger cutoff lengths, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, for large cutoff values, the DIPA eigenvalues converge faster than those for the IPA, and the top two DIPA principal components clearly identify the three states. Using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. Since the DIPA improves state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method than the IPA. To test the feasibility of the DIPA for larger molecules, we apply it to the folding of the ten-residue chignolin in explicit water. The top three principal components identify the four states (the native state, two misfolded states, and the unfolded state), and their corresponding eigenfunctions identify the chignolin-water interactions important to each state. Thus, the DIPA provides a practical method to identify conformational states and their corresponding important intermolecular interactions with distance information.

  4. Perturbation analyses of intermolecular interactions.

    PubMed

    Koyama, Yohei M; Kobayashi, Tetsuya J; Ueda, Hiroki R

    2011-08-01

    Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because the intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. To compare the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and the PPII (polyproline II) + β states) for larger cutoff lengths, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, for large cutoff values, the DIPA eigenvalues converge faster than those for the IPA, and the top two DIPA principal components clearly identify the three states. Using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. Since the DIPA improves state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method than the IPA. To test the feasibility of the DIPA for larger molecules, we apply it to the folding of the ten-residue chignolin in explicit water. The top three principal components identify the four states (the native state, two misfolded states, and the unfolded state), and their corresponding eigenfunctions identify the chignolin-water interactions important to each state. Thus, the DIPA provides a practical method to identify conformational states and their corresponding important intermolecular interactions with distance information.

  5. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented as a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes posing the highest potential risks to the final product, rather than making decisions based on the large space of component and failure-mode data. The mathematics of the proposed method is explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
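The matrix manipulation described, a PCA decomposition of a component-by-failure-mode matrix into a low-dimensional representation in which similar failure behavior appears as proximity, can be sketched as follows; the incidence counts below are invented for illustration and are not taken from the accident reports.

```python
import numpy as np

# Hypothetical component-by-failure-mode matrix (rows: components,
# columns: failure modes; the counts are illustrative only).
M = np.array([
    [4, 0, 1, 0],
    [3, 1, 0, 0],
    [0, 5, 0, 2],
    [0, 4, 1, 1],
    [1, 0, 6, 0],
], dtype=float)

# PCA decomposition of the centered matrix via SVD: each component is
# expressed as a linear combination of a few failure-mode "directions".
Mc = M - M.mean(axis=0)
U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
k = 2
coords = U[:, :k] * s[:k]    # low-dimensional coordinates per component

# Similar failure behavior now shows up as proximity in k dimensions:
# rows 2 and 3 are both dominated by the second failure mode, while
# row 4 is dominated by the third.
dist_similar = np.linalg.norm(coords[2] - coords[3])
dist_dissimilar = np.linalg.norm(coords[2] - coords[4])
```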

  6. [Role of school lunch in primary school education: a trial analysis of school teachers' views using an open-ended questionnaire].

    PubMed

    Inayama, T; Kashiwazaki, H; Sakamoto, M

    1998-12-01

    We attempted a synthetic analysis of teachers' viewpoints on health education and the roles of school lunch in primary education. For this purpose, a survey using an open-ended questionnaire consisting of eight items relating to health education in the school curriculum was carried out among 100 teachers at ten public primary schools. Subjects were asked to describe their views regarding the following eight items: 1) health and physical guidance education, 2) school lunch guidance education, 3) pupils' attitudes toward their own health and nutrition, 4) health education, 5) the role of school lunch in education, 6) future subjects of health education, 7) classroom lessons related to school lunch, and 8) guidance for pupils with unbalanced diets and food avoidance. Subjects described their own opinions on an open-ended questionnaire response sheet. Keywords in the individual descriptions were selected, rearranged and classified into categories according to their meanings, and each selected keyword was used as a dummy variable. To assess individual opinions synthetically, a principal component analysis was then applied to the variables collected from the teachers' descriptions, and four factors were extracted. The results were as follows. 1) The four factors obtained from the repeated principal component analysis were summarized as: roles of health education and the school lunch program (the first principal component), cooperation with nurse-teachers and those in charge of lunch service (the second principal component), time allocation for health education in home-room activity and lunch time (the third principal component), and contents of health education and school lunch guidance and their future plan (the fourth principal component).
    2) Teachers regarded the roles of school lunch in primary education as providing a daily supply of nutrients, teaching table manners, building friendships with classmates, providing health education and food and nutrition education, and developing food preferences through eating lunch together with classmates. 3) A significant positive correlation was observed between teachers' opinion that the role of school lunch is to provide an opportunity to learn good behavior for food preferences through eating lunch together with classmates and the first principal component, "roles of health education and school lunch program" (r = 0.39, p < 0.01). The variable "the role of school lunch is health education and food and nutrition education" showed a positive correlation with the principal component "cooperation with nurse-teachers and those in charge of lunch service" (r = 0.27, p < 0.01). Interesting relationships were that teachers with longer educational experience tended to place importance on health education and food and nutrition education as the role of school lunch, and that male teachers regarded the roles of school lunch as more important for future primary education than female teachers did.

  7. Phenomenology of mixed states: a principal component analysis study.

    PubMed

    Bertschy, G; Gervasoni, N; Favre, S; Liberek, C; Ragama-Pardos, E; Aubry, J-M; Gex-Fabry, M; Dayer, A

    2007-12-01

    To contribute to the definition of external and internal limits of mixed states and study the place of dysphoric symptoms in the psychopathology of mixed states. One hundred and sixty-five inpatients with major mood episodes were diagnosed as presenting with either pure depression, mixed depression (depression plus at least three manic symptoms), full mixed state (full depression and full mania), mixed mania (mania plus at least three depressive symptoms) or pure mania, using an adapted version of the Mini International Neuropsychiatric Interview (DSM-IV version). They were evaluated using a 33-item inventory of depressive, manic and mixed affective signs and symptoms. Principal component analysis without rotation yielded three components that together explained 43.6% of the variance. The first component (24.3% of the variance) contrasted typical depressive symptoms with typical euphoric, manic symptoms. The second component, labeled 'dysphoria' (13.8%), had strong positive loadings for irritability, distressing sensitivity to light and noise, impulsivity and inner tension. The third component (5.5%) included symptoms of insomnia. Median scores for the first component significantly decreased from the pure depression group to the pure mania group. For the dysphoria component, scores were highest among patients with full mixed states and decreased towards both patients with pure depression and those with pure mania. Principal component analysis revealed that dysphoria represents an important dimension of mixed states.

  8. Local gravity disturbance estimation from multiple-high-single-low satellite-to-satellite tracking

    NASA Technical Reports Server (NTRS)

    Jekeli, Christopher

    1989-01-01

    The idea of satellite-to-satellite tracking in the high-low mode has received renewed attention in light of the uncertain future of NASA's proposed low-low mission, the Geopotential Research Mission (GRM). The principal disadvantage of a high-low system is the increased time interval required to obtain global coverage, since intersatellite visibility is often obscured by Earth. The U.S. Air Force has begun to investigate high-low satellite-to-satellite tracking between the Global Positioning System (GPS) satellites (high component) and NASA's Space Transportation System (STS), the shuttle (low component). Because the GPS satellites form, or will form, a constellation enabling continuous three-dimensional tracking of a low-altitude orbiter, there will be no data gaps due to lack of intervisibility. Furthermore, all three components of the gravitation vector are estimable at altitude, a given grid of which gives a stronger estimate of gravity on Earth's surface than a similar grid of line-of-sight gravitation components. The proposed Air Force mission is STAGE (Shuttle-GPS Tracking for Anomalous Gravitation Estimation), designed for local gravity field determinations since the shuttle will likely not achieve polar orbits. The motivation for STAGE was the feasibility of obtaining reasonable accuracies at minimal cost. Instead of simulating drag-free orbits, STAGE uses direct measurements of the nongravitational forces obtained by an inertial package onboard the shuttle. The accuracies that would be achievable from STAGE vis-a-vis other satellite tracking missions, such as GRM and the European Space Agency's POPSAT-GRM, are analyzed.

  9. A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    DOE PAGES

    Chang, Yu-Yen; Chao, Rikon; Wang, Wei-Hao; ...

    2012-01-01

    Disney et al. (2008) have found a striking correlation among global parameters of H i-selected galaxies and concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, where a single physical parameter can explain 83% of the variations. When color (g - i) is included, the first component still dominates but a second principal component develops. In addition, the near-infrared color (i - J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates more theoretical work.

  10. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.

  11. Page segmentation using script identification vectors: A first look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Cannon, M.; Kelly, P.

    1997-07-01

    Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
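The projection-to-RGB visualization described above can be sketched as follows; the data here are random stand-ins for the thirteen-dimensional script identification vectors, and the min-max scaling to 0-255 is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: one 13-dimensional script-identification
# vector per connected component (closest distance to each script template).
X = rng.random((500, 13))

# First three principal components via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T  # (n_components_in_image, 3) score matrix

# Rescale each score column to 0-255 and treat the triple as an RGB color,
# so every connected component gets a color for visualization.
lo, hi = scores.min(axis=0), scores.max(axis=0)
rgb = np.round(255 * (scores - lo) / (hi - lo)).astype(np.uint8)
```

Components whose vectors are close in the thirteen-dimensional space then receive similar colors, which is what makes script-specific regions visually apparent.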

  12. Dimensionality reduction of collective motion by principal manifolds

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.

  13. Influence of Multidimensionality on Convergence of Sampling in Protein Simulation

    NASA Astrophysics Data System (ADS)

    Metsugi, Shoichi

    2005-06-01

    We study the problem of convergence of sampling in protein simulation originating in the multidimensionality of protein’s conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the time of simulation necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitude of variations are identified and their amplitudes are equilibrated before identifying and equilibrating vectors with larger amplitude of variations. This sequential identification and equilibration mechanism makes protein simulation a useful method although it has an intrinsic multidimensional nature.

  14. Enhanced beam coupling modulation using the polarization properties of photorefractive GaAs

    NASA Technical Reports Server (NTRS)

    Partovi, Afshin; Garmire, Elsa M.; Cheng, Li-Jen

    1987-01-01

    Observation is reported of a rotation in the polarization of the two photorefractive recording beams in GaAs for a configuration with the internally generated space-charge field along the [110] crystallographic orientation. This rotation is a result of simultaneous constructive and destructive beam coupling in each beam for the optical electric field components along the two electrooptically induced principal dielectric axes of the crystal. By turning one of the beams on and off, the intensity of the other beam after the crystal and a polarization analyzer can be modulated by as much as 500 percent. This result is of particular importance for optical information processing applications.

  15. Overview of NASA Glenn Research Center Programs in Aero-Heat Transfer and Future Needs

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    This presentation concentrates on an overview of the NASA Glenn Research Center and the projects that are supporting Turbine Aero-Heat Transfer Research. The principal areas include the Ultra Efficient Engine Technology (UEET) Project, the Advanced Space Transportation Program (ASTP) Revolutionary Turbine Accelerator (RTA) Turbine Based Combined Cycle (TBCC) project, and the Propulsion & Power Base R&T - Smart Efficient Components (SEC), and Revolutionary Aeropropulsion Concepts (RAC) Projects. In addition, highlights are presented of the turbine aero-heat transfer work currently underway at NASA Glenn, focusing on the use of the Glenn-HT Navier-Stokes code as the vehicle for research in turbulence & transition modeling, grid topology generation, unsteady effects, and conjugate heat transfer.

  16. STS-52 PS MacLean, backup PS Tryggvason, and PI pose on JSC's CCT flight deck

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-52 Columbia, Orbiter Vehicle (OV) 102, Canadian Payload Specialist (PS) Steven G. MacLean (left) and backup Payload Specialist Bjarni V. Tryggvason (right) take a break from a camera training session in JSC's Crew Compartment Trainer (CCT). The two Canadian Space Agency (CSA) representatives pose on the CCT's aft flight deck with Canadian scientist David Zimick, the principal investigator (PI) for the materials experiment in low earth orbit (MELEO). MELEO is a component of the CANEX-2 experiment package, manifest to fly on the scheduled October 1992 STS-52 mission. The CCT is part of the shuttle Mockup and Integration Laboratory (MAIL) Bldg 9NE.

  17. Simultaneous Retrieval of Temperature, Water Vapor and Ozone Atmospheric Profiles from IASI: Compression, De-noising, First Guess Retrieval and Inversion Algorithms

    NASA Technical Reports Server (NTRS)

    Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high-spectral-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using the first-guess information is developed to retrieve temperature, water vapor and ozone atmospheric profiles simultaneously. The performance of the resulting fast and accurate inverse model is evaluated with a large diversified data set of radiosonde atmospheres, including rare events.
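A minimal sketch of PCA-based compression and de-noising of spectra: projecting each observation onto a few leading principal components and reconstructing it discards mostly noise. The smooth synthetic curves, the noise level, and the number of retained components below are all assumptions, not the IASI processing itself:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for hyperspectral observations: smooth curves
# (the "atmospheric signal") plus per-channel noise.
t = np.linspace(0.0, 1.0, 200)
clean = np.array([np.sin(2 * np.pi * (1.0 + f) * t) for f in rng.random(300)])
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Compression and de-noising: keep only the leading principal components
# of the centered data and reconstruct each observation from them.
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 10  # retained components (assumed, for illustration)
denoised = mean + (U[:, :k] * s[:k]) @ Vt[:k]

# The truncated reconstruction is closer to the noise-free signal.
err_noisy = float(np.mean((noisy - clean) ** 2))
err_denoised = float(np.mean((denoised - clean) ** 2))
```

Storing only the k score values per spectrum (instead of all 200 channels) is what makes the subsequent pattern recognition and retrieval steps fast.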

  18. HT-FRTC: a fast radiative transfer code using kernel regression

    NASA Astrophysics Data System (ADS)

    Thelen, Jean-Claude; Havemann, Stephan; Lewis, Warren

    2016-09-01

    The HT-FRTC is a principal component based fast radiative transfer code that can be used across the electromagnetic spectrum from the microwave through to the ultraviolet to calculate transmittance, radiance and flux spectra. The principal components cover the spectrum at a very high spectral resolution, which allows very fast line-by-line, hyperspectral and broadband simulations for satellite-based, airborne and ground-based sensors. The principal components are derived during a code training phase from line-by-line simulations for a diverse set of atmosphere and surface conditions. The derived principal components are sensor independent, i.e. no extra training is required to include additional sensors. During the training phase we also derive the predictors which are required by the fast radiative transfer code to determine the principal component scores from the monochromatic radiances (or fluxes, transmittances). These predictors are calculated for each training profile at a small number of frequencies, which are selected by a k-means cluster algorithm during the training phase. Until recently the predictors were calculated using a linear regression. However, during a recent rewrite of the code the linear regression was replaced by a Gaussian Process (GP) regression which resulted in a significant increase in accuracy when compared to the linear regression. The HT-FRTC has been trained with a large variety of gases, surface properties and scatterers. Rayleigh scattering as well as scattering by frozen/liquid clouds, hydrometeors and aerosols have all been included. The scattering phase function can be fully accounted for by an integrated line-by-line version of the Edwards-Slingo spherical harmonics radiation code or approximately by a modification to the extinction (Chou scaling).
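The predictor step can be illustrated with the earlier linear-regression variant mentioned above: regress the principal component scores on radiances at a small set of frequencies. Everything here is hypothetical — synthetic spectra, a random channel subset standing in for the k-means-selected frequencies, and an assumed component count:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical training set: 500 monochromatic "spectra" of 300 frequencies
# lying near a 40-dimensional subspace (a stand-in for line-by-line output).
spectra = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))

# Principal components of the training spectra and the training scores.
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
n_pc = 10
scores = (spectra - mean) @ Vt[:n_pc].T

# Predictors: radiances at a small subset of frequencies (chosen at random
# here as a stand-in for the frequencies a k-means clustering would select).
idx = rng.choice(300, size=20, replace=False)
A = np.c_[np.ones(len(spectra)), spectra[:, idx]]  # add an intercept column

# Linear regression from the selected channels to the PC scores.
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
predicted = A @ coef
```

Replacing this least-squares fit with a Gaussian Process regression, as the abstract describes, changes only the score-prediction step; the PCA basis itself stays the same.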

  19. Classification and identification of Rhodobryum roseum Limpr. and its adulterants based on fourier-transform infrared spectroscopy (FTIR) and chemometrics.

    PubMed

    Cao, Zhen; Wang, Zhenjie; Shang, Zhonglin; Zhao, Jiancheng

    2017-01-01

    Fourier-transform infrared spectroscopy (FTIR) with the attenuated total reflectance technique was used to identify Rhodobryum roseum from its four adulterants. The FTIR spectra of six samples in the range from 4000 cm-1 to 600 cm-1 were obtained. The second-derivative transformation test was used to identify the small and nearby absorption peaks. A cluster analysis was performed to classify the spectra in a dendrogram based on the spectral similarity. Principal component analysis (PCA) was used to classify the species of six moss samples. A cluster analysis with PCA was used to identify different genera. However, some species of the same genus exhibited highly similar chemical components and FTIR spectra. Fourier self-deconvolution and discrete wavelet transform (DWT) were used to enhance the differences among the species with similar chemical components and FTIR spectra. Three scales were selected as the feature-extracting space in the DWT domain. The results show that FTIR spectroscopy with chemometrics is suitable for identifying Rhodobryum roseum and its adulterants.

  20. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing a rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
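The multicollinearity-removal property rests on a basic fact: principal component scores are mutually uncorrelated, so feeding them (rather than the raw, correlated variables) into a neural network avoids redundant inputs. A short check, using synthetic correlated predictors as stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical correlated predictors: 6 observed variables driven by only
# 3 latent factors plus a little noise, so the raw columns are collinear.
latent = rng.standard_normal((200, 3))
X = latent @ rng.standard_normal((3, 6)) + 0.1 * rng.standard_normal((200, 6))

# Principal component scores of the centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T

# The score columns are orthogonal, so their pairwise correlations vanish
# (up to floating-point error): multicollinearity is gone.
corr = np.corrcoef(scores, rowvar=False)
max_off_diag = float(np.max(np.abs(corr - np.eye(corr.shape[0]))))
```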

  1. Principal component analysis of indocyanine green fluorescence dynamics for diagnosis of vascular diseases

    NASA Astrophysics Data System (ADS)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Choi, Chulhee

    2015-03-01

    Indocyanine green (ICG), a near-infrared fluorophore, has been used in visualization of vascular structure and non-invasive diagnosis of vascular disease. Although many imaging techniques have been developed, there are still limitations in the diagnosis of vascular diseases. We have recently developed a minimally invasive diagnostic system based on ICG fluorescence imaging for sensitive detection of vascular insufficiency. In this study, we used principal component analysis (PCA) to examine ICG spatiotemporal profiles and to obtain pathophysiological information from ICG dynamics. Here we demonstrate that principal components of ICG dynamics in both feet showed significant differences between normal controls and diabetic patients with vascular complications. We extracted the PCA time courses of the first three components and found a distinct pattern in diabetic patients. We propose that PCA of ICG dynamics provides better classification performance than fluorescence intensity analysis. We anticipate that specific features of spatiotemporal ICG dynamics can be useful in the diagnosis of various vascular diseases.

  2. Leadership Coaching: A Multiple-Case Study of Urban Public Charter School Principals' Experiences

    ERIC Educational Resources Information Center

    Lackritz, Anne D.

    2017-01-01

    This multi-case study seeks to understand the experiences of New York City and Washington, DC public charter school principals who have experienced leadership coaching, a component of leadership development, beyond their novice years. The research questions framing this study address how experienced public charter school principals describe the…

  3. The View from the Principal's Office: An Observation Protocol Boosts Literacy Leadership

    ERIC Educational Resources Information Center

    Novak, Sandi; Houck, Bonnie

    2016-01-01

    The Minnesota Elementary School Principals' Association offered Minnesota principals professional learning that placed a high priority on literacy instruction and developing a collegial culture. A key component is the literacy classroom visit, an observation protocol used to gather data to determine the status of literacy teaching and student…

  4. Administrative Obstacles to Technology Use in West Virginia Public Schools: A Survey of West Virginia Principals

    ERIC Educational Resources Information Center

    Agnew, David W.

    2011-01-01

    Public school principals must meet many challenges and make decisions concerning financial obligations while providing the best learning environment for students. A major challenge to principals is implementing technological components successfully while providing teachers the 21st century instructional skills needed to enhance students'…

  5. Differential principal component analysis of ChIP-seq.

    PubMed

    Ji, Hongkai; Li, Xia; Wang, Qian-fei; Ning, Yang

    2013-04-23

    We propose differential principal component analysis (dPCA) for analyzing multiple ChIP-sequencing datasets to identify differential protein-DNA interactions between two biological conditions. dPCA integrates unsupervised pattern discovery, dimension reduction, and statistical inference into a single framework. It uses a small number of principal components to summarize concisely the major multiprotein synergistic differential patterns between the two conditions. For each pattern, it detects and prioritizes differential genomic loci by comparing the between-condition differences with the within-condition variation among replicate samples. dPCA provides a unique tool for efficiently analyzing large amounts of ChIP-sequencing data to study dynamic changes of gene regulation across different biological conditions. We demonstrate this approach through analyses of differential chromatin patterns at transcription factor binding sites and promoters as well as allele-specific protein-DNA interactions.

  6. Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.

  7. Measurement of Scenic Spots Sustainable Capacity Based on PCA-Entropy TOPSIS: A Case Study from 30 Provinces, China

    PubMed Central

    Liang, Xuedong; Liu, Canmian; Li, Zhi

    2017-01-01

    In connection with the sustainable development of scenic spots, this paper, with consideration of resource conditions, economic benefits, auxiliary industry scale and ecological environment, establishes a comprehensive measurement model of the sustainable capacity of scenic spots; optimizes the index system by principal components analysis to extract principal components; assigns the weight of principal components by the entropy method; analyzes the sustainable capacity of scenic spots in each province of China comprehensively in combination with the TOPSIS method and finally puts forward suggestions to aid decision-making. According to the study, this method provides an effective reference for the study of the sustainable development of scenic spots and is very significant for considering the sustainable development of scenic spots and auxiliary industries to establish specific and scientific countermeasures for improvement. PMID:29271947
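The entropy-weighting and TOPSIS steps can be sketched compactly under their usual formulations; the small decision matrix below is hypothetical, and all columns are treated as benefit criteria for simplicity:

```python
import numpy as np

# Hypothetical decision matrix: rows = provinces, columns = indicators
# (e.g., extracted principal component scores, shifted to be positive).
X = np.array([
    [0.6, 0.8, 0.3],
    [0.9, 0.4, 0.7],
    [0.2, 0.9, 0.5],
    [0.7, 0.6, 0.9],
])

# Entropy weights: columns with more dispersion (lower entropy) get more weight.
P = X / X.sum(axis=0)
entropy = -np.sum(P * np.log(P), axis=0) / np.log(len(X))
weights = (1 - entropy) / np.sum(1 - entropy)

# TOPSIS: weighted vector-normalized matrix, then distances to the ideal
# (column-wise best) and anti-ideal (column-wise worst) alternatives.
V = weights * X / np.linalg.norm(X, axis=0)
best, worst = V.max(axis=0), V.min(axis=0)
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)

# Relative closeness to the ideal solution; higher means more sustainable.
closeness = d_worst / (d_best + d_worst)
ranking = np.argsort(-closeness)  # provinces ordered best to worst
```

Running PCA first, as the paper does, simply replaces the raw indicator columns with a smaller set of component-score columns before the weighting step.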

  8. The variance needed to accurately describe jump height from vertical ground reaction force data.

    PubMed

    Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran

    2014-12-01

    In functional principal component analysis (fPCA) a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe jump height accurately utilizing vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
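Choosing the number of retained components from an explained-variance threshold can be sketched as follows; synthetic smooth curves stand in for the vGRF data, and the 99% figure is the threshold range the study recommends:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical stand-in for vGRF curves: 100 trials x 150 time samples,
# built from a few smooth basis shapes plus a little measurement noise.
t = np.linspace(0.0, 1.0, 150)
basis = np.stack([np.sin(np.pi * (k + 1) * t) for k in range(5)])
X = rng.standard_normal((100, 5)) @ basis + 0.05 * rng.standard_normal((100, 150))

# Explained-variance ratios come from the squared singular values of the
# centered data matrix.
_, s, _ = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
ratio = s**2 / np.sum(s**2)
cumulative = np.cumsum(ratio)

# Retain the smallest number of components whose cumulative explained
# variance reaches the chosen threshold.
threshold = 0.99
n_retained = int(np.searchsorted(cumulative, threshold) + 1)
```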

  9. Measurement of Scenic Spots Sustainable Capacity Based on PCA-Entropy TOPSIS: A Case Study from 30 Provinces, China.

    PubMed

    Liang, Xuedong; Liu, Canmian; Li, Zhi

    2017-12-22

    In connection with the sustainable development of scenic spots, this paper, with consideration of resource conditions, economic benefits, auxiliary industry scale and ecological environment, establishes a comprehensive measurement model of the sustainable capacity of scenic spots; optimizes the index system by principal components analysis to extract principal components; assigns the weight of principal components by the entropy method; analyzes the sustainable capacity of scenic spots in each province of China comprehensively in combination with the TOPSIS method and finally puts forward suggestions to aid decision-making. According to the study, this method provides an effective reference for the study of the sustainable development of scenic spots and is very significant for considering the sustainable development of scenic spots and auxiliary industries to establish specific and scientific countermeasures for improvement.

  10. Maximally reliable spatial filtering of steady state visual evoked potentials.

    PubMed

    Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M

    2015-04-01

    Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses--reproducibility across trials--to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Identification and visualization of dominant patterns and anomalies in remotely sensed vegetation phenology using a parallel tool for principal components analysis

    Treesearch

    Richard Tran Mills; Jitendra Kumar; Forrest M. Hoffman; William W. Hargrove; Joseph P. Spruce; Steven P. Norman

    2013-01-01

    We investigated the use of principal components analysis (PCA) to visualize dominant patterns and identify anomalies in a multi-year land surface phenology data set (231 m × 231 m normalized difference vegetation index (NDVI) values derived from the Moderate Resolution Imaging Spectroradiometer (MODIS)) used for detecting threats to forest health in the conterminous...

  12. WALLY 1 ...A large, principal components regression program with varimax rotation of the factor weight matrix

    Treesearch

    James R. Wallis

    1965-01-01

    Written in Fortran IV and MAP, this computer program can handle up to 120 variables, and retain 40 principal components. It can perform simultaneous regression of up to 40 criterion variables upon the varimax rotated factor weight matrix. The columns and rows of all output matrices are labeled by six-character alphanumeric names. Data input can be from punch cards or...

  13. The rate of change in declining steroid hormones: a new parameter of healthy aging in men?

    PubMed

    Walther, Andreas; Philipp, Michel; Lozza, Niclà; Ehlert, Ulrike

    2016-09-20

    Research on healthy aging in men has increasingly focused on age-related hormonal changes. Testosterone (T) decline is primarily investigated, while age-related changes in other sex steroids (dehydroepiandrosterone [DHEA], estradiol [E2], progesterone [P]) are mostly neglected. An integrated hormone parameter reflecting aging processes in men has yet to be identified. 271 self-reporting healthy men between 40 and 75 provided both psychometric data and saliva samples for hormone analysis. Correlation analysis between age and sex steroids revealed negative associations for the four sex steroids (T, DHEA, E2, and P). Principal component analysis including ten salivary analytes identified a principal component mainly unifying the variance of the four sex steroid hormones. Subsequent principal component analysis including the four sex steroids extracted the principal component of declining steroid hormones (DSH). Moderation analysis of the association between age and DSH revealed significant moderation effects for psychosocial factors such as depression, chronic stress and perceived general health. In conclusion, these results provide further evidence that sex steroids decline in aging men and that the integrated hormone parameter DSH and its rate of change can be used as biomarkers for healthy aging in men. Furthermore, the negative association of age and DSH is moderated by psychosocial factors.

  14. The rate of change in declining steroid hormones: a new parameter of healthy aging in men?

    PubMed Central

    Walther, Andreas; Philipp, Michel; Lozza, Niclà; Ehlert, Ulrike

    2016-01-01

    Research on healthy aging in men has increasingly focused on age-related hormonal changes. Testosterone (T) decline is primarily investigated, while age-related changes in other sex steroids (dehydroepiandrosterone [DHEA], estradiol [E2], progesterone [P]) are mostly neglected. An integrated hormone parameter reflecting aging processes in men has yet to be identified. 271 self-reporting healthy men between 40 and 75 provided both psychometric data and saliva samples for hormone analysis. Correlation analysis between age and sex steroids revealed negative associations for the four sex steroids (T, DHEA, E2, and P). Principal component analysis including ten salivary analytes identified a principal component mainly unifying the variance of the four sex steroid hormones. Subsequent principal component analysis including the four sex steroids extracted the principal component of declining steroid hormones (DSH). Moderation analysis of the association between age and DSH revealed significant moderation effects for psychosocial factors such as depression, chronic stress and perceived general health. In conclusion, these results provide further evidence that sex steroids decline in aging men and that the integrated hormone parameter DSH and its rate of change can be used as biomarkers for healthy aging in men. Furthermore, the negative association of age and DSH is moderated by psychosocial factors. PMID:27589836

  15. Statistical classification of hydrogeologic regions in the fractured rock area of Maryland and parts of the District of Columbia, Virginia, West Virginia, Pennsylvania, and Delaware

    USGS Publications Warehouse

    Fleming, Brandon J.; LaMotte, Andrew E.; Sekellick, Andrew J.

    2013-01-01

    Hydrogeologic regions in the fractured rock area of Maryland were classified using geographic information system tools with principal components and cluster analyses. A study area consisting of the 8-digit Hydrologic Unit Code (HUC) watersheds with rivers that flow through the fractured rock area of Maryland and bounded by the Fall Line was further subdivided into 21,431 catchments from the National Hydrography Dataset Plus. The catchments were then used as a common hydrologic unit to compile relevant climatic, topographic, and geologic variables. A principal components analysis was performed on 10 input variables, and 4 principal components that accounted for 83 percent of the variability in the original data were identified. A subsequent cluster analysis grouped the catchments based on four principal component scores into six hydrogeologic regions. Two crystalline rock hydrogeologic regions, including large parts of the Washington, D.C. and Baltimore metropolitan regions that represent over 50 percent of the fractured rock area of Maryland, are distinguished by differences in recharge, Precipitation minus Potential Evapotranspiration, sand content in soils, and groundwater contributions to streams. This classification system will provide a georeferenced digital hydrogeologic framework for future investigations of groundwater availability in the fractured rock area of Maryland.
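
    The two-stage workflow above — PCA to reduce the catchment variables, then a cluster analysis on the PC scores — can be sketched in NumPy. The data, the number of catchments, and the use of k-means (the abstract does not name the clustering algorithm) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for catchments x 10 climatic/topographic/geologic
# variables (the study used 21,431 catchments; values here are random).
n_catch, n_var = 2000, 10
X = rng.normal(size=(n_catch, n_var)) @ rng.normal(size=(n_var, n_var))

# Step 1: PCA -- keep the leading components (the study kept 4, which
# explained 83 percent of the variance in its data).
Z = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:4].T

# Step 2: k-means (Lloyd's algorithm) on the PC scores to group the
# catchments into 6 hydrogeologic regions.
def kmeans(pts, n_clusters, iters=50, seed=2):
    r = np.random.default_rng(seed)
    centers = pts[r.choice(len(pts), n_clusters, replace=False)]
    for _ in range(iters):
        d = ((pts[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([pts[labels == c].mean(0) if (labels == c).any()
                            else centers[c] for c in range(n_clusters)])
    return labels

regions = kmeans(scores, 6)
print("variance explained by 4 PCs:", (S[:4]**2).sum() / (S**2).sum())
print("region sizes:", np.bincount(regions, minlength=6))
```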

  16. Principal Component-Based Radiative Transfer Model (PCRTM) for Hyperspectral Sensors. Part I; Theoretical Concept

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen

    2005-01-01

    Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Tropospheric Emission Spectrometer (TES), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, very fast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the Principal Component (PC) scores of these quantities. This prediction ability leads to significant savings in computational time. The parameterization of the PCRTM model is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for very fast one-dimensional physical retrievals and for Numerical Weather Prediction (NWP) large-volume radiance data assimilation applications. The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
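
    The core PCRTM idea — build a PC basis for high-resolution spectra once, then work with a handful of PC scores instead of thousands of channel radiances — can be illustrated with a compress/expand round trip. The spectra below are synthetic sinusoid mixtures, not radiative transfer output; channel count and PC count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a training set of high-resolution radiance
# spectra (real PCRTM trains on line-by-line radiative transfer output).
n_train, n_chan = 500, 2378                    # 2378 ~ AIRS channel count
t = np.linspace(0, 1, n_chan)
basis = np.stack([np.sin(2 * np.pi * f * t) for f in (1, 2, 3, 5, 8)])
spectra = (rng.normal(size=(n_train, 5)) @ basis
           + rng.normal(0, 0.01, (n_train, n_chan)))

# Build the PC basis once from the training spectra.
mean = spectra.mean(0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
n_pc = 10
pcs = Vt[:n_pc]

# A fast model then only has to predict n_pc scores per atmospheric
# profile; the full spectrum is recovered by the linear expansion.
new_spectrum = rng.normal(size=5) @ basis
pc_scores = (new_spectrum - mean) @ pcs.T      # compress: 2378 -> 10
reconstructed = mean + pc_scores @ pcs         # expand:   10 -> 2378

err = np.abs(reconstructed - new_spectrum).max()
print(f"{n_chan} channels carried by {n_pc} PC scores, max error {err:.2e}")
```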

  17. Relationship between regional population and healthcare delivery in Japan.

    PubMed

    Niga, Takeo; Mori, Maiko; Kawahara, Kazuo

    2016-01-01

    In order to address regional inequality in healthcare delivery in Japan, healthcare districts were established in 1985. However, regional healthcare delivery has now become a national issue because of population migration and the aging population. In this study, the state of healthcare delivery at the district level is examined by analyzing population, the number of physicians, and the number of hospital beds. The results indicate a continuing disparity in healthcare delivery among districts. We find that the rate of change in population has a strong positive correlation with that in the number of physicians and a weak positive correlation with that in the number of hospital beds. In addition, principal component analysis is performed on three variables: the rate of change in population, the number of physicians per capita, and the number of hospital beds per capita. This analysis suggests that the two principal components contribute 90.1% of the information. The first principal component is thought to show the effect of the regulations on hospital beds. The second principal component is thought to show the capacity to recruit physicians. This study indicates that an adjustment to the regulations on hospital beds as well as physician allocation by public funds may be key to resolving the impending issue of regionally disproportionate healthcare delivery.

  18. Fluorescence fingerprint as an instrumental assessment of the sensory quality of tomato juices.

    PubMed

    Trivittayasil, Vipavee; Tsuta, Mizuki; Imamura, Yoshinori; Sato, Tsuneo; Otagiri, Yuji; Obata, Akio; Otomo, Hiroe; Kokawa, Mito; Sugiyama, Junichi; Fujita, Kaori; Yoshimura, Masatoshi

    2016-03-15

    Sensory analysis is an important standard for evaluating food products. However, as trained panelists and time are required for the process, the potential of using fluorescence fingerprint as a rapid instrumental method to approximate sensory characteristics was explored in this study. Thirty-five out of 44 descriptive sensory attributes were found to show a significant difference between samples (analysis of variance test). Principal component analysis revealed that principal component 1 could capture 73.84 and 75.28% variance for aroma category and combined flavor and taste category respectively. Fluorescence fingerprints of tomato juices consisted of two visible peaks at excitation/emission wavelengths of 290/350 and 315/425 nm and a long narrow emission peak at 680 nm. The 680 nm peak was only clearly observed in juices obtained from tomatoes cultivated to be eaten raw. The ability to predict overall sensory profiles was investigated by using principal component 1 as a regression target. Fluorescence fingerprint could predict principal component 1 of both aroma and combined flavor and taste with a coefficient of determination above 0.8. The results obtained in this study indicate the potential of using fluorescence fingerprint as an instrumental method for assessing sensory characteristics of tomato juices. © 2015 Society of Chemical Industry.
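
    The key statistical move in this abstract — summarize a panel's many sensory attributes by their first principal component, then use that PC1 as the regression target for instrumental features — can be sketched as follows. The data are synthetic and the feature counts are assumptions; the real study used 44 attributes and full excitation/emission fingerprints:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in: sensory scores (samples x attributes) and
# instrumental measurements (samples x features) sharing a latent
# quality axis.
n, n_attr, n_feat = 40, 12, 6
quality = rng.normal(size=n)
sensory = (np.outer(quality, rng.normal(size=n_attr))
           + rng.normal(0, 0.3, (n, n_attr)))
fluor = (np.outer(quality, rng.normal(size=n_feat))
         + rng.normal(0, 0.3, (n, n_feat)))

# PC1 of the sensory data summarizes the overall profile...
Zs = (sensory - sensory.mean(0)) / sensory.std(0)
U, S, Vt = np.linalg.svd(Zs, full_matrices=False)
pc1 = Zs @ Vt[0]

# ...and becomes the regression target for the instrumental features.
A = np.column_stack([fluor, np.ones(n)])       # design matrix + intercept
coef, *_ = np.linalg.lstsq(A, pc1, rcond=None)
pred = A @ coef
r2 = 1 - ((pc1 - pred) ** 2).sum() / ((pc1 - pc1.mean()) ** 2).sum()
print(f"R^2 of instrument -> sensory PC1: {r2:.2f}")
```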

  19. Application of Hyperspectral Imaging and Chemometric Calibrations for Variety Discrimination of Maize Seeds

    PubMed Central

    Zhang, Xiaolei; Liu, Fei; He, Yong; Li, Xiaoli

    2012-01-01

    Hyperspectral imaging in the visible and near infrared (VIS-NIR) region was used to develop a novel method for discriminating different varieties of commodity maize seeds. Firstly, hyperspectral images of 330 samples of six varieties of maize seeds were acquired using a hyperspectral imaging system in the 380–1,030 nm wavelength range. Secondly, principal component analysis (PCA) and kernel principal component analysis (KPCA) were used to explore the internal structure of the spectral data. Thirdly, three optimal wavelengths (523, 579 and 863 nm) were selected by implementing PCA directly on each image. Then four textural variables including contrast, homogeneity, energy and correlation were extracted from gray level co-occurrence matrix (GLCM) of each monochromatic image based on the optimal wavelengths. Finally, several models for maize seeds identification were established by least squares-support vector machine (LS-SVM) and back propagation neural network (BPNN) using four different combinations of principal components (PCs), kernel principal components (KPCs) and textural features as input variables, respectively. The recognition accuracy achieved in the PCA-GLCM-LS-SVM model (98.89%) was the most satisfactory one. We conclude that hyperspectral imaging combined with texture analysis can be implemented for fast classification of different varieties of maize seeds. PMID:23235456
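
    The four GLCM texture variables named above (contrast, homogeneity, energy, correlation) can be computed from scratch in a few lines. This is a generic gray-level co-occurrence sketch on toy images, not the paper's pipeline; the level count and pixel offset are assumptions:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix and the four Haralick-style
    features used in the paper: contrast, homogeneity, energy, correlation."""
    q = (img * levels / (img.max() + 1e-12)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # pixel
    b = q[dy:, dx:]                             # its (dx, dy) neighbour
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    p = glcm / glcm.sum()                       # joint level probabilities
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "homogeneity": (p / (1 + (i - j) ** 2)).sum(),
        "energy": (p ** 2).sum(),
        "correlation": (((i - mu_i) * (j - mu_j) * p).sum()
                        / (sd_i * sd_j + 1e-12)),
    }

# A smooth gradient should show low contrast and high homogeneity
# compared with random noise at the same gray levels.
rng = np.random.default_rng(5)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = rng.random((64, 64))
f_s, f_n = glcm_features(smooth), glcm_features(noisy)
print("smooth:", {k: round(v, 3) for k, v in f_s.items()})
print("noisy: ", {k: round(v, 3) for k, v in f_n.items()})
```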

  20. Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.

    PubMed

    de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard

    2018-02-01

    Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., elastic net and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as N-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or structured approaches (such as GraphNet PCA) are significant, since SPCA-TV reveals the variability within a data set in the form of intelligible brain patterns that are easier to interpret and more stable across different samples.
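
    To make the plain-SPCA baseline concrete: a rank-one sparse loading can be obtained by power iteration with a soft-threshold step (the paper's contribution is adding a TV penalty on top of this so the nonzeros are spatially contiguous, which this sketch does not implement). All data and the penalty strength below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(14)

def soft(x, lam):
    """Soft-thresholding, the proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_loading(X, lam, iters=200):
    """Rank-1 sparse PCA loading via thresholded power iteration.
    lam = 0 reduces to ordinary PCA (the dense first loading)."""
    Xc = X - X.mean(0)
    C = Xc.T @ Xc
    v = C[:, np.abs(C).sum(0).argmax()].copy()   # strong column as init
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = soft(C @ v, lam)
        n = np.linalg.norm(v)
        if n == 0:
            break
        v /= n
    return v

# Synthetic data whose true variability lives on 10 of 100 variables.
n, p = 80, 100
support = np.zeros(p)
support[40:50] = 1.0
X = np.outer(rng.normal(size=n), support) + 0.3 * rng.normal(size=(n, p))

dense = sparse_loading(X, lam=0.0)
sparse = sparse_loading(X, lam=20.0)
print("nonzero loadings, dense PCA: ", (np.abs(dense) > 1e-6).sum())
print("nonzero loadings, sparse PCA:", (np.abs(sparse) > 1e-6).sum())
```

    The dense loading spreads small weights over every variable, while the penalized one concentrates on the informative block — the "scattered vs interpretable" contrast the abstract describes.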

  1. Derivation of simple rules for complex flow vector fields on the lower part of the human face for robot face design.

    PubMed

    Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru

    2017-11-27

    It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e. by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only their principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
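
    The error measure used above — treat each facial point's 16 flow vectors with PCA and read the second-component scores as the replication error of moving only along the principal direction — can be sketched on synthetic vectors. Point counts, spread and noise level below are assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in: for each control point on the face, 16 displacement
# (flow) vectors, one per deformation pattern, mostly aligned with a
# point-specific principal direction (units loosely "mm").
n_points, n_patterns = 20, 16
residuals = []
for _ in range(n_points):
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    lengths = rng.normal(0, 3, n_patterns)
    flows = np.outer(lengths, direction) + rng.normal(0, 0.3, (n_patterns, 2))

    # PCA of this point's vectors: PC1 is its principal direction, and the
    # second-component scores are exactly the error made by moving the
    # point only along PC1.
    centered = flows - flows.mean(0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    residuals.extend(np.abs(centered @ Vt[1]))

residuals = np.array(residuals)
print(f"{(residuals < 1).mean():.0%} of vectors within 1 mm of the "
      f"single-direction approximation")
```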

  2. On the application of the Principal Component Analysis for an efficient climate downscaling of surface wind fields

    NASA Astrophysics Data System (ADS)

    Chavez, Roberto; Lozano, Sergio; Correia, Pedro; Sanz-Rodrigo, Javier; Probst, Oliver

    2013-04-01

    With the purpose of efficiently and reliably generating long-term wind resource maps for the wind energy industry, the application and verification of a statistical methodology for the climate downscaling of wind fields at surface level is presented in this work. This procedure is based on the combination of the Monte Carlo and the Principal Component Analysis (PCA) statistical methods. Firstly, the Monte Carlo method is used to create a large number of day-resolved annual time series, so-called climate representative years, by stratified sampling of a 33-year-long time series corresponding to the available period of the NCAR/NCEP global reanalysis data set (R-2). Secondly, the representative years are evaluated such that the best set is chosen according to its capability to recreate the Sea Level Pressure (SLP) temporal and spatial fields from the R-2 data set. The measure of this correspondence is the Euclidean distance between the Empirical Orthogonal Function (EOF) spaces generated by the PCA decomposition of the SLP fields from the long-term and the representative-year data sets. The methodology was verified by comparing the selected 365-day period against a 9-year period of wind fields generated by dynamically downscaling the Global Forecast System data with the mesoscale model SKIRON for the Iberian Peninsula. These results showed that, compared to the traditional method of dynamically downscaling any random 365-day period, the error in the average wind velocity for the PCA-based representative year was reduced by almost 30%. Moreover, the Mean Absolute Errors (MAE) in the monthly and daily wind profiles were also reduced by almost 25% across all SKIRON grid points. The methodology yielded maximum errors in the mean wind speed of 0.8 m/s and maximum MAE in the monthly curves of 0.7 m/s. Beyond these bulk numbers, this work shows the spatial distribution of the errors across the Iberian domain and additional wind statistics such as the velocity and directional frequency. Additional repetitions were performed to prove the reliability and robustness of this kind of statistical-dynamical downscaling method.
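
    The selection criterion above — rank candidate 365-day samples by the Euclidean distance between their EOFs and the long-term EOFs — can be sketched on toy pressure fields. Grid size, mode strengths and the number of candidates are illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-in for the reanalysis SLP record: daily anomaly maps on
# a small grid, built from three spatial modes of decreasing strength.
n_days, n_grid = 33 * 365, 50
modes = rng.normal(size=(3, n_grid))
amps = rng.normal(size=(n_days, 3)) * np.array([3.0, 2.0, 1.0])
slp = amps @ modes + rng.normal(0, 0.1, (n_days, n_grid))

def eofs(fields, k=3):
    """Leading EOFs (spatial patterns) of an anomaly field via PCA/SVD."""
    anom = fields - fields.mean(0)
    _, _, Vt = np.linalg.svd(anom, full_matrices=False)
    return Vt[:k]

ref = eofs(slp)                                 # long-term EOF space

def eof_distance(sample):
    """Distance between a sample's EOFs and the long-term EOFs, after
    fixing the arbitrary sign of each EOF."""
    e = eofs(sample)
    e = e * np.sign((e * ref).sum(1, keepdims=True))
    return np.linalg.norm(e - ref)

# Rank Monte Carlo candidate "representative years" (365 sampled days)
# by how well their EOF space matches the long-term one.
candidates = [slp[rng.choice(n_days, 365, replace=False)] for _ in range(20)]
dists = np.array([eof_distance(c) for c in candidates])
best = int(dists.argmin())
print(f"best representative year: candidate {best} "
      f"(EOF distance {dists[best]:.3f} vs worst {dists.max():.3f})")
```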

  3. Real-time myoelectric control of a multi-fingered hand prosthesis using principal components analysis.

    PubMed

    Matrone, Giulia C; Cipriani, Christian; Carrozza, Maria Chiara; Magenes, Giovanni

    2012-06-15

    In spite of the advances made in the design of dexterous anthropomorphic hand prostheses, these sophisticated devices still lack adequate control interfaces which could allow amputees to operate them in an intuitive and close-to-natural way. In this study, an anthropomorphic five-fingered robotic hand, actuated by six motors, was used as a prosthetic hand emulator to assess the feasibility of a control approach based on Principal Components Analysis (PCA), specifically conceived to address this problem. Since it was demonstrated elsewhere that the first two principal components (PCs) can describe the whole hand configuration space sufficiently well, the controller employed here inverted the PCA algorithm, allowing a multi-DoF hand to be driven by combining a two-differential-channel EMG input with these two PCs. Hence, the novelty of this approach lay in applying PCA to the challenging problem of best mapping the EMG inputs into the degrees of freedom (DoFs) of the prosthesis. A clinically viable two-DoF myoelectric controller, exploiting two differential channels, was developed, and twelve able-bodied participants, divided in two groups, volunteered to control the hand in simple grasp trials using forearm myoelectric signals. Task completion rates and times were measured. The first objective (assessed through one group of subjects) was to understand the effectiveness of the approach, i.e., whether it is possible to drive the hand in real time, with reasonable performance, in different grasps, also taking advantage of the direct visual feedback of the moving hand. The second objective (assessed through a different group) was to investigate the intuitiveness, and therefore to assess statistical differences in the performance throughout three consecutive days. Subjects performed several grasp, transport and release trials with differently shaped objects, by operating the hand with the myoelectric PCA-based controller. Experimental trials showed that the simultaneous use of the two differential channels paradigm was successful. This work demonstrates that the proposed two-DoF myoelectric controller based on PCA allows a prosthetic hand emulator to be driven in real time into different prehensile patterns with excellent performance. These results open up promising possibilities for the development of intuitive, effective myoelectric hand controllers.
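
    The "inverted PCA" mapping — treat two control inputs as scores on the first two postural synergies and expand them back to all joints — can be sketched as below. The posture data, DoF count and synergy structure are synthetic assumptions; only the two-scores-to-many-joints idea comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for recorded grasp postures: 200 samples of a 6-DoF
# hand whose joints co-vary along two latent synergies (mirroring the
# observation that two PCs describe the hand configuration space well).
synergies = rng.normal(size=(2, 6))
postures = (rng.normal(size=(200, 2)) @ synergies
            + rng.normal(0, 0.05, (200, 6)))

mean = postures.mean(0)
U, S, Vt = np.linalg.svd(postures - mean, full_matrices=False)
pcs = Vt[:2]                                    # the two postural synergies

def inputs_to_posture(ch1, ch2):
    """Invert the PCA: treat the two (differential EMG) channel values as
    PC scores and expand them into a full 6-DoF hand posture."""
    return mean + np.array([ch1, ch2]) @ pcs

# Sweeping one input channel moves all six joints in a coordinated way.
open_hand = inputs_to_posture(-2.0, 0.0)
close_hand = inputs_to_posture(+2.0, 0.0)
print("joint change across channel-1 sweep:",
      np.round(close_hand - open_hand, 2))
```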

  4. The Application of Principal Component Analysis Using Fixed Eigenvectors to the Infrared Thermographic Inspection of the Space Shuttle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.

    2006-01-01

    The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method, as compared to ultrasonic techniques, which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantage of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. In a typical implementation of PCA, the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the good material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors is used to process the thermal data from the RCC materials. These eigenvectors can be generated either from an analytic model of the thermal response of the material under examination, or from a large cross section of experimental data. This paper will provide the details of the analytic model, an overview of the PCA process, and a quantitative signal-to-noise comparison of the results of performing both embodiments of PCA on thermographic data from various RCC specimens. Details of a system that has been developed to allow in situ inspection of a majority of shuttle RCC components will be presented, along with the acceptance test results for this system. Additionally, the results of applying this technology to the Space Shuttle Discovery after its return from flight will be presented.
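
    The fixed-eigenvector variant can be illustrated with a toy thermal model: build the eigenvector set offline from model-generated defect-free responses, then only project inspection data onto it. The cooling model, time grid and defect fraction below are illustrative assumptions, not the paper's physics:

```python
import numpy as np

rng = np.random.default_rng(8)

def cooling(t, tau):
    """Toy one-sided thermal response: flash heating followed by decay."""
    return np.exp(-t / tau)

t = np.linspace(0.05, 5, 100)

# Offline: build a FIXED eigenvector set from an analytic model of
# defect-free material, instead of from each inspection data set.
train = np.stack([cooling(t, tau) for tau in np.linspace(0.5, 2.0, 40)])
mean = train.mean(0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
fixed_eigvecs = Vt[:3]

# Online: project every pixel's time history onto the fixed eigenvectors;
# this is cheap, and a large defect cannot skew the basis because the
# basis never sees the inspection data.
n_pix = 500
taus = np.full(n_pix, 1.0)
taus[:50] = 3.0                                 # 10% "defect" pixels
pixels = np.stack([cooling(t, tau) for tau in taus])
pixels += rng.normal(0, 0.01, pixels.shape)     # measurement noise
pc_image = (pixels - mean) @ fixed_eigvecs.T    # (n_pix, 3) score image

gap = pc_image[:50, 0].mean() - pc_image[50:, 0].mean()
print(f"PC1 score gap between defect and sound pixels: {gap:+.2f}")
```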

  5. Far-infrared photometry of OJ 287 with the Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Kidger, Mark; Zola, Staszek; Valtonen, Mauri; Lähteenmäki, Anne; Järvelä, Emilia; Tornikoski, Merja; Tammi, Joni; Liakos, Alexis; Poyner, Gary

    2018-03-01

    Context. The blazar OJ 287 has shown a ≈12 year quasi-periodicity over more than a century, in addition to the common properties of violent variability in all frequency ranges. It is the strongest known candidate to have a binary singularity in its central engine. Aims. We aim to better understand the different emission components by searching for correlated variability in flux measurements over four decades of frequency. Methods. We combined data at frequencies from the millimetric to the visible to characterise the multifrequency light curve in April and May 2010. This includes the only photometric observations of OJ 287 made with the Herschel Space Observatory: five epochs of data obtained over 33 days at 250, 350, and 500 μm with Herschel-SPIRE. Results. Although we find that the variability at 37 GHz on timescales of a few weeks correlates with the visible to near-IR spectral energy distribution, there is a small degree of reddening in the continuum at lower flux levels that is revealed by the decreasing rate of decline in the light curve at lower frequencies. However, we see no clear evidence that a rapid flare detected during our visible to near-IR monitoring is seen either in the Herschel data or at 37 GHz, suggesting a low-frequency cut-off in the spectrum of such flares. Conclusions. We see only marginal evidence of variability in the observations with Herschel over a month, although this may be principally due to the poor sampling. The spectral energy distribution between 37 GHz and the visible can be characterised by two components of approximately constant spectral index: a visible to far-IR component of spectral index α = -0.95, and a far-IR to millimetric spectral index of α = -0.43. There is no evidence of an excess of emission that would be consistent with the 60 μm dust bump found in many active galactic nuclei.
    Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The photometry data (Table 4) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A74

  6. Telerobotic Tending of Space Based Plant Growth Chamber

    NASA Technical Reports Server (NTRS)

    Backes, P. G.; Long, M. K.; Das, H.

    1994-01-01

    The kinematic design of a telerobotic mechanism for tending a plant growth space science experiment chamber is described. Ground based control of tending mechanisms internal to space science experiments will allow ground based principal investigators to interact directly with their space science experiments.

  7. The School Makes a Difference: Analysis of Teacher Perceptions of Their Principal and School Climate.

    ERIC Educational Resources Information Center

    Watson, Pat; And Others

    Survey responses from over half of Oklahoma City's 2,500 teachers indicated their views of the effectiveness and leadership of the city's 94 school principals. The survey's 82 items were selected from ideas suggested in the principal effectiveness literature and from the leadership component of Oklahoma City's principal evaluation forms. The…

  8. An Analysis of Principals' Ethical Decision Making Using Rest's Four Component Model of Moral Behavior.

    ERIC Educational Resources Information Center

    Klinker, JoAnn Franklin; Hackmann, Donald G.

    High school principals confront ethical dilemmas daily. This report describes a study that examined how MetLife/NASSP secondary principals of the year made ethical decisions conforming to three dispositions from Standard 5 of the ISLLC standards and whether they could identify processes used to reach those decisions through Rest's Four Component…

  9. The Middle Management Paradox of the Urban High School Assistant Principal: Making It Happen

    ERIC Educational Resources Information Center

    Jubilee, Sabriya Kaleen

    2013-01-01

    Scholars of transformational leadership literature assert that school-based management teams are a vital component in transforming schools. Many of these works focus heavily on the roles of principals and teachers, ignoring the contribution of Assistant Principals (APs). More attention is now being given to the unique role that Assistant…

  10. E-Mentoring for New Principals: A Case Study of a Mentoring Program

    ERIC Educational Resources Information Center

    Russo, Erin D.

    2013-01-01

    This descriptive case study includes both new principals and their mentor principals engaged in e-mentoring activities. This study examines the components of a school district's mentoring program in order to make sense of e-mentoring technology. The literature review highlights mentoring practices in education, and also draws upon e-mentoring…

  11. Orthogonal decomposition of left ventricular remodeling in myocardial infarction

    PubMed Central

    Zhang, Xingyu; Medrano-Gracia, Pau; Ambale-Venkatesh, Bharath; Bluemke, David A.; Cowan, Brett R; Finn, J. Paul; Kadish, Alan H.; Lee, Daniel C.; Lima, Joao A. C.; Young, Alistair A.; Suinesiaputra, Avan

    2017-01-01

    Left ventricular size and shape are important for quantifying cardiac remodeling in response to cardiovascular disease. Geometric remodeling indices have been shown to have prognostic value in predicting adverse events in the clinical literature, but these often describe interrelated shape changes. We developed a novel method for deriving orthogonal remodeling components directly from any (moderately independent) set of clinical remodeling indices. Results: Six clinical remodeling indices (end-diastolic volume index, sphericity, relative wall thickness, ejection fraction, apical conicity, and longitudinal shortening) were evaluated using cardiac magnetic resonance images of 300 patients with myocardial infarction, and 1991 asymptomatic subjects, obtained from the Cardiac Atlas Project. Partial least squares (PLS) regression of left ventricular shape models resulted in remodeling components that were optimally associated with each remodeling index. A Gram–Schmidt orthogonalization process, by which remodeling components were successively removed from the shape space in the order of shape variance explained, resulted in a set of orthonormal remodeling components. Remodeling scores could then be calculated that quantify the amount of each remodeling component present in each case. A one-factor PLS regression led to more decoupling between scores from the different remodeling components across the entire cohort, and zero correlation between clinical indices and subsequent scores. Conclusions: The PLS orthogonal remodeling components had similar power to describe differences between myocardial infarction patients and asymptomatic subjects as principal component analysis, but were better associated with well-understood clinical indices of cardiac remodeling. The data and analyses are available from www.cardiacatlas.org. PMID:28327972
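
    The Gram–Schmidt step at the heart of this method — successively remove from each remodeling component the part already explained by the earlier ones, yielding an orthonormal set on which scores can be computed without double counting — is shown below on synthetic component vectors (the dimensions and the correlation structure are illustrative, and the PLS step that produces the components is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(12)

# Synthetic stand-in: five correlated "remodeling component" vectors in a
# 30-dimensional shape space (the study derived one per clinical index).
dim = 30
components = rng.normal(size=(5, dim))
components[1] += 0.8 * components[0]            # make them interrelated

def gram_schmidt(vecs):
    """Orthonormalize vectors in order: subtract the projection onto each
    earlier vector, then normalize."""
    ortho = []
    for v in vecs:
        for u in ortho:
            v = v - (v @ u) * u
        ortho.append(v / np.linalg.norm(v))
    return np.array(ortho)

Q = gram_schmidt(components)

# Scores on the orthonormal components quantify how much of each one a
# given shape contains, with no overlap between components.
shape_vec = rng.normal(size=dim)
scores = Q @ shape_vec
print("max off-diagonal |dot product|:",
      np.abs(Q @ Q.T - np.eye(5)).max())
```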

  12. NASA's Solar Dynamics Observatory Unveils New Images

    NASA Image and Video Library

    2010-04-20

    Madhulika Guhathakurta, far right, SDO Program Scientist at NASA Headquarters in Washington, speaks during a briefing to discuss recent images from NASA's Solar Dynamics Observatory, or SDO, Wednesday, April 21, 2010, at the Newseum in Washington. Pictured to the left of Dr. Guhathakurta are: Tom Woods, principal investigator, Extreme Ultraviolet Variability Experiment instrument, Laboratory for Atmospheric and Space Physics, University of Colorado in Boulder; Philip H. Scherrer, principal investigator, Helioseismic and Magnetic Imager instrument, Stanford University in Palo Alto; Alan Title, principal investigator, Atmospheric Imaging Assembly instrument, Lockheed Martin Solar and Astrophysics Laboratory in Palo Alto; and Dean Pesnell, SDO project scientist, Goddard Space Flight Center in Greenbelt, Md. Photo Credit: (NASA/Carla Cioffi)

  13. NASA's Solar Dynamics Observatory Unveils New Images

    NASA Image and Video Library

    2010-04-20

    Scientists involved in NASA's Solar Dynamics Observatory (SDO) mission attend a press conference to discuss recent images captured by the SDO spacecraft Wednesday, April 21, 2010, at the Newseum in Washington. Pictured from right to left are: Madhulika Guhathakurta, SDO program scientist, NASA Headquarters in Washington; Tom Woods, principal investigator, Extreme Ultraviolet Variability Experiment instrument, Laboratory for Atmospheric and Space Physics, University of Colorado in Boulder; Philip H. Scherrer, principal investigator, Helioseismic and Magnetic Imager instrument, Stanford University in Palo Alto; Alan Title, principal investigator, Atmospheric Imaging Assembly instrument, Lockheed Martin Solar and Astrophysics Laboratory in Palo Alto; and Dean Pesnell, SDO project scientist, Goddard Space Flight Center in Greenbelt, Md. Photo Credit: (NASA/Carla Cioffi)

  14. Assessing prescription drug abuse using functional principal component analysis (FPCA) of wastewater data.

    PubMed

    Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G

    2017-03-01

    Wastewater-based epidemiology is an alternative method for estimating the collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between the average midweek level and a peak during the weekend, suggesting possible recreational use of a drug in the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not of methadone and oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of the use of prescription drugs detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.
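
    Discretized FPCA — ordinary PCA applied to curves sampled on a common grid, so that each eigenvector is itself a curve over the week and each week gets a score on it — can be sketched on synthetic drug-load curves. The weekend-peak shape, noise levels and week count below are illustrative assumptions, not the Oslo data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic stand-in for weekly drug-load curves on an hourly grid:
# a per-week average level plus a weekend peak of varying size,
# mimicking the "recreational use" feature described in the paper.
hours = np.arange(168)
weekend = np.exp(-0.5 * ((hours - 144) / 12.0) ** 2)  # bump late in week
n_weeks = 30
level = rng.normal(10, 1, n_weeks)
peak = rng.normal(0, 2, n_weeks)
curves = (level[:, None] + peak[:, None] * weekend
          + rng.normal(0, 0.2, (n_weeks, 168)))

# Discretized FPCA: PCA of the sampled curves. Rows of Vt are functional
# principal components (curves over the week); each week gets FPC scores.
mean_curve = curves.mean(0)
U, S, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
fpc_scores = (curves - mean_curve) @ Vt[:2].T

# The FPC concentrated on the weekend should track the simulated
# weekend-peak sizes.
corr = max(abs(np.corrcoef(peak, fpc_scores[:, k])[0, 1]) for k in (0, 1))
print(f"best |corr| between weekend-peak size and an FPC score: {corr:.2f}")
```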

  15. Describing patterns of weight changes using principal components analysis: results from the Action for Health in Diabetes (Look AHEAD) research group.

    PubMed

    Espeland, Mark A; Bray, George A; Neiberg, Rebecca; Rejeski, W Jack; Knowler, William C; Lang, Wei; Cheskin, Lawrence J; Williamson, Don; Lewis, C Beth; Wing, Rena

    2009-10-01

To demonstrate how principal components analysis can be used to describe patterns of weight changes in response to an intensive lifestyle intervention. Principal components analysis was applied to monthly percent weight changes measured on 2,485 individuals enrolled in the lifestyle arm of the Action for Health in Diabetes (Look AHEAD) clinical trial. These individuals were 45 to 75 years of age, with type 2 diabetes and body mass indices greater than 25 kg/m^2. Associations between baseline characteristics and weight loss patterns were described using analyses of variance. Three components collectively accounted for 97.0% of total intrasubject variance: a gradually decelerating weight loss (88.8%), early versus late weight loss (6.6%), and a mid-year trough (1.6%). In agreement with previous reports, each of the baseline characteristics we examined had statistically significant relationships with weight loss patterns. For example, men tended to have a steeper trajectory of percent weight loss and to lose weight more quickly than women. Individuals with higher hemoglobin A1c (glycosylated hemoglobin; HbA1c) tended to have a flatter trajectory of percent weight loss and to have mid-year troughs in weight loss compared to those with lower HbA1c. Principal components analysis provided a coherent description of characteristic patterns of weight changes and is a useful vehicle for identifying their correlates and potentially for predicting weight control outcomes.

  16. Migration of scattered teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Rondenay, S.

    1999-06-01

    The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.
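
    The Karhunen-Loeve step described above, identifying the first principal component of the record section with the direct wave and treating the remainder as the scattered field, can be sketched with synthetic traces. The array size, wavelet, and noise levels below are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic record section: 20 stations x 500 time samples. Each trace is
# a common source wavelet (the direct wave, coherent across stations,
# with station-dependent amplitude) plus weak incoherent energy.
n_sta, n_t = 20, 500
t = np.arange(n_t)
wavelet = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)   # shared source pulse
section = np.outer(1.0 + 0.1 * rng.normal(size=n_sta), wavelet)
section += 0.05 * rng.normal(size=(n_sta, n_t))   # scattered waves + noise

# Karhunen-Loeve transform = SVD of the section. The first principal
# component (largest singular value) is identified with the direct wave
# and serves as the source-time-function estimate; the residual
# approximates the scattered wavefield.
U, s, Vt = np.linalg.svd(section, full_matrices=False)
source_estimate = Vt[0]                           # first PC (time series)
direct = s[0] * np.outer(U[:, 0], Vt[0])          # rank-1 direct wave
scattered = section - direct                      # remaining PCs
```

    The sign of an SVD singular vector is arbitrary, so comparisons against the true wavelet should use its absolute correlation.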

  17. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Huang, Gang

    2018-05-01

Principal component analysis (PCA) of heterogeneous data sets can address the limited scalability of centralized data. To reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs eigenvalue processing using Householder tridiagonalization and QR factorization, calculating the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous data sets show that the method is feasible and reliable in terms of execution time and accuracy.
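
    The eigenvalue machinery named in this abstract, Householder tridiagonalization followed by QR-based eigenvalue iteration, is a standard symmetric-eigensolver pipeline. The sketch below writes out the tridiagonalization step explicitly and checks against numpy's solver (which performs the same reduction internally); it illustrates the generic technique, not the paper's distributed algorithm.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form T = Q^T A Q
    using Householder reflections, the usual first step before QR
    eigenvalue iteration."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0:
            continue                      # column already reduced
        v /= norm_v
        H = np.eye(n)                     # Householder reflector (full size)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        A = H @ A @ H                     # similarity transform
    return A

rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6))
S = (M + M.T) / 2                         # symmetric test matrix

T = householder_tridiagonalize(S)
# Similarity transforms preserve eigenvalues, so T and S must agree.
eig_T = np.sort(np.linalg.eigvalsh(T))
eig_S = np.sort(np.linalg.eigvalsh(S))
```

    A production implementation would accumulate the reflectors instead of forming full matrices, but the rank-1 update structure is the same.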

  18. Principal components analysis of the Neurobehavioral Symptom Inventory in a nonclinical civilian sample.

    PubMed

    Sullivan, Karen A; Lurie, Janine K

    2017-01-01

The study examined the component structure of the Neurobehavioral Symptom Inventory (NSI) under two different models. The evaluated models comprised the full NSI (NSI-22) and the NSI-20 (the NSI minus two orphan items). A civilian nonclinical sample was used. The 575 volunteers were predominantly university students who screened negative for mild TBI. The study design was cross-sectional, with questionnaires administered online. The main measure was the Neurobehavioral Symptom Inventory. Subscale, total and embedded validity scores were derived (the Validity-10, the LOW6, and the NIM5). In both models, the principal components analysis yielded two intercorrelated components (psychological and somatic/sensory) with acceptable internal consistency (alphas > 0.80). In this civilian nonclinical sample, the NSI had two underlying components. These components represent psychological and somatic/sensory neurobehavioral symptoms.

  19. Protein quantification on dendrimer-activated surfaces by using time-of-flight secondary ion mass spectrometry and principal component regression

    NASA Astrophysics Data System (ADS)

    Kim, Young-Pil; Hong, Mi-Young; Shon, Hyun Kyong; Chegal, Won; Cho, Hyun Mo; Moon, Dae Won; Kim, Hak-Sung; Lee, Tae Geol

    2008-12-01

    Interaction between streptavidin and biotin on poly(amidoamine) (PAMAM) dendrimer-activated surfaces and on self-assembled monolayers (SAMs) was quantitatively studied by using time-of-flight secondary ion mass spectrometry (ToF-SIMS). The surface protein density was systematically varied as a function of protein concentration and independently quantified using the ellipsometry technique. Principal component analysis (PCA) and principal component regression (PCR) were used to identify a correlation between the intensities of the secondary ion peaks and the surface protein densities. From the ToF-SIMS and ellipsometry results, a good linear correlation of protein density was found. Our study shows that surface protein densities are higher on dendrimer-activated surfaces than on SAMs surfaces due to the spherical property of the dendrimer, and that these surface protein densities can be easily quantified with high sensitivity in a label-free manner by ToF-SIMS.
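
    Principal component regression as used here (calibrating secondary-ion peak intensities against independently measured protein densities) follows a generic recipe: project the spectra onto the leading principal components, then regress the response on the scores. A sketch with synthetic data; all sizes, the factor structure, and noise levels are invented, not from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calibration set: 30 samples x 80 collinear peak intensities,
# plus an independently measured surface protein density per sample
# (playing the role of the ellipsometry reference values).
n_samples, n_peaks = 30, 80
latent = rng.normal(size=(n_samples, 3))          # 3 hidden factors
loadings = rng.normal(size=(3, n_peaks))
X = latent @ loadings + 0.01 * rng.normal(size=(n_samples, n_peaks))
density = latent @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=n_samples)

# Principal component regression: PCA of the spectra, then ordinary
# least squares of the response on the first k component scores.
k = 3
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T
design = np.column_stack([np.ones(n_samples), scores])
coef, *_ = np.linalg.lstsq(design, density, rcond=None)
pred = design @ coef
r2 = 1 - np.sum((density - pred) ** 2) / np.sum((density - density.mean()) ** 2)
```

    Regressing on a few scores instead of all 80 collinear peaks is what keeps the fit stable, which is the point of PCR over direct least squares here.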

  20. Exploring patterns enriched in a dataset with contrastive principal component analysis.

    PubMed

    Abid, Abubakar; Zhang, Martin J; Bagaria, Vivek K; Zou, James

    2018-05-30

    Visualization and exploration of high-dimensional data is a ubiquitous challenge across disciplines. Widely used techniques such as principal component analysis (PCA) aim to identify dominant trends in one dataset. However, in many settings we have datasets collected under different conditions, e.g., a treatment and a control experiment, and we are interested in visualizing and exploring patterns that are specific to one dataset. This paper proposes a method, contrastive principal component analysis (cPCA), which identifies low-dimensional structures that are enriched in a dataset relative to comparison data. In a wide variety of experiments, we demonstrate that cPCA with a background dataset enables us to visualize dataset-specific patterns missed by PCA and other standard methods. We further provide a geometric interpretation of cPCA and strong mathematical guarantees. An implementation of cPCA is publicly available, and can be used for exploratory data analysis in many applications where PCA is currently used.
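
    The core cPCA computation is compact: take the top eigenvectors of C_target - alpha * C_background, which favours directions with high target variance but low background variance. A sketch with synthetic data; the dimensions, variances, and the single alpha value are illustrative (the paper explores a range of alpha values).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic setting: the target data contain a modest-variance direction
# of interest (axis 1) swamped by strong variation (axis 0) that is
# shared with the background dataset.
n, d = 5000, 10
shared = np.zeros(d)
shared[0] = 1.0                            # dominant shared direction
specific = np.zeros(d)
specific[1] = 1.0                          # target-specific direction
background = (5.0 * rng.normal(size=(n, 1)) * shared
              + 0.1 * rng.normal(size=(n, d)))
target = (5.0 * rng.normal(size=(n, 1)) * shared
          + 2.0 * rng.normal(size=(n, 1)) * specific
          + 0.1 * rng.normal(size=(n, d)))

def cpca_direction(target, background, alpha):
    """Top contrastive principal component: leading eigenvector of
    C_target - alpha * C_background."""
    Ct = np.cov(target, rowvar=False)
    Cb = np.cov(background, rowvar=False)
    w, V = np.linalg.eigh(Ct - alpha * Cb)
    return V[:, np.argmax(w)]

v_pca = cpca_direction(target, background, alpha=0.0)   # ordinary PCA
v_cpca = cpca_direction(target, background, alpha=1.0)  # contrastive
```

    With alpha = 0 the method is ordinary PCA and picks the shared axis; with alpha = 1 the shared variance cancels and the target-specific axis emerges.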

  1. Variability search in M 31 using principal component analysis and the Hubble Source Catalogue

    NASA Astrophysics Data System (ADS)

    Moretti, M. I.; Hatzidimitriou, D.; Karampelas, A.; Sokolovsky, K. V.; Bonanos, A. Z.; Gavras, P.; Yang, M.

    2018-06-01

    Principal component analysis (PCA) is being extensively used in Astronomy but not yet exhaustively exploited for variability search. The aim of this work is to investigate the effectiveness of using the PCA as a method to search for variable stars in large photometric data sets. We apply PCA to variability indices computed for light curves of 18 152 stars in three fields in M 31 extracted from the Hubble Source Catalogue. The projection of the data into the principal components is used as a stellar variability detection and classification tool, capable of distinguishing between RR Lyrae stars, long-period variables (LPVs) and non-variables. This projection recovered more than 90 per cent of the known variables and revealed 38 previously unknown variable stars (about 30 per cent more), all LPVs except for one object of uncertain variability type. We conclude that this methodology can indeed successfully identify candidate variable stars.

  2. A Genealogical Interpretation of Principal Components Analysis

    PubMed Central

    McVean, Gil

    2009-01-01

Principal components analysis (PCA) is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to make inferences about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's F_ST and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
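
    The kind of projection discussed here can be reproduced on synthetic SNP data: PCA of a centered genotype matrix separates two populations with shifted allele frequencies along the first axis. The population sizes and frequency shifts below are invented for illustration; the coalescent-time interpretation itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic genotypes (0/1/2 allele counts): two populations whose
# allele frequencies differ by a random shift at each SNP.
n_per_pop, n_snps = 60, 400
f1 = rng.uniform(0.1, 0.9, size=n_snps)
f2 = np.clip(f1 + rng.normal(0, 0.15, size=n_snps), 0.05, 0.95)
geno = np.vstack([rng.binomial(2, f1, size=(n_per_pop, n_snps)),
                  rng.binomial(2, f2, size=(n_per_pop, n_snps))]).astype(float)

# Center each SNP and project the samples onto the leading axis.
genoc = geno - geno.mean(axis=0)
U, s, Vt = np.linalg.svd(genoc, full_matrices=False)
pc1 = genoc @ Vt[0]
# Samples from the two populations fall on opposite sides of PC1.
```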

  3. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
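
    The re-expression step can be sketched directly: project centered functional covariates onto the first K principal components, then apply a classical F test of the null hypothesis that all K score coefficients are zero. The data-generating model below is synthetic and illustrative, deliberately violating the null so the statistic is large.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic functional data: n curves on a dense grid, with a scalar
# response generated from a functional linear model (integral of the
# curve against a coefficient function, plus noise).
n, n_grid = 200, 50
tgrid = np.linspace(0, 1, n_grid)
basis = np.sin(np.outer(np.arange(1, 5), np.pi * tgrid))   # 4 modes
X = rng.normal(size=(n, 4)) @ basis                        # curves
beta = np.sin(np.pi * tgrid)                               # true coefficient
y = X @ beta / n_grid + 0.1 * rng.normal(size=n)           # Riemann sum + noise

# Re-express as a standard linear model on the first K FPC scores.
K = 4
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:K].T

# Classical F test of H0: no association (all score coefficients zero).
Z = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
rss1 = np.sum((y - Z @ coef) ** 2)        # full model
rss0 = np.sum((y - y.mean()) ** 2)        # intercept-only (null) model
F = ((rss0 - rss1) / K) / (rss1 / (n - K - 1))
```

    Under H0, F follows an F(K, n-K-1) distribution; here the strong signal makes it far larger than any conventional critical value.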

  5. Spatial and temporal variability of hyperspectral signatures of terrain

    NASA Astrophysics Data System (ADS)

    Jones, K. F.; Perovich, D. K.; Koenig, G. G.

    2008-04-01

    Electromagnetic signatures of terrain exhibit significant spatial heterogeneity on a range of scales as well as considerable temporal variability. A statistical characterization of the spatial heterogeneity and spatial scaling algorithms of terrain electromagnetic signatures are required to extrapolate measurements to larger scales. Basic terrain elements including bare soil, grass, deciduous, and coniferous trees were studied in a quasi-laboratory setting using instrumented test sites in Hanover, NH and Yuma, AZ. Observations were made using a visible and near infrared spectroradiometer (350 - 2500 nm) and hyperspectral camera (400 - 1100 nm). Results are reported illustrating: i) several difference scenes; ii) a terrain scene time series sampled over an annual cycle; and iii) the detection of artifacts in scenes. A principal component analysis indicated that the first three principal components typically explained between 90 and 99% of the variance of the 30 to 40-channel hyperspectral images. Higher order principal components of hyperspectral images are useful for detecting artifacts in scenes.

  6. Temporal trend and climate factors of hemorrhagic fever with renal syndrome epidemic in Shenyang City, China

    PubMed Central

    2011-01-01

Background Hemorrhagic fever with renal syndrome (HFRS) is an important infectious disease caused by different species of hantaviruses. As a rodent-borne disease with a seasonal distribution, external environmental factors including climate factors may play a significant role in its transmission. The city of Shenyang is one of the most seriously endemic areas for HFRS. Here, we characterized the dynamic temporal trend of HFRS, and identified climate-related risk factors and their roles in HFRS transmission in Shenyang, China. Methods The annual and monthly cumulative numbers of HFRS cases from 2004 to 2009 were calculated and plotted to show the annual and seasonal fluctuation in Shenyang. Cross-correlation and autocorrelation analyses were performed to detect the lagged effect of climate factors on HFRS transmission and the autocorrelation of monthly HFRS cases. Principal component analysis was performed on climate data from 2004 to 2009 to extract principal components of climate factors and reduce co-linearity. The extracted principal components and autocorrelation terms of monthly HFRS cases were entered into a multiple regression model, the principal components regression (PCR) model, to quantify the relationship between climate factors, autocorrelation terms and transmission of HFRS. The PCR model was compared to a general multiple regression model conducted only with climate factors as independent variables. Results A distinctly declining temporal trend of annual HFRS incidence was identified. HFRS cases were reported every month, and the two peak periods occurred in spring (March to May) and winter (November to January), during which nearly 75% of the HFRS cases were reported. Three principal components were extracted with a cumulative contribution rate of 86.06%. Component 1 represented MinRH0, MT1, RH1, and MWV1; component 2 represented RH2, MaxT3, and MAP3; and component 3 represented MaxT2, MAP2, and MWV2. The PCR model was composed of three principal components and two autocorrelation terms. The association between HFRS epidemics and climate factors was better explained by the PCR model (F = 446.452, P < 0.001, adjusted R2 = 0.75) than by the general multiple regression model (F = 223.670, P < 0.001, adjusted R2 = 0.51). Conclusion The temporal distribution of HFRS in Shenyang varied in different years with a distinctly declining trend. The monthly trends of HFRS were significantly associated with local temperature, relative humidity, precipitation, air pressure, and wind velocity of the different previous months. The model developed in this study will make HFRS surveillance simpler and the control of HFRS more targeted in Shenyang. PMID:22133347

  7. 14 CFR 119.47 - Maintaining a principal base of operations, main operations base, and main maintenance base...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Maintaining a principal base of operations, main operations base, and main maintenance base; change of address. 119.47 Section 119.47 Aeronautics... Under Part 121 or Part 135 of This Chapter § 119.47 Maintaining a principal base of operations, main...

  8. 14 CFR 119.47 - Maintaining a principal base of operations, main operations base, and main maintenance base...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Maintaining a principal base of operations, main operations base, and main maintenance base; change of address. 119.47 Section 119.47 Aeronautics... Under Part 121 or Part 135 of This Chapter § 119.47 Maintaining a principal base of operations, main...

  9. 14 CFR 119.47 - Maintaining a principal base of operations, main operations base, and main maintenance base...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Maintaining a principal base of operations, main operations base, and main maintenance base; change of address. 119.47 Section 119.47 Aeronautics... Under Part 121 or Part 135 of This Chapter § 119.47 Maintaining a principal base of operations, main...

  10. 14 CFR 119.47 - Maintaining a principal base of operations, main operations base, and main maintenance base...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Maintaining a principal base of operations, main operations base, and main maintenance base; change of address. 119.47 Section 119.47 Aeronautics... Under Part 121 or Part 135 of This Chapter § 119.47 Maintaining a principal base of operations, main...

  11. Reduced nonlinear prognostic model construction from high-dimensional data

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2017-04-01

Construction of a data-driven model of an evolution operator using universal approximating functions can be statistically justified only when the dimension of its phase space is small enough, especially in the case of short time series. At the same time, in many applications the measured data are high-dimensional, e.g. space-distributed and multivariate in climate science. It is therefore necessary to use efficient dimensionality reduction methods which are also able to capture key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to evolution operator construction which incorporates two key reduction steps. First, the data are decomposed into a set of empirical modes, such as standard empirical orthogonal functions or the recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of corresponding principal components (PCs) is obtained. Then, a model of the evolution operator for the PCs is constructed which maps a number of states in the past to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separate linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one makes it easier to interpret the degree of nonlinearity and to deal with smooth PCs, which can naturally occur in decompositions like NDM, as they provide a time-scale separation. Results of applying the proposed method to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of the RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
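
    The two-step construction above (reduce to principal components, then fit an evolution operator mapping a past state to the current one) can be sketched in its simplest form, keeping only the linear term of the operator. The dynamics and dimensions below are invented for illustration; the paper's Bayesian treatment and the nonlinear and stochastic terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic high-dimensional series: 40 spatial channels driven by a
# 2-dimensional damped-rotation dynamics, mixed linearly, plus noise.
T, d, k = 600, 40, 2
A_true = 0.95 * np.array([[np.cos(0.3), -np.sin(0.3)],
                          [np.sin(0.3),  np.cos(0.3)]])
z = np.zeros((T, k))
z[0] = rng.normal(size=k)
for t in range(1, T):
    z[t] = z[t - 1] @ A_true.T + 0.05 * rng.normal(size=k)
mixing = rng.normal(size=(k, d))
data = z @ mixing + 0.01 * rng.normal(size=(T, d))

# Step 1: dimensionality reduction. Empirical orthogonal functions
# (PCA) give the reduced phase space of principal components.
datac = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(datac, full_matrices=False)
pcs = datac @ Vt[:k].T

# Step 2: fit a linear evolution operator mapping the past PC state to
# the current one by least squares (the linear term of the model).
A_fit, *_ = np.linalg.lstsq(pcs[:-1], pcs[1:], rcond=None)
resid = pcs[1:] - pcs[:-1] @ A_fit
```

    Because PCA recovers the latent plane only up to a linear change of basis, A_fit is a conjugate of A_true rather than equal to it; one-step prediction quality is basis-independent.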

  12. HBCUs Research Conference Agenda and Abstracts

    NASA Technical Reports Server (NTRS)

    Dutta, Sunil (Compiler)

    1997-01-01

The purpose of this Historically Black Colleges and Universities (HBCUs) Research Conference was to provide an opportunity for principal investigators and their students to present research progress reports. The abstracts included in this report indicate the range and quality of research topics such as aeropropulsion, space propulsion, space power, fluid dynamics, designs, structures and materials being funded through grants from Lewis Research Center to HBCUs. The conference generated extensive networking between students, principal investigators, Lewis technical monitors, and other Lewis researchers.

  13. HBCUs Research Conference Agenda and Abstracts

    NASA Technical Reports Server (NTRS)

    Dutta, Sunil (Compiler)

    1998-01-01

    The purpose of this Historically Black Colleges and Universities (HBCUs) Research Conference was to provide an opportunity for principal investigators and their students to present research progress reports. The abstracts included in this report indicate the range and quality of research topics such as aeropropulsion, space propulsion, space power, fluid dynamics, designs, structures and materials being funded through grants from Lewis Research Center to HBCUs. The conference generated extensive networking between students, principal investigators, Lewis technical monitors, and other Lewis researchers.

  14. HBCUs Research Conference agenda and abstracts

    NASA Technical Reports Server (NTRS)

    Dutta, Sunil (Compiler)

    1995-01-01

    The purpose of this Historically Black Colleges and Universities (HBCUs) Research conference was to provide an opportunity for principal investigators and their students to present research progress reports. The abstracts included in this report indicate the range and quality of research topics such as aeropropulsion, space propulsion, space power, fluid dynamics, designs, structures and materials being funded through grants from Lewis Research Center to HBCUs. The conference generated extensive networking between students, principal investigators, Lewis technical monitors, and other Lewis researchers.

  15. Advances in space robotics

    NASA Technical Reports Server (NTRS)

    Varsi, Giulio

    1989-01-01

    The problem of the remote control of space operations is addressed by identifying the key technical challenge: the management of contact forces and the principal performance parameters. Three principal classes of devices for remote operation are identified: anthropomorphic exoskeletons, computer aided teleoperators, and supervised telerobots. Their fields of application are described, and areas in which progress has reached the level of system or subsystem laboratory demonstrations are indicated. Key test results, indicating performance at a level useful for design tradeoffs, are reported.

  16. Localized spatially nonlinear matter waves in atomic-molecular Bose-Einstein condensates with space-modulated nonlinearity

    PubMed Central

    Yao, Yu-Qin; Li, Ji; Han, Wei; Wang, Deng-Shan; Liu, Wu-Ming

    2016-01-01

The intrinsic nonlinearity is the most remarkable characteristic of Bose-Einstein condensate (BEC) systems. Many studies have been done on atomic BECs with time- and space-modulated nonlinearities, while little work has considered atomic-molecular BECs with space-modulated nonlinearities. Here, we obtain two kinds of Jacobi elliptic solutions and a family of rational solutions of the atomic-molecular BECs with trapping potential and space-modulated nonlinearity, and consider the effect of three-body interaction on the localized matter wave solutions. The topological properties of the localized nonlinear matter wave for zero coupling are analysed: the parity of the nonlinear matter wave functions depends only on the principal quantum number n, and the number of density packets for each quantum state depends on both the principal quantum number n and the secondary quantum number l. When the coupling is not zero, the topological properties of the localized nonlinear matter waves given by the rational function are independent of the principal quantum number n and depend only on the secondary quantum number l. The Raman detuning and the chemical potential can change the number and the shape of the density packets. The stability of the Jacobi elliptic solutions depends on the principal quantum number n, while the stability of the rational solutions depends on the chemical potential and the Raman detuning. PMID:27403634

  17. Quantum Bundle Description of Quantum Projective Spaces

    NASA Astrophysics Data System (ADS)

    Ó Buachalla, Réamonn

    2012-12-01

We realise Heckenberger and Kolb's canonical calculus on quantum projective (N-1)-space C_q[CP^(N-1)] as the restriction of a distinguished quotient of the standard bicovariant calculus for the quantum special unitary group C_q[SU_N]. We introduce a calculus on the quantum sphere C_q[S^(2N-1)] in the same way. With respect to these choices of calculi, we present C_q[CP^(N-1)] as the base space of two different quantum principal bundles, one with total space C_q[SU_N], and the other with total space C_q[S^(2N-1)]. We go on to give C_q[CP^(N-1)] the structure of a quantum framed manifold. More specifically, we describe the module of one-forms of Heckenberger and Kolb's calculus as an associated vector bundle to the principal bundle with total space C_q[SU_N]. Finally, we construct strong connections for both bundles.

  18. Construct validity of the abbreviated mental test in older medical inpatients.

    PubMed

    Antonelli Incalzi, R; Cesari, M; Pedone, C; Carosella, L; Carbonin, P U

    2003-01-01

To evaluate the validity and internal structure of the Abbreviated Mental Test (AMT), and to assess the dependence of the internal structure upon the characteristics of the patients examined. Cross-sectional examination using data from the Italian Group of Pharmacoepidemiology in the Elderly (GIFA) database. Twenty-four acute care wards of Geriatrics or General Medicine. Two thousand eight hundred and eight patients consecutively admitted over a 4-month period. Demographic characteristics, functional status, medical conditions and performance on the AMT were collected at discharge. Sensitivity, specificity and predictive values of AMT <7 versus a diagnosis of dementia made according to DSM-III-R criteria were computed. The internal structure of the AMT was assessed by principal component analysis. The analysis was performed on the whole population and stratified for age (<65, 65-80 and >80 years), gender, education (<6 or >5 years) and presence of congestive heart failure (CHF). The AMT achieved high sensitivity (81%), specificity (84%) and negative predictive value (99%), but a low positive predictive value of 25%. The principal component analysis isolated two components: the former represents orientation to time and space and explains 45% of AMT variance; the latter is linked to memory and attention and explains 13% of the variance. Comparable results were obtained after stratification by age, gender or education. In patients with CHF, only 48.3% of the cumulative variance was explained; the factor accounting for most (34.6%) of the variance explained was mainly related to the three items assessing memory. AMT >6 rules out dementia very reliably, whereas AMT <7 requires a second-level cognitive assessment to confirm dementia. The AMT is bidimensional and maintains the same internal structure across classes defined by selected social and demographic characteristics, but not in CHF patients. It is likely that its internal structure depends on the type of patients. The use of a sum score could conceal some of the information provided by the AMT. Copyright 2003 S. Karger AG, Basel

  19. Animal reservoir, natural and socioeconomic variations and the transmission of hemorrhagic fever with renal syndrome in Chenzhou, China, 2006-2010.

    PubMed

    Xiao, Hong; Tian, Huai-Yu; Gao, Li-Dong; Liu, Hai-Ning; Duan, Liang-Song; Basta, Nicole; Cazelles, Bernard; Li, Xiu-Jun; Lin, Xiao-Ling; Wu, Hong-Wei; Chen, Bi-Yun; Yang, Hui-Suo; Xu, Bing; Grenfell, Bryan

    2014-01-01

China has the highest incidence of hemorrhagic fever with renal syndrome (HFRS) worldwide. Reported cases account for 90% of the total number of global cases. By 2010, approximately 1.4 million HFRS cases had been reported in China. This study aimed to explore the effect of the rodent reservoir and natural and socioeconomic variables on the transmission pattern of HFRS. Data on monthly HFRS cases were collected from 2006 to 2010. Dynamic rodent monitoring data, normalized difference vegetation index (NDVI) data, climate data, and socioeconomic data were also obtained. Principal component analysis was performed, and the time-lag relationships between the extracted principal components and HFRS cases were analyzed. Polynomial distributed lag (PDL) models were used to fit and forecast HFRS transmission. Four principal components were extracted. Component 1 (F1) represented rodent density, the NDVI, and monthly average temperature. Component 2 (F2) represented monthly average rainfall and monthly average relative humidity. Component 3 (F3) represented rodent density and monthly average relative humidity. The last component (F4) represented gross domestic product and the urbanization rate. F2, F3, and F4 were significantly correlated with the monthly HFRS incidence, with lags of 4 months (r = -0.289, P<0.05), 5 months (r = -0.523, P<0.001), and 0 months (r = -0.376, P<0.01), respectively. F1 was correlated with the monthly HFRS incidence, with a lag of 4 months (r = 0.179, P = 0.192). Multivariate PDL modeling revealed that the four principal components were significantly associated with the transmission of HFRS. The monthly trend in HFRS cases was significantly associated with the local rodent reservoir, climatic factors, the NDVI, and socioeconomic conditions present during the previous months. The findings of this study may facilitate the development of early warning systems for the control and prevention of HFRS and similar diseases.
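
    The polynomial distributed lag (PDL) model used here has a standard construction, often called an Almon lag: the lag coefficients are constrained to lie on a low-degree polynomial, which collapses many collinear lag regressors into a few. A sketch with a synthetic monthly series; the series length, lag window, and quadratic lag profile are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly data: an outcome responds to a climate factor at
# lags 0..5 months with a smooth (exactly quadratic) lag profile.
T, L, p = 240, 5, 2
xclim = rng.normal(size=T + L)
lags = np.arange(L + 1)
true_lag = 0.9 - 0.1 * (lags - 2.5) ** 2          # quadratic lag weights
lagged = np.column_stack([xclim[L - l: T + L - l] for l in lags])
y = lagged @ true_lag + 0.1 * rng.normal(size=T)

# PDL (Almon) constraint: beta_l = a_0 + a_1*l + ... + a_p*l**p.
# This collapses the L+1 lag regressors into p+1 transformed ones,
# Z = lagged @ P, where P holds the polynomial basis in l.
P = np.vander(lags, p + 1, increasing=True)       # (L+1) x (p+1)
Z = lagged @ P
a, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = P @ a                                  # recovered lag weights
```

    Fitting p+1 = 3 coefficients instead of L+1 = 6 is what tames the collinearity of adjacent monthly lags, at the cost of assuming a smooth lag profile.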

  20. Multivariate classification of small order watersheds in the Quabbin Reservoir Basin, Massachusetts

    USGS Publications Warehouse

    Lent, R.M.; Waldron, M.C.; Rader, J.C.

    1998-01-01

    A multivariate approach was used to analyze hydrologic, geologic, geographic, and water-chemistry data from small-order watersheds in the Quabbin Reservoir Basin in central Massachusetts. Eighty-three small-order watersheds were delineated, and landscape attributes defining hydrologic, geologic, and geographic features of the watersheds were compiled from geographic information system data layers. Principal components analysis was used to evaluate 11 chemical constituents collected bi-weekly for 1 year at 15 surface-water stations in order to subdivide the basin into subbasins comprising watersheds with similar water-quality characteristics. Three principal components accounted for about 90 percent of the variance in the water-chemistry data. The principal components were defined as a biogeochemical variable related to wetland density, an acid-neutralization variable, and a road-salt variable related to the density of primary roads. Three subbasins were identified. Analysis of variance and multiple comparisons of means were used to identify significant differences in stream-water chemistry and landscape attributes among subbasins. All stream-water constituents were significantly different among subbasins. Multiple regression techniques were used to relate stream-water chemistry to landscape attributes. Important differences in landscape attributes were related to wetlands, slope, and soil type.
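    The "three components account for about 90 percent of the variance" finding corresponds to a standard cumulative-explained-variance calculation. A minimal sketch, using synthetic stand-ins for the 11 constituents rather than the actual water-chemistry data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 11 chemical constituents sampled bi-weekly for a
# year at 15 stations; three latent drivers mimic the structure reported.
n, p = 15 * 26, 11
latent = rng.normal(size=(n, 3))
loadings = rng.normal(size=(3, p))
X = latent @ loadings + 0.3 * rng.normal(size=(n, p))

# Explained variance from singular values of the centered data matrix
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)               # per-component variance fraction
cumulative = np.cumsum(explained)

# Number of components needed to reach 90 percent of the variance
k90 = int(np.searchsorted(cumulative, 0.90) + 1)
```

    With three strong latent drivers, `k90` comes out small, mirroring the three interpretable components found in the basin data.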

  1. Influential Observations in Principal Factor Analysis.

    ERIC Educational Resources Information Center

    Tanaka, Yutaka; Odaka, Yoshimasa

    1989-01-01

    A method is proposed for detecting influential observations in iterative principal factor analysis. Theoretical influence functions are derived for two components of the common variance decomposition. The major mathematical tool is the influence function derived by Tanaka (1988). (SLD)

  2. Exploring the Intentions and Practices of Principals Regarding Inclusive Education: An Application of the Theory of Planned Behaviour

    ERIC Educational Resources Information Center

    Yan, Zi; Sin, Kuen-fung

    2015-01-01

    This study aimed at providing explanation and prediction of principals' inclusive education intentions and practices under the framework of the Theory of Planned Behaviour (TPB). A sample of 209 principals from Hong Kong schools was surveyed using five scales that were developed to assess the five components of TPB: attitude, subjective norm,…

  3. Encounter complexes and dimensionality reduction in protein-protein association.

    PubMed

    Kozakov, Dima; Li, Keyong; Hall, David R; Beglov, Dmitri; Zheng, Jiefu; Vakili, Pirooz; Schueler-Furman, Ora; Paschalidis, Ioannis Ch; Clore, G Marius; Vajda, Sandor

    2014-04-08

    An outstanding challenge has been to understand the mechanism whereby proteins associate. We report here the results of exhaustively sampling the conformational space in protein-protein association using a physics-based energy function. The agreement between experimental intermolecular paramagnetic relaxation enhancement (PRE) data and the PRE profiles calculated from the docked structures shows that the method captures both specific and non-specific encounter complexes. To explore the energy landscape in the vicinity of the native structure, the nonlinear manifold describing the relative orientation of two solid bodies is projected onto a Euclidean space in which the shape of low energy regions is studied by principal component analysis. Results show that the energy surface is canyon-like, with a smooth funnel within a two dimensional subspace capturing over 75% of the total motion. Thus, proteins tend to associate along preferred pathways, similar to sliding of a protein along DNA in the process of protein-DNA recognition. DOI: http://dx.doi.org/10.7554/eLife.01370.001.

  4. Encounter complexes and dimensionality reduction in protein–protein association

    PubMed Central

    Kozakov, Dima; Li, Keyong; Hall, David R; Beglov, Dmitri; Zheng, Jiefu; Vakili, Pirooz; Schueler-Furman, Ora; Paschalidis, Ioannis Ch; Clore, G Marius; Vajda, Sandor

    2014-01-01

    An outstanding challenge has been to understand the mechanism whereby proteins associate. We report here the results of exhaustively sampling the conformational space in protein–protein association using a physics-based energy function. The agreement between experimental intermolecular paramagnetic relaxation enhancement (PRE) data and the PRE profiles calculated from the docked structures shows that the method captures both specific and non-specific encounter complexes. To explore the energy landscape in the vicinity of the native structure, the nonlinear manifold describing the relative orientation of two solid bodies is projected onto a Euclidean space in which the shape of low energy regions is studied by principal component analysis. Results show that the energy surface is canyon-like, with a smooth funnel within a two dimensional subspace capturing over 75% of the total motion. Thus, proteins tend to associate along preferred pathways, similar to sliding of a protein along DNA in the process of protein-DNA recognition. DOI: http://dx.doi.org/10.7554/eLife.01370.001 PMID:24714491

  5. The benefits of adaptive parametrization in multi-objective Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John

    2010-10-01

    In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective: higher-quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
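    The PCA-based re-alignment and design-space reduction can be sketched as follows; the archive of design vectors and the 99% variance cut-off are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical archive (approximation set) of design vectors in a 5-D
# design space, with most variation confined to two directions.
archive = rng.normal(size=(40, 5)) @ np.diag([3.0, 2.0, 0.5, 0.1, 0.05])

centered = archive - archive.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
directions = eigvecs[:, order]                # re-aligned search directions
variance = eigvals[order]

# Informed reduction: keep only the directions carrying 99% of the variance
cum = np.cumsum(variance) / variance.sum()
keep = int(np.searchsorted(cum, 0.99) + 1)
reduced = directions[:, :keep]                # smaller, better-aligned space
```

    Subsequent pattern-search moves would then step along the columns of `reduced` instead of the original coordinate axes.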

  6. Exploring the effect of asymmetric mitochondrial DNA introgression on estimating niche divergence in morphologically cryptic species.

    PubMed

    Wielstra, Ben; Arntzen, Jan W

    2014-01-01

    If potential morphologically cryptic species, identified based on differentiated mitochondrial DNA, express ecological divergence, this increases support for their treatment as distinct species. However, mitochondrial DNA introgression hampers the correct estimation of ecological divergence. We test the hypothesis that estimated niche divergence differs when considering nuclear DNA composition or mitochondrial DNA type as representing the true species range. We use empirical data of two crested newt species (Amphibia: Triturus) which possess introgressed mitochondrial DNA from a third species in part of their ranges. We analyze the data in environmental space by determining Fisher distances in a principal component analysis and in geographical space by determining geographical overlap of species distribution models. We find that under mtDNA guidance in one of the two study cases niche divergence is overestimated, whereas in the other it is underestimated. In the light of our results we discuss the role of estimated niche divergence in species delineation.

  7. KSC-2011-7876

    NASA Image and Video Library

    2011-11-22

    CAPE CANAVERAL, Fla. – At NASA’s Kennedy Space Center in Florida, several scientists and researchers participate in a “Looking for Signs of Life in the Universe” news conference, Nov. 22, as part of preflight activities for the Mars Science Laboratory (MSL) mission. From left, are NASA Astrobiology Director Mary Voytek; Professor Jamie Foster from the Department of Microbiology and Cell Science at the University of Florida in Gainesville; MSL Deputy Principal Investigator Pan Conrad; Director of the Foundation for Applied Molecular Evolution Steven Benner; and NASA Planetary Protection Officer Catharine Conley. MSL’s components include a car-sized rover, Curiosity, which has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Launch of MSL aboard a United Launch Alliance Atlas V rocket is targeted for Nov. 26 from Space Launch Complex 41 on Cape Canaveral Air Force Station in Florida. For more information, visit http://www.nasa.gov/msl. Photo credit: NASA/Kim Shiflett

  8. Using Model-Based Reasoning for Autonomous Instrument Operation - Lessons Learned From IMAGE/LENA

    NASA Technical Reports Server (NTRS)

    Johnson, Michael A.; Rilee, Michael L.; Truszkowski, Walt; Bailin, Sidney C.

    2001-01-01

    Model-based reasoning has been applied as an autonomous control strategy on the Low Energy Neutral Atom (LENA) instrument currently flying on board the Imager for Magnetosphere-to-Aurora Global Exploration (IMAGE) spacecraft. Explicit models of instrument subsystem responses have been constructed and are used to dynamically adapt the instrument to the spacecraft's environment. These functions are cast as part of a Virtual Principal Investigator (VPI) that autonomously monitors and controls the instrument. In the VPI's current implementation, LENA's command uplink volume has been decreased significantly from its previous volume; typically, no uplinks are required for operations. This work demonstrates that a model-based approach can be used to enhance science instrument effectiveness. The components of LENA are common in space science instrumentation, and lessons learned by modeling this system may be applied to other instruments. Future work involves the extension of these methods to cover more aspects of LENA operation and the generalization to other space science instrumentation.

  9. Learning Space Service Design

    ERIC Educational Resources Information Center

    Felix, Elliot

    2011-01-01

    Much progress has been made in creating informal learning spaces that incorporate technology and flexibly support a variety of activities. This progress has been principally in designing the right combination of furniture, technology, and space. However, colleges and universities do not design services within learning spaces with nearly the same…

  10. Biomechanical implications of intraspecific shape variation in chimpanzee crania: moving towards an integration of geometric morphometrics and finite element analysis

    PubMed Central

    Smith, Amanda L.; Benazzi, Stefano; Ledogar, Justin A.; Tamvada, Kelli; Smith, Leslie C. Pryor; Weber, Gerhard W.; Spencer, Mark A.; Dechow, Paul C.; Grosse, Ian R.; Ross, Callum F.; Richmond, Brian G.; Wright, Barth W.; Wang, Qian; Byron, Craig; Slice, Dennis E.; Strait, David S.

    2014-01-01

    In a broad range of evolutionary studies, an understanding of intraspecific variation is needed in order to contextualize and interpret the meaning of variation between species. However, mechanical analyses of primate crania using experimental or modeling methods typically encounter logistical constraints that force them to rely on data gathered from only one or a few individuals. This results in a lack of knowledge concerning the mechanical significance of intraspecific shape variation that limits our ability to infer the significance of interspecific differences. This study uses geometric morphometric methods (GM) and finite element analysis (FEA) to examine the biomechanical implications of shape variation in chimpanzee crania, thereby providing a comparative context in which to interpret shape-related mechanical variation between hominin species. Six finite element models (FEMs) of chimpanzee crania were constructed from CT scans following shape-space Principal Component Analysis (PCA) of a matrix of 709 Procrustes coordinates (digitized onto 21 specimens) to identify the individuals at the extremes of the first three principal components. The FEMs were assigned the material properties of bone and were loaded and constrained to simulate maximal bites on the P3 and M2. Resulting strains indicate that intraspecific cranial variation in morphology is associated with quantitatively high levels of variation in strain magnitudes, but qualitatively little variation in the distribution of strain concentrations. Thus, interspecific comparisons should include considerations of the spatial patterning of strains rather than focusing only on their magnitude. PMID:25529239
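    The specimen-selection step (extreme individuals along the leading principal components of Procrustes shape space) can be sketched with synthetic coordinates of the dimensions quoted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Procrustes-aligned shape data: 21 specimens x 709 coordinates
# (the dimensions quoted in the abstract; the values are synthetic).
coords = 0.01 * rng.normal(size=(21, 709))

centered = coords - coords.mean(axis=0)
# With far fewer specimens than coordinates, SVD of the data matrix is
# cheaper than eigendecomposition of the 709 x 709 covariance matrix.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * s                                # specimen scores in shape space

# Specimens at the extremes of PC1 -- the individuals one would select
# for finite element model construction.
pc1 = scores[:, 0]
extremes = (int(np.argmin(pc1)), int(np.argmax(pc1)))
```

    Repeating the min/max selection on the second and third score columns yields the six extreme individuals used for the FEMs.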

  11. Effects of mutation, truncation and temperature on the folding kinetics of a WW domain

    PubMed Central

    Maisuradze, Gia G.; Zhou, Rui; Liwo, Adam; Xiao, Yi; Scheraga, Harold A.

    2013-01-01

    The purpose of this work is to show how mutation, truncation and change of temperature can influence the folding kinetics of a protein. This is accomplished by principal component analysis (PCA) of molecular dynamics (MD)-generated folding trajectories of the triple β-strand WW domain from the Formin binding protein 28 (FBP) [PDB: 1E0L] and its full-size, and singly- and doubly-truncated mutants at temperatures below and very close to the melting point. The reasons for biphasic folding kinetics [i.e., coexistence of slow (three-state) and fast (two-state) phases], including the involvement of a solvent-exposed hydrophobic cluster and another delocalized hydrophobic core in the folding kinetics, are discussed. New folding pathways are identified in free-energy landscapes determined in terms of principal components for full-size mutants. Three-state folding is found to be a main mechanism for folding the FBP28 WW domain and most of the full-size and truncated mutants. The results from the theoretical analysis are compared to those from experiment. Agreements and discrepancies between the theoretical and experimental results are discussed. Because of its importance in understanding protein kinetics and function, the diffusive mechanism by which the FBP28 WW domain and its full-size and truncated mutants explore their conformational space is examined in terms of the mean-square displacement (MSD) and PCA eigenvalue spectrum analyses. Subdiffusive behavior is observed for all studied systems. PMID:22560992
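    The MSD-based diffusion analysis mentioned at the end can be sketched for a single coordinate; the random-walk trajectory below is a stand-in for motion along a principal component, so the fitted exponent is close to 1 rather than the subdiffusive values reported for the real systems.

```python
import numpy as np

rng = np.random.default_rng(7)

# A 1-D random walk standing in for motion along one principal component
# of a folding trajectory (synthetic; normal diffusion by construction).
traj = np.cumsum(rng.normal(size=10000))

def msd(x, max_lag):
    """Time-averaged mean-square displacement <(x(t+tau) - x(t))^2>."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

lags = np.arange(1, 101)
m = msd(traj, 100)

# MSD ~ tau^alpha: alpha ~ 1 for normal diffusion, alpha < 1 for subdiffusion
alpha = float(np.polyfit(np.log(lags), np.log(m), 1)[0])
```

    Applying the same fit to principal-component projections of an MD trajectory is what reveals the subdiffusive (alpha < 1) behavior described above.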

  12. Space-Based Measurements of CO2 from the Japanese Greenhouse Gases Observing Satellite (GOSAT) and the NASA Orbiting Carbon Observatory-2 (OCO-2) Missions

    NASA Technical Reports Server (NTRS)

    Crisp, David

    2011-01-01

    Space-based remote sensing observations hold substantial promise for future long-term monitoring of CO2 and other greenhouse gases. The principal advantages of space based measurements include: (1) Spatial coverage (especially over oceans and tropical land) (2) Sampling density (needed to resolve CO2 weather). The principal challenge is the need for high precision To reach their full potential, space based CO2 measurements must be validated against surface measurements to ensure their accuracy. The TCCON network is providing the transfer standard There is a need for a long-term vision to establish and address community priorities (1) Must incorporate ground, air, space-based assets and models (2) Must balance calls for new observations with need to maintain climate data records.

  13. Big Data in Reciprocal Space: Sliding Fast Fourier Transforms for Determining Periodicity

    DOE PAGES

    Vasudevan, Rama K.; Belianinov, Alex; Gianfrancesco, Anthony G.; ...

    2015-03-03

    Significant advances in atomically resolved imaging of crystals and surfaces have occurred in the last decade, allowing unprecedented insight into local crystal structures and periodicity. Yet the analysis of long-range periodicity from local imaging data, critical to correlating functional properties and chemistry with the local crystallography, remains a challenge. Here, we introduce a Sliding Fast Fourier Transform (FFT) filter to analyze atomically resolved images of in-situ grown La5/8Ca3/8MnO3 films. We demonstrate the ability of the sliding FFT algorithm to differentiate two sub-lattices resulting from a mixed-terminated surface. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) of the Sliding FFT dataset reveal the distinct changes in crystallography, step edges and boundaries between the multiple sub-lattices. The method is universal for images with any periodicity, and is especially amenable to atomically resolved probe and electron-microscopy data for rapid identification of the sub-lattices present.
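    A minimal sketch of a sliding FFT filter of this kind, applied to a synthetic two-period image rather than the La5/8Ca3/8MnO3 data; the window size, step, and median split on the first principal component are illustrative choices, not the authors' parameters:

```python
import numpy as np

# Synthetic "atomically resolved" image: left half has period 4, right half
# period 8, standing in for two sub-lattices on a mixed-terminated surface.
N, win, step = 128, 32, 16
y, x = np.mgrid[0:N, 0:N]
image = np.where(x < N // 2,
                 np.sin(2 * np.pi * x / 4),
                 np.sin(2 * np.pi * x / 8))

# Sliding FFT: magnitude spectrum of each window
spectra, positions = [], []
for i in range(0, N - win + 1, step):
    for j in range(0, N - win + 1, step):
        patch = image[i:i + win, j:j + win]
        spectra.append(np.abs(np.fft.fft2(patch)).ravel())
        positions.append((i, j))              # window origin in the image

spectra = np.array(spectra)

# PCA of the window spectra; the leading component separates the regions
centered = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc1 = U[:, 0] * s[0]
labels = pc1 > np.median(pc1)                 # crude two-sublattice split
```

    Mapping `labels` back to the stored `positions` produces a spatial map of which sub-lattice each window belongs to.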

  14. Big Data in Reciprocal Space: Sliding Fast Fourier Transforms for Determining Periodicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasudevan, Rama K.; Belianinov, Alex; Gianfrancesco, Anthony G.

    Significant advances in atomically resolved imaging of crystals and surfaces have occurred in the last decade, allowing unprecedented insight into local crystal structures and periodicity. Yet the analysis of long-range periodicity from local imaging data, critical to correlating functional properties and chemistry with the local crystallography, remains a challenge. Here, we introduce a Sliding Fast Fourier Transform (FFT) filter to analyze atomically resolved images of in-situ grown La5/8Ca3/8MnO3 films. We demonstrate the ability of the sliding FFT algorithm to differentiate two sub-lattices resulting from a mixed-terminated surface. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) of the Sliding FFT dataset reveal the distinct changes in crystallography, step edges and boundaries between the multiple sub-lattices. The method is universal for images with any periodicity, and is especially amenable to atomically resolved probe and electron-microscopy data for rapid identification of the sub-lattices present.

  15. Machine learning and data science in soft materials engineering

    NASA Astrophysics Data System (ADS)

    Ferguson, Andrew L.

    2018-01-01

    In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by ‘de-jargonizing’ data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.

  16. Machine learning and data science in soft materials engineering.

    PubMed

    Ferguson, Andrew L

    2018-01-31

    In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by 'de-jargonizing' data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.

  17. Self organising maps for visualising and modelling

    PubMed Central

    2012-01-01

    The paper describes the motivation of SOMs (Self Organising Maps) and how they have become more accessible with the wide availability of modern, more powerful, cost-effective computers. Their advantages compared to Principal Components Analysis and Partial Least Squares are discussed: SOMs can be applied to non-linear data, are less dependent on least-squares solutions and the normality of errors, and are less influenced by outliers. In addition, there is a wide variety of intuitive methods for visualisation that allow full use of the map space. Modern problems in analytical chemistry, including applications to cultural heritage, environmental, metabolomic and biological studies, result in complex datasets. Methods for visualising maps are described, including best matching units, hit histograms, unified distance matrices and component planes. Supervised SOMs for classification, including multifactor data and variable selection, are discussed, as is their use in Quality Control. The paper is illustrated using four case studies, namely the Near Infrared analysis of food, the thermal analysis of polymers, metabolomic analysis of saliva using NMR, and on-line HPLC for pharmaceutical process monitoring. PMID:22594434
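    A minimal online SOM, ending with a hit histogram (one of the visualisations mentioned above); the grid size, learning schedule, and toy two-cluster data are illustrative assumptions, not the paper's case studies:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: two separated clusters standing in for, e.g., two classes of spectra
data = np.vstack([rng.normal(0.0, 0.1, (50, 3)),
                  rng.normal(1.0, 0.1, (50, 3))])

rows, cols = 5, 5
weights = rng.normal(0.5, 0.1, (rows * cols, 3))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

# Online SOM training: pull the best matching unit (BMU) and its grid
# neighbours toward each sample, with shrinking radius and learning rate.
for t in range(2000):
    sample = data[rng.integers(len(data))]
    bmu = np.argmin(np.sum((weights - sample) ** 2, axis=1))
    radius = 2.5 * np.exp(-t / 500.0)
    rate = 0.5 * np.exp(-t / 1000.0)
    dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-dist2 / (2.0 * radius ** 2))  # neighbourhood function
    weights += rate * h[:, None] * (sample - weights)

# Hit histogram: how many samples map to each unit of the trained map
hits = np.bincount([int(np.argmin(np.sum((weights - v) ** 2, axis=1)))
                    for v in data], minlength=rows * cols)
```

    Unified distance matrices and component planes are further views of the same trained `weights` array, comparing each unit to its grid neighbours or plotting one input variable across the map.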

  18. Topological patterns of mesh textures in serpentinites

    NASA Astrophysics Data System (ADS)

    Miyazawa, M.; Suzuki, A.; Shimizu, H.; Okamoto, A.; Hiraoka, Y.; Obayashi, I.; Tsuji, T.; Ito, T.

    2017-12-01

    Serpentinization is a hydration process that forms serpentine minerals and magnetite within the oceanic lithosphere. Microfractures crosscut these minerals during the reactions, and the structures look like mesh textures. It has been known that the patterns of microfractures and the system evolutions are affected by the hydration reaction and fluid transport in fractures and within matrices. This study aims at quantifying the topological patterns of the mesh textures and understanding possible conditions of fluid transport and reaction during serpentinization in the oceanic lithosphere. Two-dimensional simulation by the distinct element method (DEM) generates fracture patterns due to serpentinization. The microfracture patterns are evaluated by persistent homology, which measures features of connected components of a topological space and encodes multi-scale topological features in the persistence diagrams. The persistence diagrams of the different mesh textures are evaluated by principal component analysis to bring out the strong patterns of persistence diagrams. This approach helps extract feature values of fracture patterns from high-dimensional and complex datasets.

  19. The risk of misclassifying subjects within principal component based asset index

    PubMed Central

    2014-01-01

    The asset index is often used as a measure of socioeconomic status in empirical research as an explanatory variable or to control confounding. Principal component analysis (PCA) is frequently used to create the asset index. We conducted a simulation study to explore how accurately the principal component based asset index reflects the study subjects’ actual poverty level, when the actual poverty level is generated by a simple factor analytic model. In the simulation study using the PC-based asset index, only 1% to 4% of subjects preserved their real position in a quintile scale of assets; between 44% and 82% of subjects were misclassified into the wrong asset quintile. If the PC-based asset index explained less than 30% of the total variance in the component variables, then we consistently observed more than 50% misclassification across quintiles of the index. The frequency of misclassification suggests that the PC-based asset index may not provide a valid measure of poverty level and should be used cautiously as a measure of socioeconomic status. PMID:24987446
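    The simulation design can be sketched as follows: a single latent factor generates asset indicators, the first principal component serves as the asset index, and misclassification is measured across quintiles. The loadings and sample size here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

# One latent poverty level drives 10 asset indicators (weak loadings + noise),
# mimicking the simple factor-analytic model described in the abstract.
n, p = 5000, 10
truth = rng.normal(size=n)
loadings = rng.uniform(0.2, 0.6, size=p)
assets = truth[:, None] * loadings + rng.normal(size=(n, p))

# First-principal-component asset index from standardized indicators
Z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
index = Z @ eigvecs[:, np.argmax(eigvals)]
if np.corrcoef(index, truth)[0, 1] < 0:       # fix the arbitrary PC sign
    index = -index

def quintile(v):
    """Quintile (0-4) of each value within its own distribution."""
    return np.searchsorted(np.quantile(v, [0.2, 0.4, 0.6, 0.8]), v)

# Fraction of subjects placed in the wrong quintile by the PC index
misclassified = float(np.mean(quintile(index) != quintile(truth)))
```

    With weak loadings, `misclassified` is substantial, illustrating the paper's point that the PC index can place many subjects in the wrong quintile.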

  20. Machine learning of frustrated classical spin models. I. Principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
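    The core idea (PCA applied directly to spin configurations separates phases without prior labels) can be sketched with synthetic XY configurations; a real study would feed in Monte Carlo samples rather than the Gaussian-angle stand-in used here:

```python
import numpy as np

rng = np.random.default_rng(6)

L = 16                                        # linear lattice size
n_sites = L * L

def configs(n, spread):
    """Synthetic XY configurations: spin angles with a given angular spread.
    (A stand-in for Monte Carlo samples, not a real XY simulation.)"""
    theta = rng.normal(0.0, spread, size=(n, n_sites))
    return np.hstack([np.cos(theta), np.sin(theta)])

# Ordered (low-temperature-like) and disordered (high-temperature-like) sets
X = np.vstack([configs(100, 0.1), configs(100, 10.0)])

centered = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc1 = U[:, 0] * s[0]                          # leading principal component

# The leading component separates the two "phases" without prior labels
separation = abs(pc1[:100].mean() - pc1[100:].mean())
```

    In the benchmark setting of the abstract, tracking such component scores as a function of temperature is what locates the transitions.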

Top